“While the press brouhaha happily follows Apple about and co-conspirator Intel looks on, smugly hoping its tie-up with the much-loved computer maker will bring it some added kudos in its assault on the consumer electronics market, IBM, the giant ousted from the party, is getting on with business. Big Blue may have been dumped by Apple but its compensation is plentiful. Its Power chips form the heart of upcoming console offerings from Sony and Nintendo as well as the XBox from Microsoft. And let’s face it, the press might like Apple and the kids might dig iTunes but sales of a million or so computers annually is pretty small beer in the grand scheme of things.”
Try > 4 million. And those numbers will easily double in 2006.
Even if it’s >4 million, compare that to just the sales of the PS2, which has surpassed 100 million units in just 5 years. That’s 20 million units a year. There’s no reason to think that PS3 sales will be smaller, and even if they are for Sony, it’s probably because people are choosing one of the other consoles that also have IBM chips in them.
So even 8 million Apple machines in 2006 is smallish potatoes compared to the 3 console manufacturers; besides which, if there’s any truth to Apple being a pain in the ass for IBM to do business with, it seems like it wasn’t worth it.
IBM needs to tidy up its image now after being so lousy with Apple and the G5.
IBM was always late on delivery and couldn’t meet Apple’s demand for a notebook chip or for more power.
A percentage of Apple’s lost sales does reflect IBM’s inability to deliver G5 (PowerPC 970) chips on a timely basis.
After reading the article, this guy is most likely an MS lover, just plain jealous of Apple, and now targets Intel. How dumb! Where is good writing these days?
It’s a fact, read it all here:
Weren’t you there during the discussions when IBM convinced Apple to adopt the G5?
Mayer: In my previous job, I ran IBM’s semiconductor business. So I’ve seen both sides of the Apple story, because I sold the G5 to Steve (Jobs) the first time he wanted to move to Intel.
Five years ago?
Mayer: Yeah, that’s about right. So I sold the G5. First I told IBM that we needed to do it, and then I sold it to Apple that the G5 was good and it was going to be the follow-on of the PowerPC road map for the desktop. It worked pretty well. And then IBM decided not to take the G5 into the laptop and decided to really focus its chip business on the game consoles.
http://news.com.com/Is+the+PowerPC+due+for+a+second+wind/2008-1006_…
I’m assuming IBM dumped Apple because it wasn’t worth developing a cooler G5 processor without the Intel-created HDCP and other DRM schemes coming down the pipe.
I would like to know exactly how AMD plans to compete and whether their processors contain Intel-created HDCP.
As you know: no HDCP, no HDTV on PCs.
Cell isn’t squat but a single general-purpose processor plus 8 vector processors; it’s like combining a video card and a general-purpose CPU, and it’s hot as hell too. So forget laptops; Apple already shut the Cell out of its equation long ago.
True… but why did Apple not take responsibility into its own hands and co-fund development of the dual-core G4 from Freescale? Or even low-power G5 chips?
Apple knows it rules a fairly small share of the market. Why not actively take part in the preservation and development of the platform? Apple seems obsessed with superior software and smooth marketing, but whenever it needs what is really essential, namely power, it turns its head and starts mumbling about others (IBM) not doing their job properly…
It is a hard race to stay the fastest amongst the fast as a minority platform, but Apple tends to think that is just what Apple is… The G3/G4/G5 machines have always been touted as the fastest desktop machines, even though we all know this may not be true for quite a few reasons. Anyway, Apple now wants us to believe that their latest offerings are 4x faster than their previous ‘super duper faster than every PC’ offerings… It is just not plausible to me… Based on the fact that the Intel Core Duo 1.67 GHz, featuring two processor cores, is about 200 MHz faster than a single-core G4, I feel that a 60-70% speed increase over the G4 is what I would realistically expect of Intel…
Apple is no more high-end than Dell is low-end… They are both somewhat mainstream performers… Apple just seems to be a bit more full of itself…
True… but why did Apple not take responsibility into its own hands and co-fund development of the dual-core G4 from Freescale? Or even low-power G5 chips? Apple knows it rules a fairly small share of the market. Why not actively take part in the preservation and development of the platform?
Ah, you’re getting close to the truth. The mythical low-power, high-performance non-x86 processor is economically unviable. If IBM builds it, IBM loses money. If Apple pays someone to build it, Apple loses money.
Not quite true – IBM would at least have to come up with a business case. They *COULD* push the idea of blade servers running this ultra-low-power PowerPC processor, but at the same time, would it be worth their while?
Mind you, I thought it was funny how the G5 didn’t look that much worse than the G4 when Stevo’s “performance per watt” matrix was splattered onto the screen – so the potential is there to make it perform better.
That being said, the G4’s main problem isn’t necessarily the processor itself, it’s the anemic FSB; if that could be bumped up to 667 MHz, you would find the PowerBook’s performance numbers flying through the roof.
Now sure, in some benchmarks the G4 might still get its ass handed to it on a platter, but it would perform a lot better than it does now, given the current circumstances.
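For perspective, some back-of-envelope numbers (my own rough figures, treat them as approximations): the last PowerBook G4s ran a 167 MHz, 64-bit bus, which tops out around 167 MHz × 8 bytes ≈ 1.3 GB/s; the same 8-byte-wide bus at 667 MHz would move ≈ 5.3 GB/s, roughly four times the memory bandwidth feeding the same core.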
Based on the fact that the Intel Core Duo 1.67 GHz, featuring two processor cores, is about 200 MHz faster than a single-core G4, I feel that a 60-70% speed increase over the G4 is what I would realistically expect of Intel…
It really depends on the task. Not all tasks are purely arithmetic-bound. For those that are, the 3-4x figure isn’t unrealistic. The G4 is a PIII-class CPU. A P-M, on the other hand, has a much better branch predictor, a much faster bus (by a factor of four!), much more cache, macro-op fusion, a much better FPU, etc. The Core Duo’s IPC is just much higher than a PIII’s at the same clockspeed (by 70% or so), and thus much higher than a G4’s at the same clockspeed.
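Back-of-envelope, taking those figures as assumptions: 1.67 GHz / 1.5 GHz ≈ 1.1x on clock, times ~1.7x on IPC, gives roughly 1.9x per core; double that for well-threaded arithmetic code on two cores and you land at ≈ 3.8x, right in the claimed 3-4x territory.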
Apple’s benchmark results are only surprising to those who forget that the G4 is a late-1990s CPU. It’s 2006; processors have come a long way since then!
Whoa, you need to adjust your tinfoil hat. HDCP goes in the GPU, not the processor. And besides, any company can license that stuff.
AMD will continue to compete just like they do now, by shipping slightly improved copies of Intel’s technologies (e.g. AMD’s Presidio sounds like a copy of Intel’s La Grande).
Thanks for clearing that up, Wes.
It seems from the quotes above that the only one “sold” on PPC for desktops was Mayer himself, since he had to sell it on both ends; once he was out of the picture, that pretty much spelled the end of any further development.
The constant speculation about the Cell’s future dominance is getting pretty old. We haven’t even seen this thing in action and it’s supposedly going to decimate Intel?
I’ll believe it when I see it. Let’s see, what else was going to destroy Intel? PowerPC (nope), DEC Alpha (nope), Transmeta (nope), the mysterious Russian chip (nope), SPARC (nope), etc, etc. What is the Cell bringing to the table that hasn’t been done before?
And I dig iMacs. But I got a cool big screen (rather cheap too) and my PC keeps going, as long as it’s Linux (it freezes hard in XP), so…
Games is all I need 😉
Has this guy had his head in the sand for the past year? Apple is just kicking butt right now and its market share continues to grow. They can only go up, and despite what IBM keeps shovelling (and this guy), it was a huge black eye to lose Apple the way they did. It really sucked too, because I am a big PowerPC fan. It is a great architecture and it was what made an Apple different, in my opinion. Now that is gone, but amazingly Apple seems to have absorbed the body blows and picked up more steam. I feel sad for the PowerPC because now it will be relegated to the corners of the computing world. It will continue to make a lot of money, but for those of us who like computers more than game consoles it is a sad day. The only way to continue PowerPC software development will be on Linux, AIX, or other lesser-known alternatives. Having Apple on PowerPC was the only thing keeping it in the forefront of the news. But for this guy to suggest that Apple would not be a money maker for IBM in the future is just lunacy.
Has this guy had his head in the sand for the past year? Apple is just kicking butt right now and its market share continues to grow.
RTFA. Cell is used for a completely different market than ‘Apple’, and it’s a much larger market. IBM. Does. Not. Care. For. Apple.
Exactly. More media-centric, less office. Pushing streaming and multimedia capabilities to the edge. Exactly what I wished for: a media-centric computer. Last year I wrote a whole lot of office documents (2, to be exact, but still a 100% increase over the year before); lots of coding, though.
At the same time, the PS2’s Emotion Engine is basically a similar design (streaming data through the CPU, but smaller and without multiple SPEs). The Linux kit on the PS2 doesn’t perform unacceptably, so why would the Cell be as bad as some are saying? One camp glorifies the Cell, the other bashes it. While one side is talking about 20W per CPU (which would be a notebook wish come true), the other says it will need its own power plant. Same goes for heat. While 20W would mean a CPU without overheating problems, a power plant would mean an industrial-sized cooling fan.
All sides should just wait and see. Personally, for my needs, I see it as a god’s gift to me. But all that is just speculative papers and articles without a single grain of real-life testing.
As soon as the PS3 hits the street, I’m buying it. Same as the first workstation that shows up with Cell. Better to try than speculate; it is easier to decide if you actually try. All I can say is that while my personal history tells me I was never disappointed with IBM and Sony (AMD could be included here, as it has been getting better in recent years), I can say only bad things about Intel and Apple (both getting worse with time).
The Linux kit on the PS2 doesn’t perform unacceptably,
Define “unacceptable”. Provide numbers. Without a recent and heavy desktop environment Linux runs acceptably even on my old Pentium 120 box.
so why would the Cell be as bad as some are saying?
Linux will run alright on Cell, but nowhere near as fast as on Intel’s or AMD’s processors, or even the G5.
That’s because the Cell’s SPEs will go totally unused, unless software is specially rewritten (not just “optimised”) to take advantage of them.
Furthermore, the SPEs’ design is quite specialised, which means that anything that isn’t media codecs or number crunching is very difficult to adapt to it.
So that leaves the vast majority of programs with the PPE, a single, narrow, in-order core. Even though it runs at 3.2 GHz, I’d be very surprised if it performed better than a 1.5 GHz G4, never mind anything more recent.
While one side is talking about 20W per CPU (which would be a notebook wish come true), the other says it will need its own power plant. Same goes for heat.
It’s a big chip and it runs at 3.2 GHz. No way can they keep that to 20W. Just look at the power-hungry Xbox 360, using the same PPE cores and running at the same clockrate.
If IBM could do a 20W Cell, then why couldn’t they do a 3 GHz G5?
As soon as the PS3 hits the street, I’m buying it.
Go ahead, it’s perfect for that, because that’s what it was designed for.
Same as the first workstation that shows up with Cell.
You must have deep pockets. And what are you going to run on it?
There is a difference in our thinking. You want to believe it is no good; I plan to wait and see if it suits me. Since nothing more than speculation has been posted, how can you be so sure of what you say? The Cell is completely different from anything else, i.e. not comparable to any existing tech.
Define “unacceptable”. Provide numbers. Without a recent and heavy desktop environment Linux runs acceptably even on my old Pentium 120 box.
That’s what I’m saying. I don’t need a faster environment (I could easily live with one 5-10x slower than my current Opterons). I only need a speedup in some selected departments.
Furthermore, the SPEs’ design is quite specialised, which means that anything that isn’t media codecs or number crunching is very difficult to adapt to it.
Which makes it perfect for image manipulation (for example an optimized VIPS), 3D modelling, number crunching and audio/video production. All the departments where current computers suck.
If I look at where I’ve been feeling let down by performance, it sure isn’t Office, Internet or e-mail.
Well (talking about the Cell here), one thing that I think would be slower is compilers, but then again I need my apps to run fast, not to compile fast. My money comes from apps that run, not from apps that compile.
So that leaves the vast majority of programs with the PPE, a single, narrow, in-order core. Even though it runs at 3.2 GHz, I’d be very surprised if it performed better than a 1.5 GHz G4, never mind anything more recent.
Which is way more than acceptable for me. Did you ever run Linux on a G4? I did; if it weren’t for my special needs in some departments, I would still be more than happy running Linux on my good old G3 tower (it’s a long time since I had it, and it was my first Linux PPC desktop; since then I just can’t help but love the PPC/Linux combo).
It’s a big chip and it runs at 3.2 GHz. No way can they keep that to 20W. Just look at the power-hungry Xbox 360, using the same PPE cores and running at the same clockrate.
Big, yes. But as I said, some speculation and papers say 20W, some an industrial power plant. I don’t try to imagine the best here; it is you who tries to imagine the worst. I plan to wait and see reality.
Ok, here you’re completely wrong. The PPE cores on the Xbox 360 amount to 3 cores; the Cell has 1 plus 8 SPEs. But the PPE on the Xbox 360 is still completely different from the PPE on the Cell. The Xbox 360 can’t be taken as a measure for the Cell. And as I already said a few times now, it feels pretty stupid to speculate based on a few papers; shouldn’t we all just wait and see? The Cell is a completely new tech, comparable to nothing till now. As always, time will tell. The Xbox 360 is much closer to the 970, just as the Cell is much closer to Sony’s EE.
If IBM could do a 20W Cell, then why couldn’t they do a 3 GHz G5?
I don’t think it was a technical problem. The problem is that IBM focused its efforts on the Cell (the Cell contract was signed in 2000) and the G5 was never planned for notebooks. The Apple market is too small to be taken seriously here. IBM is just business, nothing more, and in business money is all that counts.
You must have deep pockets. And what are you going to run on it?
Not really, but you could say I don’t have a problem in this department. What will I be running? Linux, just as I’m running it now. Basically, I plan on trying to adapt my code to this platform, since it provides exactly what I miss with the current PC (high-load streaming is just the thing I need). PPC was better but unfortunately too expensive relative to the gain it provided (the performance difference was a bit too small, but as I always say, I /*me personally and for my own taste*/ prefer PPC over PC anytime).
p.s. Since the basic price of the PS3 should be $399, I don’t expect a basic Cell WS (which could simply consist of the same HW as the PS3 except for the HDD and more RAM) to go much over $3000 (but I could be wrong here; as I said, I expect that, I don’t demand it, and I’m certainly not saying that’s the price it will hit the streets at :), which seems more than acceptable for me to spend and see. As I said, on speculative papers it looks just like a god’s gift for me and my needs.
You want to believe it is no good
No, I’m prepared to believe that it’s very good indeed at what it’s designed for: graphics-heavy games and media processing. I just don’t like that whole uber-chip myth around it.
It feels pretty stupid to speculate based on a few papers; shouldn’t we all just wait and see?
IBM and Sony have had real Cells running for quite some time. So why don’t they provide the complete picture? One can only suspect that it isn’t all that impressive. After all, they’re happy to boast with theoretical FLOP numbers and provide selected bits of information that suit them.
But the PPE on the Xbox 360 is still completely different from the PPE on the Cell.
What leads you to believe that?
The Xbox 360 can’t be taken as a measure for the Cell.
Why not? Same manufacturer, same process, same clockrate, similar size, same kind of execution units. Only different ways of feeding those execution units. I’d be very interested to see how the Cell is supposed to save much power in those circumstances.
Basically, I plan on trying to adapt my code to this platform, since it provides exactly what I miss with the current PC (high-load streaming is just the thing I need).
Fair enough; if your requirements fit the peculiarities of the Cell and you don’t mind rewriting your code for it, there’s not much of an argument.
No, I’m prepared to believe that it’s very good indeed at what it’s designed for: graphics-heavy games and media processing.
Now, what else besides image/audio/video processing, number crunching and heavy graphics sucks on current architectures? This is what really interests me :) Because the ones I named suck majorly, and the Cell seems the right answer.
ANSWER PLEASE :)
I can name one that the Cell does not address: databases. I can’t see how databases (except some special case of streamed reports) would be boosted by the Cell.
I could still live with a P2-233 or a G3 if it weren’t for the ones named before.
I just don’t like that whole uber-chip myth around it.
Neither do I. But the fact is that for the last few years any computer has been decent for everything but that, with nothing but linear progress where things really suck. As soon as computers get 2x faster, people have better printers, cameras, etc., and files get bigger by a larger factor than CPU speed progresses. Same work (but bigger customer demands), so more speed actually results in spending more time on the same job than before. This didn’t used to apply to video production, but with the move to HDTV even that goes down the drain. The Cell seems to be a booster in those departments.
IBM and Sony have had real Cells running for quite some time. So why don’t they provide the complete picture? One can only suspect that it isn’t all that impressive. After all, they’re happy to boast with theoretical FLOP numbers and provide selected bits of information that suit them.
Actually, quite a few real-life demonstrations have already been presented; unfortunately, none was covered (or only very badly). This site covers the Cell quite well, but it seems the site has crashed and lost most of its data.
http://www.cell-processor.net/news.php
What leads you to believe that?
Tech papers. Read them at IBM’s site. Not the same arch.
Why not? Same manufacturer, same process, same clockrate, similar size, same kind of execution units. Only different ways of feeding those execution units. I’d be very interested to see how the Cell is supposed to save much power in those circumstances.
Because they are completely different. Cell is something else. There are a few interviews on developerWorks where they discussed this. The Xbox 360’s PPE is derived from the current arch; the Cell’s is redesigned completely. At least that’s how the interview went. They even discussed why this change was made.
Try googling for +developerWorks +Cell +interview
Here is IBM’s Cell coverage:
http://www-128.ibm.com/developerworks/power/cell/community.html
Fair enough; if your requirements fit the peculiarities of the Cell and you don’t mind rewriting your code for it, there’s not much of an argument.
You call them “peculiarities”, I call them “candy” or “money for me” :) Fair enough?
btw. If I can show my customers the gains the papers speculate about (let’s say AFTER they have been proven in real life for me), I won’t have trouble selling that thing. And that is a big speculative IF.
Now, what else besides image/audio/video processing, number crunching and heavy graphics sucks on current architectures?
Compiling, boot-up, application startup, virus scanners, game AI. All integer-heavy. (And no, boot-up and application startup times aren’t due to slow disk access alone, but also script interpreters, dynamic linking, process creation.)
Not only does the Cell not improve things there, it actually goes backwards by a significant amount.
I do take your point about current performance being quite enough for most things though (including simple video editing).
As for the Cell addressing the most pressing bottleneck, you’re probably right there. I’m just not sure it addresses it in the right way.
Having to divide your algorithms in such a way that they fit the SPUs with their local memory, and having to manually deal with communication between them, seems like an awful lot of extra programming effort; we’ll have to wait and see whether it’s actually worth it.
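To make that effort concrete, here is a minimal sketch of the classic SPE pattern: double-buffered DMA streaming of a large array through the 256 KB local store. It is written against the Cell SDK’s spu_mfcio.h intrinsics (mfc_get/mfc_put and the tag-status calls) as I understand them; treat the exact signatures as assumptions, not tested code. The point is how much explicit buffer and tag management even this trivial kernel needs.

/* Sketch: double-buffered streaming on one SPE. Each chunk is DMA'd
 * from main memory into local store, processed, and written back,
 * while the next chunk's DMA is already in flight.
 * Assumes nbytes is a multiple of CHUNK and ea is 128-byte aligned. */
#include <spu_mfcio.h>

#define CHUNK 16384                        /* bytes per DMA transfer */

static char buf[2][CHUNK] __attribute__((aligned(128)));

static void process(char *p, int n)        /* stand-in for the real work */
{
    for (int i = 0; i < n; i++)
        p[i] ^= 0x5A;
}

void stream(unsigned long long ea, unsigned long long nbytes)
{
    int cur = 0;
    mfc_get(buf[0], ea, CHUNK, 0, 0, 0);   /* prefetch first chunk, tag 0 */

    for (unsigned long long off = 0; off < nbytes; off += CHUNK) {
        int nxt = cur ^ 1;
        if (off + CHUNK < nbytes) {
            mfc_write_tag_mask(1 << nxt);  /* buf[nxt]'s old write-back done? */
            mfc_read_tag_status_all();
            mfc_get(buf[nxt], ea + off + CHUNK, CHUNK, nxt, 0, 0);
        }
        mfc_write_tag_mask(1 << cur);      /* wait for current chunk's DMA */
        mfc_read_tag_status_all();

        process(buf[cur], CHUNK);
        mfc_put(buf[cur], ea + off, CHUNK, cur, 0, 0);  /* write back */
        cur = nxt;
    }
    mfc_write_tag_mask(3);                 /* drain outstanding write-backs */
    mfc_read_tag_status_all();
}

On a conventional cached CPU the same loop is three lines; every extra line here is exactly the partitioning and communication overhead being discussed.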
Also, there are some scalability problems. Being in-order, changes to the pipelines require rescheduling of the code to get best performance. If the SPEs get more local memory, existing programs don’t take advantage of it without changes. And it’s difficult to virtualise the SPEs because there’s so much local context that needs to be saved.
I can name one that the Cell does not address: databases.
Yep. And web servers, especially when server-side scripting and Java are involved.
Because they are completely different. Cell is something else.
Yes, the SPU approach is completely different from the Xenon’s symmetric multicore approach, but the PPE is largely the same. Have a look at this detailed article on arstechnica, apparently they only differ in their vector units.
http://arstechnica.com/articles/paedia/cpu/xbox360-2.ars/3
Try googling for +developerWorks +Cell +interview
Found it, it doesn’t mention the XBox processor at all and doesn’t rule out that Cell’s PPE was reused there.
Compiling, boot-up, application startup, virus scanners, game AI. All integer-heavy. (And no, boot-up and application startup times aren’t due to slow disk access alone, but also script interpreters, dynamic linking, process creation.)
Here I wave my killer comment :) I mean integer being slow here. Completely agree on the others; the SPEs are not designed for that. I think I already mentioned that as a bad feature.
Since neither of us has benchmarked that thing, no one can say how slow/fast integer calculations really are.
There are two kinds of slow:
1. Against other optimized functions
2. Against the current arch
Case 1 could still be ultra fast; case 2 would be slow. It would be unfair to just pick one of them until real-life tests show results.
Funny, your comments sound like you know which one. Even the article you posted as relevant is full of “probably”, “I think”, “I suspect”. Not even one fact.
I only ever want to achieve one thing: for people to stop bashing/hyping everything. You can’t know until you test it. I hate those who spread hype just as much as those who bash without actual figures.
Having to divide your algorithms in such a way that they fit the SPUs with their local memory, and having to manually deal with communication between them, seems like an awful lot of extra programming effort; we’ll have to wait and see whether it’s actually worth it.
Ok, a bit selfish of me; I don’t see it (but then again I haven’t tested it yet).
There are more ways of dividing an algorithm /*at least for the ones I need*/. A few are quite simple, but I will have to test them.
Also, there are some scalability problems. Being in-order, changes to the pipelines require rescheduling of the code to get best performance. If the SPEs get more local memory, existing programs don’t take advantage of it without changes. And it’s difficult to virtualise the SPEs because there’s so much local context that needs to be saved.
The PC just has the other side of the scalability problem: speed progressing more slowly than the needs of ever-bigger multimedia tasks.
It is just another Blu-ray vs. HD-DVD battle.
One addresses one type of problem but starts with already-limited media and no way out; the other doesn’t address those problems but has no such limitations in its media.
I guess time will tell best.
Found it, it doesn’t mention the XBox processor at all and doesn’t rule out that Cell’s PPE was reused there.
Actually, there are a lot of interviews; one does.
EXACTLY! THANK YOU! Apple and Intel can skip through the daisies holding hands and pretend it is them that smell sweet, but Apple is just another computer now…
Apple isn’t in the same league as Dell or Gateway, because Apple is also a software company. They make an OS and media-rich software. Plus they make money off online services: .Mac, iTunes, the iPod. Should I say more?
Apple always was just another computer. There is not, and never was, anything really special about Apples. The OS is based on UNIX, and their outlook on GUI design is what makes it so attractive. Check out the performance review at Anandtech and you will find that its performance is not impressive compared to Linux.
The Anand review you cite was discredited. Which means the performance issue is still an open question. I expect Apple to cream Windows on equal hardware, but Linux will be much closer.
The Anand review you cite was discredited
When did that happen? I hope you won’t talk about how TCP_XXX is not optimized.
I expect Apple to cream Windows on equal hardware
That will actually just stay a wet dream. A microkernel loses time in its internal message stack, where modules communicate with one another; Windows doesn’t have this drawback, because the Windows kernel is not a microkernel. The second thing is Spotlight, which is enabled by default. The damn thing consumes RAM like crazy. In just 4 days it brought a Mac mini with 512MB to a completely unusable state where 5-10 seconds were needed to switch between two windows. It’s a true “please restart” machine with a max uptime of 3-4 days. Windows is still waiting to get the same brake with WinFS in Vista (gaining speed only from the lack of functionality).
The only shining option for OS X is the Core* functionality (CoreImage, CoreAudio and friends). Software that uses it could surpass the Windows equivalents, again because Windows before Vista lacks this. But then again, not a lot of software uses it.
btw. If anyone knows: is there any way to turn Spotlight off? I don’t bother with it myself (got 2GB of RAM, and I avoid it on my G5), but that Mac mini of a friend of mine would need that treatment desperately.
No bother: just found it on the first Google try.
http://www.tuaw.com/2005/05/13/tiger-tips-hate-spotlight-turn-it-of…
http://www.fixamacsoftware.com/software/spot/
The Anand review you cite was discredited. Which means the performance issue is still an open question. I expect Apple to cream Windows on equal hardware, but Linux will be much closer.
Who, why, what, when, where?
URLs, please.
The article could have gone to greater lengths to back up its position but it didn’t. I am disappointed.
The biggest DRM champ and the second-biggest DRM champ teaming up. Yuk, yuk, yuk! Well, at least they will both be together in one bunch, so the big one can get ’em both with one swat and we can avoid anything with either name on it!
The wattage of any modern CPU depends heavily on the chosen voltage and frequency.
You can make any G5 run at 20W just by clocking it at 1 GHz.
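For the record, the standard first-order approximation for dynamic power is P ≈ C·V²·f, so voltage and frequency dominate. A throwaway C sketch with purely illustrative numbers (not measured G5 figures):

/* Dynamic power scaling: P' = P0 * (V'/V0)^2 * (f'/f0).
 * All inputs below are made-up illustration values. */
#include <stdio.h>

static double scaled_power(double p0, double v_ratio, double f_ratio)
{
    return p0 * v_ratio * v_ratio * f_ratio;
}

int main(void)
{
    /* e.g. a hypothetical 80 W part at 2.7 GHz, underclocked to 1 GHz
       with a 20% lower core voltage */
    printf("estimated power: %.1f W\n",
           scaled_power(80.0, 0.8, 1.0 / 2.7));   /* prints ~19.0 W */
    return 0;
}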
Aside of that, some info about the Cell power consumption can be found in this document:
ISSCC-07.4-Cell_SPU.PDF (google for this string).
The chart is not complete, but you can easily deduce that a Cell running at 2 GHz fits laptop computer requirements.
Also, adding/removing SPUs in the Cell design is a cheap procedure, as the chosen bus is a ring.
Producing a reduced Cell should be quite easy.
The wattage of any modern CPU depends heavily on the chosen voltage and frequency.
You can make any G5 run at 20W just by clocking it at 1 GHz.
True enough, yet in that case you’re better off with a processor that was actually designed for a lower clockrate.
The main consideration here is pipeline length. The longer the pipeline, the higher the penalty in case of a pipeline stall due to dependencies in the code. The tradeoff here is that the higher clockrate hopefully makes up for that.
So if you run a processor like the G5 or Cell at lower clockrates, you pay the price for a longer pipeline without reaping the rewards. (That’s the main reason why the Pentium 4 never fulfilled its promise: due to power trouble Intel could never run it at the clockrates it was designed for.)
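To put rough numbers on that tradeoff, here is a toy CPI model (all figures are illustrative assumptions, not measurements): effective CPI = base CPI + branch frequency × mispredict rate × flush penalty, where the penalty grows with pipeline depth.

/* Toy model: longer pipelines pay more per branch mispredict. */
#include <stdio.h>

int main(void)
{
    double base_cpi = 1.0;    /* idealised throughput                 */
    double branches = 0.20;   /* fraction of instructions that branch */
    double misrate  = 0.10;   /* branch mispredict rate               */
    double depths[] = { 7.0, 12.0, 20.0 };  /* short pipe vs G5/Cell-like */

    for (int i = 0; i < 3; i++) {
        double cpi = base_cpi + branches * misrate * depths[i];
        printf("depth %2.0f: effective CPI %.2f\n", depths[i], cpi);
    }
    return 0;   /* depth 7 -> 1.14, depth 20 -> 1.40: ~23% slower at equal clock */
}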
Aside of that, some info about the Cell power consumption can be found in this document:
ISSCC-07.4-Cell_SPU.PDF (google for this string).
Interesting. But where are the figures for the whole chip, including cache and memory controller?
The chart is not complete, but you can easily deduce that a Cell running at 2 GHz fits laptop computer requirements.
Maybe, but with its 20-stage in-order pipeline it would suck very badly compared to a Pentium M (never mind a Core Duo) running at the same clockrate. I’d guess the Cell (its PPE, to be more precise) would get no more than a quarter of the Pentium M’s SPECint score.
Also, adding/removing SPUs in the Cell design is a cheap procedure, as the chosen bus is a ring.
And it wouldn’t hurt, because they’d go largely unused in a laptop anyway.
But if you just want a cheap, power-saving chip and you don’t need too much performance you might as well use the G4.
nimble, comparing a potential Cell laptop at 2 GHz with an Intel/AMD one is uninteresting on your terms.
It’s uninteresting because it seems you are only interested in raw performance on existing code/applications. In that case, yes, a Cell chip will be slow.
So what? I, like many others, am interested in its potential as a computer, not as a Windows box. I am not married to the current programming paradigms. In fact, I am pretty sure this paradigm is dying as cheap massively parallel processing starts raising its head.
And talking about parallel processing, whatever AMD and Intel offer you based on the old x86 crap is going to be uncompetitive. Any company can place a fully working CPU core in the space Intel needs just for the x86 decoder.