Linked by Thom Holwerda on Sun 19th Feb 2006 13:34 UTC
Intel "Many people in the industry assumed that Itanium had a low - and poor - profile among end users. That was what the folks at IDC assumed until recently, when they surveyed 500 members of their Enterprise Server Customer Panel. The results were somewhat surprising, they said. Not only was there a high level of awareness among the users - more than 80 percent knew of the platform - but that their intent to buy an Itanium system was fairly strong. About 24 percent of those polled said they had bought at least one Itanium system, though only 13 percent of non-HP users had done so. However, more than a third of all participants said they were highly likely to buy an Itanium system within the next 12 to 18 months."
Thread beginning with comment 97478
RE: IDC survey ... lol
by stare on Mon 20th Feb 2006 12:23 UTC in reply to "IDC survey ... lol"

What does surprise me is that Intel just won't give up on the agonizing platform.

I'd say x86 is the agonizing platform; sooner or later this 30-year-old architecture will hit the scalability and performance wall.

Plus, if you want more power you can just get more CPUs, and it really doesn't cost you that much more.

You can't keep adding more processors indefinitely; if that worked, we would still be running clusters of 486s.
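
To put a rough number on the diminishing returns, here is a minimal Amdahl's-law sketch; the 5% serial fraction and the CPU counts are assumptions picked purely for illustration, not figures from the survey or from anyone in this thread.

    # Amdahl's law: speedup is capped by the part of the work that stays serial.
    # The 5% serial fraction below is an illustrative assumption.
    def amdahl_speedup(n_cpus, serial_fraction=0.05):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

    for n in (1, 2, 4, 8, 16, 64, 256):
        print(f"{n:4d} CPUs -> {amdahl_speedup(n):5.1f}x speedup")
    # Even 256 CPUs top out near 1/0.05 = 20x, so "just add more CPUs"
    # stops paying off long before the processor count gets silly.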


RE[2]: IDC survey ... lol
by nimble on Mon 20th Feb 2006 12:52 in reply to "RE: IDC survey ... lol"

I'd say x86 is the agonizing platform; sooner or later this 30-year-old architecture will hit the scalability and performance wall.

The x86's demise has been predicted for at least 20 years, so you're gonna have to come up with some more convincing evidence for your assertion.

The x86 ISA has been extended and adapted so often and successfully that the scalability argument is just silly. And if x86 is so bad, why does nobody, including Intel themselves, manage to beat it (and not just for special applications) at the same transistor budget?

x86 may not be pretty, but it certainly does the job. And with its compact code it's actually quite well suited to today's requirements, where memory bandwidth and latency are much more important than the size of the instruction decoder.
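
As a back-of-the-envelope illustration of the code-density point (the byte counts and the hot-loop size below are rough assumptions, not measurements of any real program):

    # Denser encodings mean more code fits in the instruction cache, which
    # helps when memory bandwidth and latency are the bottleneck.
    x86_reg_mem_add  = 3      # a typical x86 "add reg, [mem]" is ~2-4 bytes
    fixed_width_pair = 4 + 4  # a fixed-width RISC ISA needs a load plus an add

    instructions = 10_000     # hypothetical hot-loop size
    print("variable-length footprint:", instructions * x86_reg_mem_add, "bytes")
    print("fixed-width footprint:    ", instructions * fixed_width_pair, "bytes")
    # Under these assumptions the hot code is roughly 2-3x smaller, meaning
    # fewer i-cache misses and less pressure on memory bandwidth.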

Besides, what exactly is an "agonizing platform"?


RE[3]: IDC survey ... lol
by magick on Mon 20th Feb 2006 14:33 in reply to "RE[2]: IDC survey ... lol"

The x86's demise has been predicted for at least 20 years, so you're gonna have to come up with some more convincing evidence for your assertion.

The end of lithographic technology was predicted too, and it still holds on, but that doesn't prove it won't reach its practical/physical limitations at some point. It WILL.

Technology and engineering will always find a way around, but that doesn't mean it's the best way. Transition costs and compatibility are really the key terms in this issue, so the industry always tends to postpone such gigantic transitions (LCD vs. CRT, etc.).

The x86 ISA has been extended and adapted so often and successfully that the scalability argument is just silly.

You think so? Just look at the figures showing real performance gains over the past decade. You'll be surprised how the curve flattens out due to different factors; x86 just happens to be one of them.

And if x86 is so bad, why does nobody, including Intel themselves, manage to beat it (and not just for special applications) at the same transistor budget?

And who could beat that mammoth application base, with all its software developers? Like I said, compatibility is really the key issue here.

x86 may not be pretty, but it certainly does the job. And with its compact code it's actually quite well suited to today's requirements, where memory bandwidth and latency are much more important than the size of the instruction decoder.

Now, you don't really have a clue what you're talking about, do you? Bandwidth always trades off against latency, and the instruction decoder is just a way to save bandwidth at the expense of latency. Furthermore, it limits the CPU's ability to process data by delaying and limiting the number of instructions fed to its pipelines. Out-of-order execution just makes things worse when a prediction misses (pipeline flush). It's not that simple, you know.
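
To put rough numbers on the misprediction cost, here is the standard effective-CPI estimate; every value below is an illustrative assumption, not a measurement of any particular CPU.

    # Effective CPI with branch mispredictions: base CPI plus the average
    # flush cost. All numbers here are illustrative assumptions.
    base_cpi        = 1.0    # ideal cycles per instruction, no stalls
    branch_fraction = 0.20   # roughly 1 in 5 instructions is a branch
    mispredict_rate = 0.05   # predictor is right 95% of the time
    flush_penalty   = 20     # cycles lost refilling a deep pipeline

    effective_cpi = base_cpi + branch_fraction * mispredict_rate * flush_penalty
    print("effective CPI:", effective_cpi)   # 1.0 + 0.2 * 0.05 * 20 = 1.2
    # A 20-cycle flush on just 1% of instructions already costs ~20% of
    # throughput, which is why pipeline flushes matter so much here.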

