“Dell Computer Corp. has discontinued its Itanium-based workstation due to weak demand, marking another setback in Intel Corp.’s efforts to promote its 64-bit chip released eight months ago.” Read the rest of the report at ExtremeTech. Our Take: It is astonishing (and truly disappointing) to see a super-chip, a real wonder of CPU design like Itanium, fail to sell well, mostly because sysadmins do not want to give up on x86. I think I now understand better why software companies choose to support legacy code, even if it bloats their product. It seems to be a necessary condition for commercial success, no matter what we geeks say about clean designs and speed. Let’s see what McKinley, the next Intel 64-bit CPU, can do in the marketplace. Itanium’s failure so far has also pushed Intel toward competing with the AMD Hammer in the x86-64 arena.
“It is astonishing (and truly disappointing) to see a super-chip, a real wonder of CPU design like Itanium, fail to sell well.”
There is absolutely nothing super, spectacular, or wonderful about Itanium. Itanium is nothing more than Intel’s extremely late entry into 64-bit computing, a market that has been dominated by Sun, Alpha, and IBM’s RS/6000 series for many years.
It doesn’t surprise me at all that Itanium-based workstations didn’t take off. The average person has no need for an Itanium-based workstation. (Plus, how much software is there for Itanium? Not much.)
As for the high-end server arena, the enterprises that need that kind of server power are already running Sun, Alpha, or IBM. They are not going to switch over to Intel. It’s far too expensive: new hardware, new software, etc.
My take: Itanium entered the 64-bit arena way too late.
There is absolutely nothing super, spectacular, or wonderful about Itanium.
No offense, but I take it you’ve no idea about CPUs…
As I said, let’s see what McKinley will do now.
No offense, but I take it you’ve no idea about CPUs…
And you do?? Hahahaha…
…and just sell the Alpha they bought from Compaq… at least there is a market for the damn thing.
The Alpha was an awesome chip. Unfortunately, once Microsoft stopped developing WinNT on it, everyone else dropped it too. Digital used to have a great module called FX!32 that would allow you to run x86 binaries in emulation. The next time you ran the binary, it would translate as much of the program into native calls as possible. Supposedly it was slicker than slick, but who knows where it sits now.
We’ve been running an Alpha with VMS for years – incredible chip, absolutely incredible.
I do know a little about CPUs, and after reading through the product brief on the Itanium I don’t really see any major technological jumps. EPIC sounds really cool, but it is just another small step. Operating on multiple instructions at one time with different processor units has been possible for some time now; those units just have their own memory units. The other small advancement in EPIC is the branch and cache “hints” added at compile time. I guess that is cool. I guess they did the math to show that the clock time spent processing the “hints” is less than the time spent waiting on a cache miss. I also guess that is why they feel they can make it on, at best, half the cache of the last Alpha chip.
Also, only 44 bits of memory addressing? That’s a little behind the rest of the pack, who do a full 64 bits, IIRC.
Remarkable for the desktop world? Yes. For the 64 bit server world? No.
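To make the cache-“hint” idea concrete, here is a minimal sketch using GCC’s non-standard __builtin_prefetch extension (assumptions: a GCC-compatible compiler, and an illustrative prefetch distance of 16 that nobody has tuned). On IA-64 the compiler can lower the hint to an lfetch instruction; on targets with no prefetch support it compiles to nothing at all:

    #include <stddef.h>

    /* Sum an array, hinting the cache a few iterations ahead. */
    long sum(const long *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; i++) {
            /* Hint: we will want a[i + 16] soon, so start the load
               now and (hopefully) hide the memory latency. */
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16], /* rw */ 0, /* locality */ 1);
            total += a[i];
        }
        return total;
    }

Note that this also shows the trade-off described above: the prefetch itself occupies an issue slot, so the hint only pays off when the miss it avoids would have cost more than the slot did.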
Landar_c
Intel has been trying to sell half-assed products at high prices; they can surely do better! Thanks to all the people who decided against buying such overly expensive stuff!! Intel should do a decent job and stop trying to rip people off like crazy. Good luck to AMD. Hopefully there will be new and improved chipsets for their CPUs.
thanks!
Intel is a known liar. They always cripple their CPUs to make more money or to force power users to buy Xeon.
>I do know a little about CPUs, and after reading through the product brief on the Itanium I don’t really see any major technological jumps.
The great thing about the Itanium architecture is that it can execute far more instructions per cycle than any other CPU (thanks to its VLIW, “high parallelism” design, compared with the existing CISC/RISC ones), and supposedly it can do so at higher clock speeds than CISC/RISC. But it doesn’t come without its shortcomings (the compiler has to do all the hard work instead, so the optimizations now live at the software level rather than in hardware; the high price; etc.), which might make it commercially unviable. That doesn’t stop it from being a spectacular chip.
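To illustrate what “the compiler does the hard work” means, here is a toy C function (plain C, not real IA-64 code; the comments describe what an EPIC compiler would do with it under the hood):

    /* Three operations with no shared inputs or outputs: an EPIC
       compiler can prove them independent and pack them into one
       issue group, so the CPU runs all three in the same cycle
       with no out-of-order hardware at all. */
    long demo(long x, long y, long p, long q, const long *arr)
    {
        long a = x + y;     /* integer unit */
        long b = p - q;     /* integer unit */
        long c = arr[0];    /* memory unit  */

        /* This last operation depends on all three results, so the
           compiler must mark a dependency boundary (a "stop") before
           it. On a CISC/RISC out-of-order core, the hardware would
           discover the same dependence at run time instead. */
        return a + b + c;
    }

That is the whole bet in one function: if the compiler finds the parallelism, the silicon saved on scheduling hardware goes into more functional units; if it cannot, those units idle.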
Shouldn’t the fact that the software does all the work make things cheaper rather than more expensive??? Something like the case of soft-modems????
Dell is used to this, after all; remember the things Dell had to do trying to sell that awful RDRAM.
Anyone remember that Tom’s Hardware article, “Dissecting Rambus”? I think the occasion deserves remembering it, especially this page: http://www4.tomshardware.com/mainboard/00q1/000315/rambus-04.html . Love will make you do things that you know are wrong:
“Dell is perhaps the world’s largest manufacturer of personal computers. Dell is also widely considered closely wedded to Intel. On the computer titan’s site, the speed of the RDRAM used in its systems is difficult to find and, for the typical consumer, difficult to interpret. After drilling down to the RDRAM specifications for the Dell XPS B, the computer giant provides information that is not only misleading, but also simply false. Dell boasts that RDRAM provides a bandwidth of 1.6 GB/sec (http://www.dell.com/html/us/compt/dimen/memory.htm), but elsewhere on this page the frequency of the RDRAM used in the system is stated at 356 MHz. As already explained in this article, this indicates in an indirect way that the system is equipped with the slower PC700 RDRAM, which will never reach a bandwidth of 1.6 GB/sec.
Also on this page Dell states correctly that the bus width for its RDRAM systems is 16 bits, but IT ALSO STATES THAT SDRAM’S BUS WIDTH IS ONLY 8 BITS when, as you already know, SDRAM has a 64-bit bus. Incorrect at best, misleading at worst, Dell should be harshly criticized for providing this disservice to its customers. In light of other misinformation currently surrounding RDRAM, Dell’s actions are cast in an unfavorable light.”
Sounds like Intel is now officially a victim of its own devices. For years, x86 and Windows compatibility/market share have protected Intel and Microsoft from competitors.
Now compatibility and a lack of software are hurting Intel’s ability to migrate beyond x86. No one wants to give up their existing investments in Sun, IBM, etc. The AMD 64-bit strategy will likely have better results.
Ohhh… so Intel created a high-powered Transmeta ripoff that sucks up more power and is 10 times as expensive.
hmmm
>Ohhh… so Intel created a high-powered Transmeta ripoff that sucks up more power and is 10 times as expensive.
Or Transmeta ripped off Intel, as Intel had been working with HP on Itanium since 1989. It sucks more power because it is a server CPU, not one targeting laptops, and it is more expensive for all of the above reasons (plus the ten years of labor they have to pay off). Get your facts straight before you start comparing mobile-centric CPUs to scalable servers (up to 512 CPUs in the same machine).
I worked on Windows NT compilers for the PowerPC and Alpha platforms, and watched them both disappear. Although the failure of those is due in part to lack of visible support from IBM and Compaq respectively, it is also true that it is hard for any non-x86 platform to compete – especially on the desktop.
Few people are willing to buy a new platform that runs their existing applications slowly if at all, especially if there’s no track record to show that new applications will be ported. This is less true for server applications than desktop ones, but it still applies.
To ensure success, an alternative platform will have to offer performance comparable to contemporary x86 processors on legacy code and superior performance on native code. I see only two ways to do that: an x86 derivative like x86-64, or an architecture optimized for x86 interpretation/recompilation like the one Transmeta uses in its Crusoe processors.
Perhaps Intel has the clout, money and will to succeed, but that is far from assured.
Eugenia, it may be your forum, but it’s still unwise to assume that your readers are stupid and try to bluff them. Some of us studied CPU architecture, a few of us actually used the Itanium, and many of us have read the (abysmal) benchmark results and can do price/performance metrics in our heads.
A nine-wheeled, inflatable, two-person vehicle that travels at four miles an hour and runs on a patented new fuel might be a clever technological achievement (as Itanium is), but it’s not astonishing when it fails in the market. As nasty as the IA32 CPUs are, they are fast, and there’s open competition.
Intel and AMD in competition to produce cheap x86-64 CPUs is a dream come true for big iron users, even if manufacturers would have preferred the high margins and exclusive partnerships of IA64. There’s a >50% chance now that McKinley will flop and take IA64 with it forever.
It’s a lot of wasted effort for Microsoft and Red Hat, but the 64-bit market was already too crowded and Intel’s entry just didn’t bring anything that was really needed. AMD’s design does have something new, a real migration route from cheap and cheerful x86.
I have seen the Itanium docs, and the chip is technically awesome. That does not take anything away from the competition (PowerPC, etc.), but Itanium is something I would like to have.
So, why do I *NOT* have an Itanium? Simple. Money does not grow on trees. So a new technology, even if cool, is no reason on its own for me to go running out and buy it. I have a PC that works. However, when it’s time to buy a new PC, I’ll check out Itanium.
I’m an AMD fan, but credit where credit is due. Intel has made mistakes, but Itanium looks good. AMD’s Hammer looks nice too, and I’m not going to delve into which one is better just yet.
So, Dell decided not to sell Itaniums. Does that say the technology is bad? Did ALL of you go buy a new CPU *that you would like to have* EVERY time one came out? Of course not.
It’s amazing how technological value is judged in terms of sales. Remember, present != future, so the Itanium could very well be among the top future contenders despite the current complaints. Weak sales now do not mean weak sales later.
Plus, there is no real excuse for it not being a contender from the technical standpoint. Both Java and .NET are architectures that are somewhat CPU-independent (and moving toward being fully so), and thus the future is about CPUs becoming just another commodity. True, it won’t be exactly like that, but it will be pretty close.
So, TIME is needed, and the marketing must not weaken either. (Because marketing is an additional factor, not the only one!)
I’m sorry, but VLIW has been around for a long time and was dropped in favor of the superscalar architectures used today, like IBM’s Power, Sun’s UltraSparc, and other “real” CPUs. The problem with EPIC is that it is just another name for VLIW, which gained a rather bad reputation because the advances in compiler technology it depends on were not in sight back when VLIW was hip, and they still are not. After all, Intel intended to reach 4 IPC on Itanium but reached a meager 0.7. Without a breakthrough in compiler tech this will not change; so although McKinley will certainly be faster, three quarters of it will still be idling. Not so cool!
Looks to me like the real competition for Itanium in the 64-bit server CPU market is Sun’s SPARC. I am no CPU expert, but I can understand the articles I read about each architecture, and the Itanium looks like a far better chip than the SPARC. Will it get a foothold in the large server market where Sun already has a big presence? There is a good chance Intel won’t, even with a better chip. Sun has done well in the server market even when it was at the very bottom of price/performance. Intel has always had trouble gaining credibility here. The i860/i960 and the Pentium Pro were promoted by Intel as next-generation server CPUs, but they sold poorly.
Sun, as you know, has also gone x86, with AMD.
Hey Intel, can’t touch this!!! @+j+@
Wow. This is a pretty heated debate! 🙂
As someone who has worked very closely with Itanium chips (while at Red Hat, we were the first OS to boot on Merced/Itanium) and on Transmeta’s Crusoe (as an employee of Transmeta) I think I can offer some nice balance to this argument.
First of all, as Eugenia pointed out, there are two very distinct markets being targeted by Crusoe and Merced. With that in mind, there are MAJOR architectural differences caused by this fact, as well as design differences that this fact entails (multi-CPU processing vs. dynamic voltage/frequency shifting, etc.). You need to try to stick to comparing apples to apples in whatever domain you’re choosing to look at: EPIC/VLIW vs. CISC for CPU cores, Tualatin vs. Crusoe for mobile CPUs, etc.
Secondly, there is, and always will be, a debate about which “type” of processing is superior: VLIW/EPIC, RISC, CISC, CISC+SIMD, or RISC+SIMD (with SIMD in its generic single-instruction-multiple-data sense, not Intel’s marketing definition). That debate will never be resolved. People can test things at the scientific level and prove RISC or VLIW superior to CISC, but a real-world test might refute it, even with *EXACT* testing standards. The bottom line is that everyone has their own preference, from a technical and/or marketing perspective, and you’re probably not going to convert someone’s CPU religion easily.
Finally, I think the best way to look at things of this type is to see how well they did what they set out to do. BeOS, OS/2, and the Alpha CPU are all *EXCELLENT* technologies (I’ll refrain from labeling/categorizing Merced) that just didn’t survive in the market for whatever reason. A dual P4-Xeon (Foster) server is an *EXCELLENT* server because it is fast, cheap, and available. It may not be the most technically savvy or sexy solution, but it gets the job done. For Dell, Itanium workstations were not getting the job done. With the rise in popularity of x86-powered PC Linux workstations, all workstation vendors have felt the pinch (Sun, SGI, HP, IBM), and they are all having to react.
I’m by no means up to the level of most of you regarding CPU architecture and whatnot, and I’m only throwing this out as a possibility, mainly for your opinions…
…with new and “clean” technologies (software in particular), there doesn’t seem to be any happy medium, because we’ve dug ourselves into holes where we DO need to support legacy software and hardware. The original posting by Eugenia mentions something along these lines:
“It seems to be a necessary condition for commercial success, no matter what we geeks say about clean designs and speed.”
I’m asking you gurus if this is possible (in terms of hardware): Is it possible to have a dual-processor system with two different chips (such as the x86 and McKinley)? Granted, you’d have to have two separate controller chips, buses, etc., not to mention an OS that takes advantage of this, kicking into 64-bit mode when applications written to take advantage of it call for it. This way, the x86 could eventually be phased out in a few years.
I realize that the cost for design and implementation would most likely eclipse the benefits (at least at this point), but could this be a solution to get us out of this “rut”?
Curious,
Mike
Strange as it may seem, IT is very conservative and afraid of technological changes. People always stick with what they know (Unix, x86, C) and are afraid to pick up new ideas or concepts, no matter how promising they may be. *sigh*
VLIW has been around as a concept before Intel and Transmeta. It was really pioneered by HP, as I recall, and their engineers helped on both the Merced and McKinley project. It’s a fascinating concept, but I can’t help but remember all the circa-1994 predictions of how RISC chips were going to leave CISC in the dust, CISC was a dead-end, and everyone would be using PowerPC chips or successors to Intel’s i960 RISC line. Needless to say, the death of CISC seems to have been greatly exaggerated.
I’m not surprised by this, either. Well before Merced was released as the Itanium, analysts were suggesting that McKinley was the one to really wait for. Itanium servers may have good performance, but not good price-to-performance.
Everyone knows Dell is an Intel whore.
With the release of AMD’s Hammer, does anyone think Dell will remain an Intel whore? If Dell does not transition, but Compaq, IBM, HP, et cetera do, wouldn’t Dell be at a distinct disadvantage?
“No offense, but I take it you’ve no idea about CPUs…
As I said, let’s see what McKinley will do now.”
I know enough to know that there is little exciting technology in Itanium. There is simply nothing remarkable about this chip.
Part of this is Intel’s own fault. Intel marketed Itanium as vaporware for so many years that when the actual product finally shipped, no one cared. There was simply nothing to get excited about. It’s the hype phenomenon. Intel hyped Itanium so much that when it actually shipped, I was like, “what’s so great about this?” I suspect a lot of other people felt the same way.
Eugenia,
BTW, if Itanium is such a great chip, then how come the benchmarks are nothing to write home about? I couldn’t care less about the technological design behind the chip. I only care about how it performs. And the benchmarks are nothing spectacular.
You might be able to get away with saying that the technology behind the chip is impressive. But why should I care about that if the performance of the chip itself is not all that exciting?
Dual (different) processors were reasonably common at one time. Many Apple II users had (Z-80) CP/M boards, and in the early ’80s the 8088 and Z-80 were often paired so CP/M and DOS programs could be run on the same computer.
The problem is price/performance. If you have two different processors, one or the other will probably be dormant at any one time because the software you’re using was compiled for the other one. You end up having a more expensive computer that doesn’t do much more.
This is especially true because the economies of scale of x86 processors encourage the most advanced process technologies. Using similar process technology on a less popular architecture is very expensive, so alternative chips tend to be either expensive or about the same speed as a comparable x86 chip. Adding an alternative processor to a PC just doesn’t add up.
When most people think of the Titanic, they think of a large sinking ship. However, the Titanic was the nicest, largest, coolest ship ever built. The problem with its size was that it was a pain in the butt to turn, hence the iceberg-hitting, sinking incident.
The Itanium is a cool design. Most people like the theoretical part of it. The problem is that the design is too big. It is too slow. That is why designs which are simple, and are found in cars, microwaves, and TVs, are also found in high-end servers (PowerPC, MIPS).
On a more technical note: do you know what a cache miss does to one of these things? That’s why only its floating point is even slightly competitive. This is why it has the biggest on-chip cache known to man and uses a monstrously huge die. Plus, what does EPIC have over out-of-order execution? Not a lot.
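To put a rough number on the cache-miss point (ballpark assumptions for the era, not measured figures): take a 733 MHz Itanium and roughly 150 ns for a round trip to main memory. One miss then costs about 0.733 cycles/ns × 150 ns ≈ 110 stall cycles, and because the core is in-order and up to 6 instructions wide, that single miss can forfeit on the order of 660 issue slots, since nothing can be reordered around the waiting load. Which is exactly why the giant cache is less a luxury than a survival requirement.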
Speed, not compatibility, is why Sun Solaris and IBM AIX dropped support for the Itanium after they had already ported their OSes to it. You can get an unsupported developer preview copy of either if you try, but there is no active development being done anymore. This is also why Dell couldn’t sell one to save their lives: nobody will pay that kind of money for that level of performance.
Maybe someday Intel/HP will get their act together. I mean, with the Intel guys, the Compaq Alpha guys, and the HP PA-RISC guys, they should do OK. Heck, they should rename the Alpha to Itanium III and start with that. The Alpha was an example of a really good chip that didn’t make it because of compatibility (and marketing). Maybe they should just buy AMD, IBM, and MIPS to finish off the chip monopoly; then they could sell something that slow.
Simba: Benchmarks shouldn’t be something to write home about. Benchmarks are often biased towards certain criteria, so even when a benchmark says “your computer got a zillion points,” it doesn’t mean all that much.
The Itanium is quite marvelous, and goes quite fast, but the compilers (which 99.99% of the world relies on; damn high-level languages) just aren’t any good at producing Itanium code. If you could get an assembly coder to write an app specifically for the Itanium, and actually use what the CPU has to offer, I’m pretty damn sure it would blow away a lot of the competition.
So before shooting your mouth off like that, I suggest you learn a bit.
“The Itanium is quite marvelous, and goes quite fast, but the compilers (which 99.99% of the world relies on; damn high-level languages) just aren’t any good at producing Itanium code.”
It seems there are plenty of others posting here talking about the design problems of Itanium. It’s not a marvel. So I don’t think I am shooting my mouth off at all. Itanium is nothing spectacular.
And BTW, I knew someone was going to blame the software. But that is a fallacy. Intel produces a compiler optimized for Itanium. So yes, there is a compiler out there that produces decent Itanium code. So maybe you are the one who needs to learn a little bit? Sure the GCC compiler sucks at producing Itanium code. But the Intel compiler produces pretty decent Itanium code.
Well…
Microsoft, for one, seems to have put some faith in this processor (or at least the forthcoming McKinley).
The 64-bit edition of SQL Server, as well as the 64-bit versions of their OSes (XP, .NET), which are soon to be completed, represent a huge investment in time and resources for MS, and I don’t think they would have bothered if they had any suspicion that IA64 was going to be anything less than successful.
Maybe IA64 workstations and servers will sell just fine once MS gets all of its high-end products finished for the platform (and McKinley is available), and this is what the market is waiting for. Only late beta versions of this stuff are available right now, and that is what Dell has been installing on its IA64 machines.
This is a whole different topic, and it might be a stupid question…
If MS was really interested in 64-bit computing, how come they never ported to Sparc? After all, MS is a software vendor. It would seem they would be most interested in getting their software running on as many platforms as possible. They couldn’t care less what hardware it is running on, as long as it is running MS software.
I know it can’t have anything to do with ties to Intel, because there were versions of NT available for Alpha, and I think (I might be wrong about this one) that there was even a version of NT available for PowerPC for a brief period of time. Microsoft abandoned both of those platforms, though.
Given that Sparc is the most popular architecture in enterprise server computing (I think they still have over 70% of the market share for high-end servers), why didn’t Microsoft ever port NT to Sparc?
P.S. It also looks like Microsoft is betting completely on IA64 and is ignoring AMD Sledgehammer altogether. What does that tell us?
” P.S. It also looks like Microsoft is betting completely on IA64 and is ignoring AMD Sledgehammer altogether. What does that tell us? ”
Not a damn thing. Sledgehammer is backwards compatible with current 32-bit operating systems and software. You will be able to install and run Win2000, 98, Linux, BSD, etc., and they will think they are on an Athlon system. Only if you want the 64-bit extras do you need to recode.
Microsoft and the OSS folks had to expend lots of porting karma because Itanium is totally new.
“P.S. It also looks like Microsoft is betting completely on IA64 and is ignoring AMD Sledgehammer altogether. What does that tell us?”
Probably not very much. After all, the relationship between Microsoft and Intel is a pretty rocky one. Apparently, Bill Gates himself called up Intel’s CEO to scream at him after they used Linux in their Itanium demo instead of Windows.
Intel is pretty good about cooperating with other OS vendors. For example, they want Linux to work on Itanium, and they have done a great deal of work on getting Linux to perform well on Itanium. This doesn’t sit very well with Gates and company.
http://www.anandtech.com/showdoc.html?i=1546
If you want to read a good breakdown of Sledgehammer.
Why would I want to go out and buy a Sledgehammer if I can only run 32-bit Windows on it, with the OS thinking it is on an Athlon? I still cannot run 64-bit Windows, can I?
Why not just buy an Athlon instead?
Is Sledgehammer significantly faster than Athlon in 32-bit?
What is the price/performance value comparison?
Well you can’t go out and buy a Sledgehammer because AMD has not released them yet.
The point is that if you buy a Sledgehammer, you can still run legacy apps. With Itanium, old x86 stuff runs really slowly. Do you think Microsoft will not release Sledgehammer versions of its stuff when there are versions of Linux available?
There is no way Microsoft will let Linux cream them running on Sledgehammer.
For price vs performance take a look at this page.
http://www.xbitlabs.com/news/story.html?id=1011297004
Itanium scores a SPEC of 650.
Sledgehammer scores a SPEC of 1350.
Sledgehammer outscores everything, including all of the 32-bit processors.
Intel could have just licensed MIPS and built something like this.
http://www.eetimes.com/story/OEG20010613S0073
A dual-core, 64-bit MIPS running at 1 GHz and consuming just 5 watts of power.
Think what Intel could have achieved if they had started with this processor and then used some of that Itanium money for improvements.
If any of you actually took the time to dig into the impressive (read: extremely huge) manuals provided for the Itanium chip, you would notice that a lot of thinking was put into the branch prediction and prefetch design.
Branch prediction has been shown many times over the years to have a great impact on performance, and by having specific instructions for indicating which data to prefetch, they made it easier for the CPU to know what is really important to load next.
Optimizing a while() or a for() loop for branch predictability is easy, but when it comes to if() and other such constructs, the prediction is often totally off. There are basically two ways to take advantage of this: 1.) run the binary once, record which branches are most often taken, and recompile; 2.) ask the programmer to indicate which branch is most likely to be taken.
I doubt that the C/C++ compiler Intel made for the Itanium uses either of those techniques. ANSI C doesn’t allow for any branch prediction tag to be inserted. Like Raptor-32 pointed out, these specific features can really only be taken advantage of (at the moment) by an assembly programmer writing code specifically for the chip.
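A footnote on that: ANSI C indeed has no branch tags, but GCC provides a non-standard extension, __builtin_expect, that is technique 2.) in spirit. A minimal sketch (the likely/unlikely macro names are just a local convention, not part of any standard):

    /* Tell the compiler which way a branch usually goes; it can then
       lay out the fall-through path, or emit IA-64 branch hints,
       for the common case. */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int sum_positive(const int *buf, int n)
    {
        if (unlikely(buf == 0))      /* error path: almost never taken */
            return -1;

        int acc = 0;
        for (int i = 0; i < n; i++) {
            if (likely(buf[i] >= 0)) /* common case first */
                acc += buf[i];
            else
                acc -= buf[i];
        }
        return acc;
    }

And technique 1.) is what is usually called profile-guided optimization; run-mark-recompile compilers of exactly that flavor do exist, so the situation is not quite as bleak as it looks.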
Porting an operating system like Windows NT to it can’t be done at such a low level, so what they do is simply write a new compatible ABI layer in the compiler and fire it off. That doesn’t always turn out so well.
Itanium provides a clean instruction set, one that offers a lot of possibilities and is easy to extend. They did not invent much in their new architecture, but they certainly haven’t turned down what others came up with.
I’m a die-hard AMD lover, but I salute Intel’s effort to bring in something cleaner than a 20-year-old x86 architecture.
May a div not take 71 cycles, Amen.
From the looks of your data, McKinley is dead on arrival unless it can somehow narrow the clock-speed gap (2.4 GHz difference in your chart) between itself and the AMD “ClawHammer”.
If AMD can maintain such a huge SPECint advantage as it stands in your data, then IA64 is hopelessly slow. Even if it offers a larger memory-space for apps like 64-bit SQL Server and 64-bit Exchange, I don’t see how it can possibly compete with performance numbers like that from a processor that is probably far less expensive.
Of course, part of the scenario is which of these chips can actually be delivered first, who gets a head start in the market, and by how much of a lead…
Well, speaking of prices, I heard a rumor (for what it’s worth) that AMD people were saying Sledgehammer would be quite cheap. They were hinting at around $100 a processor, because the die area is quite small, so yields should be really high. AMD is going to be making Hammers by the boatload. There will be desktop versions of the Hammer and laptop versions of the Hammer. I would like to see the comedy of Intel putting Itanium in a laptop (Itanium consumes about 130 watts of power right now). If AMD releases Sledgehammer at, say, $250 retail, then Itanium is DOA, with its several-thousand-dollar price, lousy performance, and almost zero backwards compatibility.
I don’t know about IA-64, but as far as I know, Alpha Processor Inc. is very bad at marketing. I have contacted them several times regarding the processor, but they have not responded even once.
For those of you who think the 64-bit versions of MS software won’t run on AMD’s Hammer chips, don’t worry they will. Just because there are no public announcements does not mean MS and AMD aren’t working together to make sure their software will run on the Hammer line.
> I’m asking you gurus if this is possible (in terms of
> hardware): Is it possible to have a dual-processor system
> with two different chips (such as the x86 and McKinley)?
> Granted, you’d have to have two separate controller
> chips, buses, etc., not to mention an OS that takes
> advantage of this, kicking into 64-bit mode when
> applications written to take advantage of it call for it.
> This way, the x86 could eventually be phased out in a few
> years.
The Amiga has been doing this for years. Boards were developed that fitted into the CPU socket and had both a PPC and a 680x0 chip on them; they also had local memory for the PPC chip to use. Very cool.
> P.S. It also looks like Microsoft is betting completely
> on IA64 and is ignoring AMD Sledgehammer altogether.
The Hammer is to current 32-bit x86 processors a bit like what those were to their 16-bit ancestors.
If Intel can’t get any grip on the market with IA64 processors, Bill has to decide whether to wait for x86-64 to get big before announcing that Windows 64 will, once again, be scrapped and done over.
Remember “Cairo”? That was supposed to include 64-bit Windows on Alpha. Instead, customers who bought NT on Alpha watched their competitors using 64-bit Unix while they struggled to crush working sets down to 2 GB or less.
In 1998, Bill promised that this disappointment would soon be over. He told very influential people that despite previous delays, Merced and Win64 were now both on target for delivery in 1999, shortly after NT 5.0, and that Microsoft was confident that IL32P64 was a sensible choice.
Apple is now closer to a usable 64-bit OS than Microsoft, despite six years of hard work and broken promises. Maybe Asheron’s Call and Hotmail are great, I wouldn’t know, but I can tell you that on big iron MS is a big joke.
It doesn’t matter how spectacular a design is (and I agree that Itanic is spectacular): if it doesn’t bring something useful to the market, it will not be successful.
“It is dark inside the box.”
Users don’t care what’s in the box; it’s the user experience that counts. The Itanic doesn’t present a very good user experience, especially for workstations. Itanic x86 emulation is painfully slow, and its performance on integer code is mediocre. It can keep up with fast x86 processors on floating point, but I don’t think the Dell workstation market really targets scientific users.
If you want some good opinions about Itanic, try reading comp.arch; the guys in there have real knowledge. It comes down to this:
a) the basic premise of Itanic is flawed: out-of-order execution is feasible after all.
b) the Itanic was designed by committee.
c) the Itanic was driven by political pressure, not technological pressure.
d) there are features on the critical path (predication, register rotation) that may limit the speed of execution.
e) it is totally dependent on compiler technology, and that isn’t easy.
Ever taken a look at a Unisys ES7000 box running Windows 2000 Datacenter?
It runs SAP faster than anything else out there (even Sun and IBM boxes with 50% more CPUs), according to SAP’s own load-simulation benchmarks (which are highly regarded), and for a hell of a lot less money.
Unisys says that ES7000s are selling satisfactorily, but I don’t have any real numbers.
Guess I shouldn’t be so lazy. Here’s a link to the ES7000 info.
http://www.unisys.com/hw/servers/es7000/doc/unisys-sap.pdf
“Historically, Intel has this remarkable ability to charge a factor of eight for a performance boost of two in microprocessors.”
They are really bad and only after the money. The performance of a PII is almost identical to a PIII, and to a P4; the L1 and L2 cache are all that seem to matter!! Let’s not kid ourselves with crappy comparisons and charts!!!