Linked by Thom Holwerda on Wed 31st Oct 2007 14:14 UTC, submitted by Dorka
Intel announced today its line of Itanium products for high-end computing servers. Codenamed Montvale, the chip is an update to Montecito, the dual-core Itanium 2 chip which was launched in July last year, Eddie Toh, regional platform marketing manager of the Server Platforms Group for Asia-Pacific at Intel, told ZDNet Asia in an interview on Monday.
I can't believe it...
by kloty on Wed 31st Oct 2007 22:34 UTC
kloty
Member since:
2005-07-07

The latest, greatest Itanium is manufactured in 90nm technology?! Xeons will be manufactured at 45nm pretty soon! There is no frequency upgrade except for the front-side bus, and the next version will appear sometime next year. Sorry to say, but this is so lame; just compare it to the developments on the POWER front. Shame on you, Intel: so many great processor architectures have been buried because of all the promises Intel and HP made with Itanium, and now we see what came out.

Reply Score: 3

RE: I can't believe it...
by Downix on Wed 31st Oct 2007 23:46 UTC in reply to "I can't believe it..."
Downix Member since:
2007-08-21

You only mention POWER when SPARC is making leaps... 8)

Reply Score: 1

RE: I can't believe it...
by kaiwai on Thu 1st Nov 2007 01:07 UTC in reply to "I can't believe it..."
kaiwai Member since:
2005-07-06

What I find funny is the fact that they would be better off fixing x86 than continuing to flog a dead horse; there are features at the high end where they would be better off going the full monty and incorporating them into their mainstream processors - MMIO, for example.

Reply Score: 2

RE[2]: I can't believe it...
by nick on Thu 1st Nov 2007 02:54 UTC in reply to "RE: I can't believe it..."
nick Member since:
2006-04-17

MMIO? What's that? And how should it be introduced or fixed in x86?

And what I find funny is how many people know precisely what Intel is doing wrong in terms of their strategic and economic decisions. Even when carrying the disadvantage of having just a smidgen less information than the decision makers at Intel.

Reply Score: 1

RE[3]: I can't believe it...
by kaiwai on Thu 1st Nov 2007 03:39 UTC in reply to "RE[2]: I can't believe it..."
kaiwai Member since:
2005-07-06

MMIO? What's that? And how should it be introduced or fixed in x86?


The information is on Wikipedia. As for why? Look at the latest conversations regarding SCSI/OpenSolaris and how the lack of MMIO (when compared to SPARC) makes driver writing that little bit more difficult. It would also improve performance, especially on very large configurations.

Intel will be introducing it partially when they release their next x86 platform, which will have all the components (chipset/processor/etc.) virtualisation-aware; that should address any performance issues as well.

It would also be great if the PC market got its act together and finally killed off the BIOS; UEFI is here, let's move on. Since moving to Apple, thanks to dropping all the legacy crap, the OS loads faster and there isn't the laundry list of issues I used to face. Part of the Mac's success is in the hardware: OpenFirmware avoided the crap of the BIOS, and UEFI does the same thing.

Reply Score: 2

RE[4]: I can't believe it...
by rayiner on Thu 1st Nov 2007 04:07 UTC in reply to "RE[3]: I can't believe it..."
rayiner Member since:
2005-07-06

What in god's name are you talking about? x86 systems have supported memory-mapped I/O since forever.

Edited 2007-11-01 04:07

Reply Score: 2

RE[4]: I can't believe it...
by nick on Thu 1st Nov 2007 04:41 UTC in reply to "RE[3]: I can't believe it..."
nick Member since:
2006-04-17

The information is on Wikipedia. As for why?

MMIO? As in memory mapped IO? Intel x86 CPUs have had this capability for a long time. Seeing as all their memory traffic goes through a discrete northbridge chip anyway, the main thing for the actual CPU to provide is really the memory access policies to make it useful.

It helps if you actually know what you're talking about, when you're making assertions.


look at the latest conversations regarding SCSI/OpenSolaris and how the lack of MMIO (when compared to SPARC) makes driver writing that little bit more difficult.

I have a feeling you read some thread where people were talking about IOMMUs. Completely different, but again, due to the nature of Intel's CPUs, IOMMUs are more a function of the platform. And that's true of ia64 too; Itanium CPUs don't have IOMMUs either. IBM xSeries x86 systems do have IOMMUs, while on the other hand SGI's ia64 Altix systems don't.

It would also improve performance, especially on very large configurations.

Possible, but not always the case. If devices are capable, there is no big win for an IOMMU to provide, since the days of DAC over slow old 32-bit PCI are over. (Actually, using one could reduce performance due to translation-management overhead.) Memory protection is the main reason for the renewed interest recently.

Reply Score: 2

RE: I can't believe it...
by acobar on Thu 1st Nov 2007 02:48 UTC in reply to "I can't believe it..."
acobar Member since:
2005-11-15

They probably would like to kill the whole thing ASAP, but probably don't do it because it:
- would give them a lot of headaches because of contracts;
- would damage their public image with tech partners, tech media and customers (the big ones that really spend money).

After all, Intel did pledge its faith; they must honor their word, even though they will give people more and more incentives to move away.

Reply Score: 2

RE: I can't believe it...
by javiercero1 on Thu 1st Nov 2007 09:36 UTC in reply to "I can't believe it..."
javiercero1 Member since:
2005-11-10

"The latest, greatest Itanium is manufactured in 90nm technology?! Xeons will be manufactured at 45nm pretty soon!"

The problem is that for processes under 90nm there are still a lot of unknowns regarding electromigration, which means that sub-90nm processors have a fairly compromised lifetime. In the non-mission-critical Xeon marketplace, which expects replacement in less than 18 months, that is not an issue. On top of that, the cache design for the Itanium is fairly hand-tuned and not so easily portable to other processes, and the gains of shrinking would be offset by the reduced performance of the resulting cache at 65nm. Even at 1.5GHz, an Itanium 2 still manages top FP scores. Not too shabby.


In the mission-critical segment that IA64 and some other manufacturers target, speed is not as important as being up for eons of time and having parts not fail through years of 24/7 operation. That is why a lot of IBM mainframes are not using POWER6 but rather some "unsexy" 130nm processors: because even at 90nm, electromigration may be considered too risky.

There is a method behind the madness...

Edited 2007-11-01 09:39

Reply Score: 4

RE[2]: I can't believe it...
by foobar on Thu 1st Nov 2007 22:46 UTC in reply to "RE: I can't believe it..."
foobar Member since:
2006-02-07

"The problem is that for processes under 90nm there are still a lot of unknowns regarding electromigration, which means that sub-90nm processors have a fairly compromised lifetime. In the non-mission-critical Xeon marketplace, which expects replacement in less than 18 months, that is not an issue. On top of that, the cache design for the Itanium is fairly hand-tuned and not so easily portable to other processes, and the gains of shrinking would be offset by the reduced performance of the resulting cache at 65nm. Even at 1.5GHz, an Itanium 2 still manages top FP scores. Not too shabby.

In the mission-critical segment that IA64 and some other manufacturers target, speed is not as important as being up for eons of time and having parts not fail through years of 24/7 operation. That is why a lot of IBM mainframes are not using POWER6 but rather some "unsexy" 130nm processors: because even at 90nm, electromigration may be considered too risky.

There is a method behind the madness..."



Yes, enterprise machines are more conservative, but you are full of crap wrt IBM:

power 6 - 65 nm
http://www.research.ibm.com/journal/rd/516/le.html

z6 - 65 nm
http://www2.hursley.ibm.com/decimal/IBM-z6-mainframe-microprocessor...

The previous power and mainframe processors were 90 nm.

Reply Score: 1

RE[3]: I can't believe it...
by javiercero1 on Fri 2nd Nov 2007 02:50 UTC in reply to "RE[2]: I can't believe it..."
javiercero1 Member since:
2005-11-10

"Yes, enterprise machines are more conservative, but you are full of crap wrt IBM:"

Before you use such language, I recommend you understand what you posted.

The current offering from IBM, the z9, is on a 90nm process, just like Itanium, for the reasons I cited. The z6 will come out at 65nm just as the new 65nm IA64 parts roll out in a year or two. The Z series are the types of systems targeted by the Superdome and Integrity series from HP. The Z series don't use POWER6, but the z9/z6, which have some commonalities but are different enough to be their own beasts.

The consumer parts go through more aggressive process shrinking than the carrier-grade stuff, which is usually one or two process geometries behind - for some of the reasons I cited before on why the IA64 and Z-series stuff is fabbed at a less "sexy" 90nm.

http://www.research.ibm.com/journal/rd/511/poindexter.html

So rather than saying I am full of crap, just bother to read the links you referenced. In any case, I get a chuckle out of all this nm stuff when most people in this forum don't even understand the basic operation of a transistor :-)

Reply Score: 1

RE[4]: I can't believe it...
by foobar on Sat 3rd Nov 2007 00:38 UTC in reply to "RE[3]: I can't believe it..."
foobar Member since:
2006-02-07

"Before you use such language, I recommend you understand what you posted.

The current offering from IBM, the z9, is on a 90nm process, just like Itanium, for the reasons I cited. The z6 will come out at 65nm just as the new 65nm IA64 parts roll out in a year or two. The Z series are the types of systems targeted by the Superdome and Integrity series from HP. The Z series don't use POWER6, but the z9/z6, which have some commonalities but are different enough to be their own beasts.

The consumer parts go through more aggressive process shrinking than the carrier-grade stuff, which is usually one or two process geometries behind - for some of the reasons I cited before on why the IA64 and Z-series stuff is fabbed at a less "sexy" 90nm.

http://www.research.ibm.com/journal/rd/511/poindexter.html

So rather than saying I am full of crap, just bother to read the links you referenced. In any case, I get a chuckle out of all this nm stuff when most people in this forum don't even understand the basic operation of a transistor :-)"




I read what I posted. In fact, I attended the z6 presentation.

Let's start from scratch. Here is what you originally posted:




"That is why a lot of IBM mainframes are not using POWER6 but rather some "unsexy" 130nm processors: because even at 90nm, electromigration may be considered too risky."




I never disagreed with your argument. I took issue with the numbers you tried to use to support it: they are just wrong. The z9 mainframes that IBM is selling today, with their bluefire processors, are 90nm. They are not "unsexy" 130nm. It's nice that you corrected yourself in your reply ;)

Reply Score: 2

a view details
by smashIt on Thu 1st Nov 2007 03:05 UTC
smashIt
Member since:
2005-07-06

Just wanted to note that Intel is developing Itanium only as long as HP pays for it. That the new Itaniums are still 90nm just means that HP didn't want to pay the extra price.

If you want to point fingers at someone, do it towards HP. Intel just invented a new architecture (which is still a good thing, even if it didn't go the way Intel hoped), but HP dropped PA-RISC and Alpha.

Edited 2007-11-01 03:10

Reply Score: 1

RE: a view details
by rayiner on Thu 1st Nov 2007 03:25 UTC in reply to "a view details"
rayiner Member since:
2005-07-06

(which is still a good thing, even if it didn't go the way intel hoped)

What's good about inventing a crappy new architecture?

Reply Score: 2

RE: a view details
by kaiwai on Thu 1st Nov 2007 03:54 UTC in reply to "a view details"
kaiwai Member since:
2005-07-06

Intel never invented the architecture; it was HP who did. Intel wanted a high-end, high-margin chip - HP had it. HP no longer wanted to be in the chip business, so they sold off whatever assets they had to Intel.

Intel is still developing it, but basically it's a dead end for Intel. They have realised that although x86 is ugly, it's going to be the architecture that never died. BTW, this isn't the first time a superior architecture has gone up against x86 and failed.

Reply Score: 3

RE[2]: a view details
by rayiner on Thu 1st Nov 2007 04:09 UTC in reply to "RE: a view details"
rayiner Member since:
2005-07-06

Alpha was a superior architecture that went up against x86 and failed. IA64... just failed...

Edited 2007-11-01 04:10

Reply Score: 2

RE[3]: a view details
by javiercero1 on Thu 1st Nov 2007 09:30 UTC in reply to "RE[2]: a view details"
javiercero1 Member since:
2005-11-10

Alpha superior in what aspect?

AXP was a very compromised architecture, which a lot of people seem to take as "elegant." By the time the 21364 came along, it was clear that it would take a significant investment to make it competitive in the post-GHz era. The 21464 was a complete bloated pig that was almost impossible to fab. I love alternative architectures; however, one must be realistic...

It is usually those who know the least about the subject who get to judge what "elegance" is. To this day I still get a kick out of the typical fanboy complaining about the "ugliness" of x86 and how PPC is the shit because it is "RISC!" - never mind that those people can barely write a half-assed program in C, much less code anything in assembler. And don't get me started on the idiots who couldn't pass an intro class in computer architecture but get to weigh in on the latest design from a top architecture bureau.

Itanium is geared towards a segment of computing about which most people on this website have little to no knowledge. IA64 has been fantastically successful for HP; the Superdomes et al. are making a shitload of money for HP. In that context, Itanium is doing very well. Not only that, but even at 1.5GHz it achieves some impressive performance numbers.

There is more to computer architecture than being able to assemble a computer from whatever parts you just bought at Fry's.

Reply Score: 5

RE[4]: a view details
by Javier O. Augusto on Thu 1st Nov 2007 12:25 UTC in reply to "RE[3]: a view details"
Javier O. Augusto Member since:
2005-08-10

[i]Itanium is geared towards a segment of computing about which most people on this website have little to no knowledge. IA64 has been fantastically successful for HP; the Superdomes et al. are making a shitload of money for HP. In that context, Itanium is doing very well. Not only that, but even at 1.5GHz it achieves some impressive performance numbers.[/i]

I BACK YOU UP!

Reply Score: 1

RE[4]: a view details
by rayiner on Thu 1st Nov 2007 14:10 UTC in reply to "RE[3]: a view details"
rayiner Member since:
2005-07-06

AXP was a very compromised architecture, which a lot of people seem to take as "elegant." By the time the 21364 came along, it was clear that it would take a significant investment to make it competitive in the post-GHz era.

We're talking about instruction sets here, not the micro-architectures of particular CPUs. The IA64 ISA itself is just a crappy design. VLIW is a dumb idea for a general-purpose processor, and IA64 is an overly-complex and obscure VLIW at that. Maybe these things weren't obvious when IA64 was designed, but they're painfully obvious now.

Itanium is geared towards a segment of computing about which most people on this website have little to no knowledge. IA64 has been fantastically successful for HP; the Superdomes et al. are making a shitload of money for HP. In that context, Itanium is doing very well.

Itanium is a "success" by very limited and compromised criteria. The claims of Itanium "profitability" ignore the huge initial investment into the architecture. Itanium is making an operating profit for HP and Intel, but over the long term history of the product, it has lost money. The former point means it makes sense for HP and Intel to string Itanium along until x86 eats its lunch somewhere down the road, while the latter point means that if Intel and HP could go back in time and never do the whole EPIC thing, they would.

Business considerations aside, IA64 has failed as a piece of technology. Itanium is a reasonable chip for certain applications, but by and large its strengths have f*ck-all to do with IA64 itself. Pretty much the only codes that show EPIC in a good light are certain HPC codes, and while hindsight rationalizers will say otherwise, Intel and HP sure as hell didn't invest billions of dollars into a whole new architecture over a decade and a half to develop an ISA targeting such a specific niche!

Fundamentally, IA64 bet on certain things that just didn't pan out. Specifically, mainstream software ecosystems aren't conducive to VLIW designs, the compiler technology to make them effective isn't feasible, and general-purpose software is moving away from the types of contexts in which VLIW makes sense. Things like this happen all the time, really --- technology fails because it makes assumptions about other technology that don't pan out. The only reason IA64 won't die quietly is because Intel and HP really put a lot of money into it, and they want to milk it and at least get some of it back. For the rest of the market, the only thing IA64 really accomplishes is keeping a bunch of compiler research dollars tied up in unproductive endeavors.

Edited 2007-11-01 14:14

Reply Score: 2

RE[5]: a view details
by javiercero1 on Thu 1st Nov 2007 15:35 UTC in reply to "RE[4]: a view details"
javiercero1 Member since:
2005-11-10

The problem with your analysis is that the IA64 ISA was never intended to be visible to the programmer; ironically, neither were most RISC architectures. This whole nonsense of evaluating ISAs - for which there is no quantitative method, BTW - gets tiresome, really. It is like arguing over which language is "better", French or English.

There is nothing inherent to VLIW, RISC, or CISC, for that matter, that makes them more or less suited to general-purpose computing. An ISA is merely a programming interface to a microarchitecture at the end of the day, and the burden of its generality lies in the programmability of that microarchitecture.

An Itanium 2 is no better or worse a general architecture than an Opteron. The compiler/VLIW-pipeline combination seems to perform fairly decently as far as the Itanium 2 is concerned, and the equivalent compiler/out-of-order pipeline seems to do the same for the Opteron machines, as far as general "purposedness" is concerned. They both do the same thing: get input... process input... dump output. It all depends upon the metrics you are considering. Price/power? Probably the Opteron wins (I am just picking an example). Reliability/throughput? Maybe the Itanium 2 has the edge.

I don't want to put words in your mouth, but it seems that you are mixing up scalability with general purpose. As far as scalability goes, it's true the IA64 is nowhere near the scalability factor of x86, for example, which can move from a laptop all the way to low-to-mid-range enterprise systems, whereas the IA64 was always intended to stay at the high end of things. I assume that was Intel's idea. Things like predication on the Itanium mean that it will never be a power-efficient architecture, and its reliance on cache means that it will always be a pig and a silicon whore, never suitable for resource-constrained systems.

Is that a good or a bad decision? I don't know. But make no mistake: most architectures succeed because their manufacturer puts wads of cash behind them. It took a loooong time for IBM to recoup the investment it made in POWER, and for the most part the only reason we have seen a POWER6 is because IBM decided to put wads of money into it. Further, the main reason we may see a POWER7 is because DARPA is paying for its development.


Architectures like Alpha, MIPS, etc. have died off in the middle/high end of things because no single architecture can maintain itself there. In fact, the only reason x86 is where it is today is because Intel poured wads of cash into the Pentium and Pentium Pro, which allowed x86 to endure the onslaught of the RISC machines and out-of-order architectures that appeared through the 90s. Most people assumed x86 to be dead in the water in 1990...

Thus, saying the only reason IA64 won't die is because Intel and HP are pouring wads of money into it is a bit disingenuous, IMHO, because that is the same reason other architectures like SPARC, POWER, and x86 won't die.

If you look at HP's standpoint on IA64, it is a win-win for them, as they got to drop the PA-RISC and AXP architectures, which they had no way of continuing to develop on their own. And they make money off their systems and services, not on the processors themselves, as they stopped being a microelectronics vendor a long time ago. As far as Intel is concerned, the IA64 was a hefty investment, but it managed to pretty much kill off a big chunk of the competition at the high end. And it means that sales are going to either IA64 or x86-64, not the old 64-bit competitors that were around a while back. So in the end it means $$$$ to Intel; whether it comes from IA64 or x86 is not that important for the final balance sheet.

I don't particularly agree with the approach, and I am fairly attached to the AXP architecture as I did most of my graduate research on an AXP-like pipeline. However, one has to give the devil its due.

As far as Intel is concerned, they get to recoup a lot of their investment. A big deal of the tracing, value-prediction and static-analysis technology from the IA64 compilers is making its way into their x86 compilers. Their performance-measurement unit is also being moved over to the x86 pipeline. And a lot of the RAS research will go into their general products. Intel is a very cheap corporation, and they always find a way of recouping their investments, either in the short or the long run (which is fairly uncharacteristic for an American corporation; they are notoriously short-sighted).

In any case, we may end up seeing IA64 become part of the x86 ISA spec at some point, once it becomes not cost-effective for Intel to keep two separate lines. At the end of the day, if they spent $1 billion to kill off and carve out $10 billion worth from competitors, it seems quite a small investment in the scheme of things. And that is the light in which Itanium is being considered inside Intel...

Edited 2007-11-01 15:42

Reply Score: 3

RE[6]: a view details
by rayiner on Thu 1st Nov 2007 20:38 UTC in reply to "RE[5]: a view details"
rayiner Member since:
2005-07-06

The problem with your analysis is that the IA64 ISA was never intended to be visible to the programmer; ironically, neither were most RISC architectures.

VLIW isn't just an ISA design; it's an implementation strategy. The whole point of VLIW is to move complexity from the implementation to the compiler by creating a suitable ISA that exposes implementation details to the software. It is not just an abstract interface for programming the micro-architecture - such a statement goes against the very idea of VLIW. The purpose of EPIC specifically goes further: it uses VLIW principles to allow an implementation that can take advantage of large amounts of static ILP (in an in-order design, no less!). None of IA64 makes any sense without keeping that design goal in mind.

There is nothing inherent to VLIW, RISC, or CISC for that matter that makes them more or less suited for general purpose computing.

The idea of depending on the compiler to discover ILP is what makes VLIW unsuitable for general-purpose computing. The compiler technology isn't there, and even if it were, nobody wants to recompile their code every year anyway!

The compiler/VLIW pipeline combination seems to perform fairly decently as far as the Itanium2 is concerned

No, it doesn't. I2 performs "fairly decently" only on heavily optimized code run through heroic compilers. That's where my point about the software ecosystem comes in. When you're targeting the "general purpose" market (high volumes, wide distribution), a couple of heroic C and FORTRAN compilers plus the necessity of recompiling your software for each new iteration of the architecture doesn't cut it.

A major lesson of the success of x86 processors in the last decade is that a good architecture is one that's easy to generate code for, and one that runs existing code adequately (e.g., the Pentium Pro's poor performance on 16-bit code drastically curtailed its success in the mainstream). IA64 falls down very badly on this criterion.

Whereas the IA64 was always intended to stay at the high end of things.

This is a retroactive rationalization. IA64 was intended to eventually replace x86; it doesn't make sense in any other context. You don't go to all the trouble of creating a fairly radical new architecture - one that you know is going to require a huge long-term investment in developing new software technologies especially for it - without expecting it to be very broadly applicable. IA64 was created because Intel thought that EPIC and VLIW would allow them to make better processors across its range of markets. It did not succeed in that regard.

Now, the Itanium series of processors was very likely always intended for the high range of things, but IA64 as an ISA wasn't. However, as it has become obvious that IA64's idea of magic compilers hasn't panned out, people have realized that the sole redeeming qualities of the platform lie in high-end features of the Itanium implementation that have nothing to do with the ISA. As such, IA64 is de facto relegated to the high end - but not by choice!

Thus saying things like the only reason why IA64 won't die is because Intel and HP are pouring wads of money into it, is a bit disingenuous

That's not what I said. I said that IA64 won't die because Intel has _already_ poured a wad of money into it, and is now looking to recoup whatever it can. In contrast, HP let Alpha die, because it had no such motivating drive.

As far as Intel is concerned, they get to recoup a lot of their investment. A big deal of the tracing, value prediction and static analysis technology from the IA64 compilers are making their way into their x86 compilers.

No, they're not. The IA64 compiler technology is virtually useless to everyone else. It can get you a few percent here and there, but the complexity just isn't worth it. It's particularly stupid because nobody wants to do static FORTRAN compilers anymore anyway. The future, in the general-purpose market, is in JITs that have to do codegen in 100ms on the fly, and all of the stuff developed for IA64 is just too damn expensive for that.

At the end of the day, if they spent $1 billion to kill off and carve $10 billion worth from competitors

First, Intel's investment in Itanium was more like $10 billion, and second, that's another retroactive rationalization. If the real goal was to kill off a bunch of RISC competitors, Intel could have achieved that at much lower cost and _much_ lower risk by creating a traditional RISC architecture. VLIW, EPIC - none of that stuff was necessary from a strictly marketing standpoint.

Reply Score: 3

RE[7]: a view details
by javiercero1 on Thu 1st Nov 2007 21:46 UTC in reply to "RE[6]: a view details"
javiercero1 Member since:
2005-11-10

You have a lot of preconceived notions from a third-party, overly superficial standpoint; so be it.

However, EPIC, for example, was an HP product, not an Intel one. And for the most part, Intel never intended to replace its cash cow, x86, with IA64, as EPIC-based products were intended from the get-go for the high-end and enterprise side of things - two fields in which Intel at the time had little to no presence. You make a set of assumptions and requirements that are of your own bias, and thus move the goalposts and declare success or failure accordingly. I was at Intel until recently, and that was not the light in which Itanium was cast (and the overall costs are much less than the $10 billion, as stated by the shareholder info I got :-) ).

Second, EPIC, in theory at least, was not supposed to require recompilation of the whole code. Rather, programs would rely heavily on shared code, in the form of libraries, which could then be recompiled whenever advances in the compiler were available. This was to offer a way to decouple silicon and compiler advances. When EPIC was conceived, the silicon turnaround cycle was nowhere near the speed it is right now, so a processor was expected to have a lifetime of a few years. Allowing speedups to come, decoupled from the silicon, during those few years seemed like a sensible approach. At the time, at least...

The main problem for the Itanium folks is not that it is a bad processor, but rather that the x86 folks have been fairly successful at not only keeping up with but also commanding the performance curve. And that, as far as Intel is concerned, is their main goal, as x86 is their cash cow.

Itanium for the most part has been subsidized, either by HP or by Intel's internal interests. The same can be said for any other non-mainstream architecture. POWER is only alive because IBM made a hefty investment in it, and even though a POWER chip is a money loser for IBM, they make it up in the long run with the services, which are their bread and butter. Same goes for SPARC: at the least, SPARC64 is a money loser for the microelectronics side of Fujitsu; it is the services attached to the PRIMEPOWER side of things that make them money. The old adage of "you've got to spend money to make money" is also true in the microelectronics world.

Alpha died because it ended up being a dead weight around the neck of Digital first and Compaq later. The size of the investment required to design and implement the AXPs was too much for Compaq to recoup with the services stream provided by the AlphaServers and AlphaStations. Same goes for PA-RISC, which was at some point a significant drain on resources and money for HP. Heck, the only reason MIPS lasted on the non-embedded side of things was because SGI poured shitloads of cash into a dying platform - which in the end pretty much did the whole company in.

IBM and Intel/HP, on the other hand, are large enough that they can absorb the cost of their POWER and IA64 investments, because that gives them access to a market that other vendors are being locked out of, mostly due to the elevated entry fee required.

One has to understand the context under which Intel and HP see Itanium to better gauge its implications.

Reply Score: 1