Linked by Thom Holwerda on Mon 5th Apr 2010 18:29 UTC, submitted by poundsmack
Ah, Intel's IA-64 architecture. More commonly known as Itanium, it can probably be considered a market failure by now. Intel consistently failed to deliver promised updates, and clock speeds have lagged behind. Regular x86-64 processors have already overtaken Itanium, and now Microsoft has announced that Windows Server 2008 R2 will be the last version of Windows to support the architecture.
Sad
by twitterfire on Mon 5th Apr 2010 18:47 UTC
twitterfire
Member since:
2008-09-11

It's sad to see IA64 dying. It was intended as a replacement for the old x86, but due to stubbornness and inertia on the part of software developers and users, it never made it. It's a shame, since it's a newer, smarter technology that doesn't have x86's restrictions and doesn't sacrifice architectural design to support legacy features.

Edited 2010-04-05 18:57 UTC

Reply Score: 1

RE: Sad
by jgagnon on Mon 5th Apr 2010 18:59 UTC in reply to "Sad"
jgagnon Member since:
2008-06-24

It would have been a much easier sell if Intel had been able to show consistent performance advantages over x86/x64, but it couldn't in the face of competition. The fact is that the newer x86/x64 chips can exceed Itanium in many (most?) meaningful benchmarks.

So what is the advantage of Itanium over x86? That's exactly the question people keep asking and that Intel can't seem to answer.

The fictitious problem with x86 that Intel wanted everyone to believe in was the struggle to maintain performance while still supporting legacy instruction sets. Considering how easy and fast it is today to emulate the older instructions, that "problem" sure doesn't seem like a real problem anymore.

Reply Score: 4

RE[2]: Sad
by Zbigniew on Mon 5th Apr 2010 19:11 UTC in reply to "RE: Sad"
Zbigniew Member since:
2008-08-28

The fictitious problem with x86 that Intel wanted everyone to believe in was the struggle to maintain performance while still supporting legacy instruction sets. Considering how easy and fast it is today to emulate the older instructions, that "problem" sure doesn't seem like a real problem anymore.

1. If we really want to keep emulating those "older instructions", we'll be stuck with x86 for the next 50 years.

2. In that case, why emulate anything at all, if the decision is "stand by x86"? It would be better just to make the same chips over and over... of course with more GHz, more cores, and more cache in each new version.

Personally, I suspect that open source operating systems (Linux first among them) will slowly change the market; they are portable across different hardware. Just consider all the handheld devices...

Reply Score: 1

RE[3]: Sad
by jgagnon on Mon 5th Apr 2010 19:21 UTC in reply to "RE[2]: Sad"
jgagnon Member since:
2008-06-24

My point was that Intel intentionally tried to fool us by claiming it would be easier to ramp up Itanium's performance than x86's because of the burden of supporting the older instruction sets (backwards compatibility). They intended to support the older chips through emulation on Itanium.

Funny how, once AMD came up with x86 chips that were plenty fast compared to Itanium, Intel was suddenly able to ramp up the speeds of its own x86 chips at an accelerated rate. x86 has a lot of life left in it so long as competition stays in place.

Edited 2010-04-05 19:21 UTC

Reply Score: 1

RE[4]: Sad
by TemporalBeing on Tue 6th Apr 2010 17:12 UTC in reply to "RE[3]: Sad"
TemporalBeing Member since:
2007-08-22

Funny how, once AMD came up with x86 chips that were plenty fast compared to Itanium, Intel was suddenly able to ramp up the speeds of its own x86 chips at an accelerated rate. x86 has a lot of life left in it so long as competition stays in place.


To clarify -

Intel wanted to do Itanium and get away from having to share technology with AMD, Cyrix, etc. They wanted to control the market again. They designed IA64 to move to 64-bit computing, emulate the 16/32-bit x86, and move on to new things. It was (and is) a powerhouse of a chip. But it was always expensive, and never really targeted at the average user. Intel went from aiming at everyone to just the server market.

AMD, OTOH, decided to stick with the x86 instruction set and designed AMD64. AMD64 allowed them to overcome the issues in the prior x86 versions, though microcode probably helps at least as much. AMD64 simply extended x86 into 64-bit, creating 'long mode' (alongside protected and real modes). Essentially, AMD beat Intel at their own game, and eventually forced Intel to support the AMD64 instruction set themselves (originally scaled down as EM64T, then brought to par and extended as Intel 64).
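
Whether a given chip actually implements long mode is something software can query directly. A minimal sketch, assuming GCC or Clang on x86 and the __get_cpuid helper from <cpuid.h>, checking the LM flag (EDX bit 29 of CPUID extended leaf 0x80000001):

    /* Minimal sketch: ask CPUID whether the CPU can enter long mode at all. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Extended function 0x80000001 reports the AMD64/Intel 64 feature bits. */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (edx & (1u << 29)))
            puts("CPU supports long mode (x86-64)");
        else
            puts("32-bit only (protected/real mode)");
        return 0;
    }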

Interestingly, if Intel had decided to not do the IA64 thing, AMD probably wouldn't still be around. AMD64 helped AMD out a lot.

Reply Score: 2

RE[3]: Sad
by nt_jerkface on Mon 5th Apr 2010 19:26 UTC in reply to "RE[2]: Sad"
nt_jerkface Member since:
2009-08-26


1. If we really want to emulate that "older instructions", we'll be stuck with that x86 for next 50 years.


Yes but what if that emulation has no appreciable cost?

Reply Score: 6

RE[4]: Sad
by cb88 on Mon 5th Apr 2010 20:47 UTC in reply to "RE[3]: Sad"
cb88 Member since:
2009-04-23

It costs power (in performance per watt), dude. Just compare Atom to ARM, for instance: Atom can run decades-old Intel code, but what's the point if it just burns more watts than ARM? The tradeoff is that Intel has a huge instruction decode unit that generates nothing but heat, while ARM has a very simple, straightforward decode frontend.

Edited 2010-04-05 20:48 UTC

Reply Score: 3

RE[5]: Sad
by poundsmack on Mon 5th Apr 2010 20:59 UTC in reply to "RE[4]: Sad"
poundsmack Member since:
2005-07-13

Yeah, that's basically what I was getting at when I said "so is virtually everything else." The x86 instruction set has a lot of baggage - baggage that needs to be let go.

Reply Score: 2

RE[6]: Sad
by orestes on Mon 5th Apr 2010 21:46 UTC in reply to "RE[5]: Sad"
orestes Member since:
2005-07-06

Because reworking your entire corporate infrastructure around new technology that's "better", at significant cost, when you have a working "good enough" setup in place is a smart business decision?

Until other architectures offer some killer app or extreme cost savings, you're not going to see x86 going anywhere

Reply Score: 2

RE[7]: Sad
by poundsmack on Mon 5th Apr 2010 22:22 UTC in reply to "RE[6]: Sad"
poundsmack Member since:
2005-07-13

You are correct. If anything ends up replacing it, it will need either native hardware support for the x86 instruction set or one hell of a software emulator (on par with or better than Apple's Rosetta).
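
At its core, any such emulator is a fetch/decode/dispatch loop. A toy sketch follows, using an invented three-opcode "ISA" purely to show the shape of the loop - real translators like Rosetta recompile whole blocks of guest code rather than interpreting one instruction at a time:

    /* Toy interpreter: fetch an opcode, decode its operands, dispatch. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_LOADI = 0x01, OP_ADD = 0x02, OP_HALT = 0xFF };

    int main(void)
    {
        /* Guest program: loadi r0,40 ; loadi r1,2 ; add r0,r1 ; halt */
        const uint8_t code[] = { OP_LOADI, 0, 40, OP_LOADI, 1, 2,
                                 OP_ADD, 0, 1, OP_HALT };
        uint32_t reg[4] = { 0 };
        size_t pc = 0;

        for (;;) {
            uint8_t op = code[pc++];
            if (op == OP_LOADI)      { uint8_t r = code[pc++]; reg[r] = code[pc++]; }
            else if (op == OP_ADD)   { uint8_t d = code[pc++]; reg[d] += reg[code[pc++]]; }
            else                     break;   /* OP_HALT or anything unknown */
        }
        printf("r0 = %u\n", (unsigned)reg[0]);   /* prints r0 = 42 */
        return 0;
    }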

I have nothing against x86-64, it does its job well enough, but there will come a day in the near(ish) future that brings about its end (10-15 years, realistically). Then again, graphene could very well keep x86 alive forever... Personally, I am hoping for quantum computing, or for making Nvidia's Fermi into a CPU/GPU combo, though it would need some additional instruction sets to pull that off.

Reply Score: 2

RE[5]: Sad
by nt_jerkface on Mon 5th Apr 2010 21:16 UTC in reply to "RE[4]: Sad"
nt_jerkface Member since:
2009-08-26

Are we talking about servers or netbooks here?

When Intel has had Itanium undermined by its own x64 CPUs, you really have to question how much the x86 baggage actually matters.

When you can get an 80 watt quad core Xeon for $230 the Itanium becomes a very hard sell.

Reply Score: 2

RE[6]: Sad
by lucas_maximus on Mon 5th Apr 2010 22:45 UTC in reply to "RE[5]: Sad"
lucas_maximus Member since:
2009-08-18

The x86 baggage makes a hell of a lot of difference per MHz... an SGI Fuel machine with a 900 MHz MIPS processor (R16000 CPU?) had floating-point performance similar to a 3.4 GHz Pentium 4.

A chip with almost 4x the clock speed was being equalled by something running sub-GHz.

It does make a difference.

Reply Score: 0

RE[7]: Sad
by nt_jerkface on Tue 6th Apr 2010 00:41 UTC in reply to "RE[6]: Sad"
nt_jerkface Member since:
2009-08-26

No one disputes that it makes a difference, but that doesn't mean it always makes sense to use ARM from a performance/price perspective, especially when Intel and AMD are constantly improving the performance per watt of their x64 CPUs.

Look, I find Itanium interesting and I'm disappointed to see it in decline, but I also know that there is rarely a good business case for it. Red Hat dropped Itanium support last year, which shows that even Linux shops aren't interested in it. It's rare for a business to even need more than a couple of Xeons, so a 20% drop in power in those cases would be pocket change.

Reply Score: 2

RE[7]: Sad
by smitty on Tue 6th Apr 2010 02:08 UTC in reply to "RE[6]: Sad"
smitty Member since:
2005-10-13

The x86 baggage makes a hell of a lot of difference per MHz... an SGI Fuel machine with a 900 MHz MIPS processor (R16000 CPU?) had floating-point performance similar to a 3.4 GHz Pentium 4.

A chip with almost 4x the clock speed was being equalled by something running sub-GHz.

It does make a difference.


No it doesn't. Please stop spouting off about things you clearly don't understand.

A processor's design is all about tradeoffs between IPC and clock speed, and Intel decided to go with a design that focused on high clock speed with low IPC, while other chips have gone the other way. This has nothing to do with the instruction set; it's a valid tradeoff that you can choose to go either way on. PowerPC chips have reached really high clock speeds as well, and they aren't x86.
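
As a back-of-the-envelope illustration of that tradeoff: effective throughput is roughly IPC times clock, so a low-IPC "speed demon" and a high-IPC "brainiac" can land in about the same place. The numbers below are invented for illustration, not measurements of any real chip:

    #include <stdio.h>

    int main(void)
    {
        double speed_demon_ipc = 0.8, speed_demon_ghz = 3.4;  /* hypothetical */
        double brainiac_ipc    = 2.5, brainiac_ghz    = 0.9;  /* hypothetical */

        /* throughput ~ IPC x clock (billions of instructions per second) */
        printf("speed demon: %.2f billion instructions/s\n",
               speed_demon_ipc * speed_demon_ghz);
        printf("brainiac:    %.2f billion instructions/s\n",
               brainiac_ipc * brainiac_ghz);
        return 0;
    }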

Reply Score: 4

RE[7]: Sad
by tylerdurden on Tue 6th Apr 2010 03:19 UTC in reply to "RE[6]: Sad"
tylerdurden Member since:
2009-03-17

First off, those 900 MHz MIPS chips were rarer than hens' teeth and cost many times more than the P4.

Now, calculate the cost/performance ratio of the MIPS part vs. the P4 and weep.

Also, the MIPS part needed some insanely large caches in order to be competitive in SPEC, which drove the cost up even further.

Reply Score: 2

RE[5]: Sad
by kaiwai on Mon 5th Apr 2010 22:15 UTC in reply to "RE[4]: Sad"
kaiwai Member since:
2005-07-06

It costs power (in performance per watt), dude. Just compare Atom to ARM, for instance: Atom can run decades-old Intel code, but what's the point if it just burns more watts than ARM? The tradeoff is that Intel has a huge instruction decode unit that generates nothing but heat, while ARM has a very simple, straightforward decode frontend.


And thus I point you to this (a quote of a quote):

http://kawaii-gardiner.blogspot.com/2010/04/one-thing-i-love-about-...

Ummm... no. This was correct in 1995. Computers have evolved a long, long way since. x86 compilers do not use any complex, long-running instructions, since modern x86 processors run them very, very slowly. x86 processors have backwards compatibility with the old string operations, but this is implemented under the assumption it will almost never be used. The processor slows down tremendously when it hits one of these instructions, but the backwards compatibility uses almost zero die space (remember, a complete 80386 can fit in under 1/1000th of a Core 2 Duo), negligible power, and gives zero performance hit to the instructions that are now used.

Now, the bigger difference between instruction sets is that CISC instructions are variable-length, while RISC instructions are fixed-length. This makes the fetch/decode units for RISC processors smaller and more efficient than those for CISC processors. In practice, however, this is not where the die space and power is spent anymore, at least in desktop systems. In low-power embedded systems, this makes a bigger difference, but still not a big difference (hence the influx of x86 into high-end embedded).

Why do people who have no clue about computer architecture write articles like these?
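
To make the quote's fixed- versus variable-length point concrete, here is a toy sketch; the x86 encodings listed are real, but the program itself is only an illustration:

    #include <stdio.h>

    int main(void)
    {
        /* A few x86 instructions and the length of their machine encodings. */
        struct { const char *insn; int bytes; } x86[] = {
            { "nop",                 1 },   /* 90             */
            { "add eax, 1",          3 },   /* 83 C0 01       */
            { "mov eax, 0x12345678", 5 },   /* B8 78 56 34 12 */
        };
        const int risc_bytes = 4;           /* classic 32-bit ARM/MIPS: always 4 */

        for (unsigned i = 0; i < sizeof x86 / sizeof x86[0]; i++)
            printf("x86   %-22s %d byte(s)\n", x86[i].insn, x86[i].bytes);
        printf("RISC  %-22s %d bytes\n", "any instruction", risc_bytes);
        return 0;
    }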


Itanium isn't power efficient, for starters; it is massive in size, it is expensive, and the question remains: what can it do that x86 can't? It is a solution looking for a problem, and as much as people would love x86 to die, it hasn't done so. In all the years it has been out, Intel has failed to provide an Itanium CPU that is cheaper, more power efficient, and scalable down to the laptop level - that alone is an indication that, as much as people would love to dream up Itanium as a replacement for x86, Intel's actions have gone in the opposite direction. What I think would be a more fruitful discussion is why, in 2010, we're still dicking around with the BIOS when there is UEFI. Hopefully in my lifetime we'll finally see a move away from the awful BIOS, along with the buggy ACPI implementations.

Edited 2010-04-05 22:20 UTC

Reply Score: 2

RE[5]: Sad
by tylerdurden on Tue 6th Apr 2010 03:04 UTC in reply to "RE[4]: Sad"
tylerdurden Member since:
2009-03-17

The overhead of x86 decoding is now less than 5% in area/power (in the worst case). That is a very small price to pay for practically unlimited backwards compatibility.

BTW, Arm and Atom target two very different segments.

Reply Score: 2

RE[2]: Sad
by twitterfire on Mon 5th Apr 2010 20:19 UTC in reply to "RE: Sad"
twitterfire Member since:
2008-09-11

It would have been a much easier sell if Intel had been able to show consistent performance advantages over x86/x64, but it couldn't in the face of competition.


I was talking about the IA64 architecture, not about the actual implementations. And I think that IA64 is superior when compared to x86.

Reply Score: 2

RE[3]: Sad
by poundsmack on Mon 5th Apr 2010 20:39 UTC in reply to "RE[2]: Sad"
poundsmack Member since:
2005-07-13

"And I think that IA64 is superior when compared to x86."

So is virtually everything else... ;)

Reply Score: 3

RE[4]: Sad
by nt_jerkface on Mon 5th Apr 2010 20:56 UTC in reply to "RE[3]: Sad"
nt_jerkface Member since:
2009-08-26

But which platform is superior when it comes to providing a poor performance/cost ratio? Sparc or Itanium?

Reply Score: 2

RE[5]: Sad
by poundsmack on Mon 5th Apr 2010 22:00 UTC in reply to "RE[4]: Sad"
poundsmack Member since:
2005-07-13

Itanium. SPARC is very, very good; even Oracle believes in SPARC in a big way. It's also my personal favorite architecture, but I am trying to be as unbiased as I can be.

For the cost (and the openness factor), SPARC wins.

Reply Score: 2

RE[6]: Sad
by nt_jerkface on Tue 6th Apr 2010 01:21 UTC in reply to "RE[5]: Sad"
nt_jerkface Member since:
2009-08-26

Cost is relative to needs. You can't just say "Sparc wins" because I can show a thousand cases where building a Sparc server is a waste of money compared to x64. Actually I would take that even farther and say that for most server needs Sparc is a waste of money.

Sparc is more elegant than x64 but that doesn't matter when it comes to performance/price. I think it would be nice if RISC was more competitive at the server level but it isn't. Sparc sales are in decline and for good reason. They usually aren't worth the price unless you have code designed for RISC systems. The new Xeon 7500s will drive even deeper into RISC territory.
http://www.itproportal.com/portal/news/article/2010/4/3/intel-prese...

Look, I wish there were more investment in RISC at the server end, but the sales growth is really in x64. Maybe Oracle will mix things up a bit, but at this point the future of servers really looks like x64. But if it makes you feel any better, Sun 1U servers are really cheap on eBay right now and make great file servers.

Reply Score: 2

RE[3]: Sad
by Delgarde on Mon 5th Apr 2010 21:04 UTC in reply to "RE[2]: Sad"
Delgarde Member since:
2008-08-19

I was talking about the IA64 architecture, not about the actual implementations. And I think that IA64 is superior when compared to x86.


Maybe, but that really isn't important. *All* that matters is implementation - the best architecture is irrelevant if the implementation doesn't deliver.

IA64 is a great architecture that never delivered - but x86 is a good-enough architecture that's been delivering for some thirty years now. There's only one winner in that fight...

Reply Score: 3

RE: Sad
by mtilsted on Mon 5th Apr 2010 21:06 UTC in reply to "Sad"
mtilsted Member since:
2006-01-15

I really think it died* due to being bad, expensive hardware. There was no time during Itanium's lifetime when it offered a better price/performance ratio for any 1P/2P servers (which are >95% of all servers).

I mean, Linux did have good support for Itanium, but there was no reason to move any Linux servers to Itanium even when all the software was supported. (And most servers just need Apache/PHP, Java, MySQL/PostgreSQL, which all run fine on Itanium.)

So no, it was not just a lack of software. Even people with full software support did not move.

*Insert Monty Python parrot joke here

Reply Score: 2

RE[2]: Sad
by Tuishimi on Mon 5th Apr 2010 22:05 UTC in reply to "RE: Sad"
Tuishimi Member since:
2005-07-06

It's not dead. It's sleeping!

Reply Score: 2

RE[3]: Sad
by Doc Pain on Tue 6th Apr 2010 04:28 UTC in reply to "RE[2]: Sad"
Doc Pain Member since:
2006-10-08

Wakey wakey!!! :-)

Reply Score: 2

RE: Sad
by toast88 on Mon 5th Apr 2010 21:20 UTC in reply to "Sad"
toast88 Member since:
2009-09-23

It's sad to see IA64 dying. It was intended as a replacement for the old x86, but due to stubbornness and inertia on the part of software developers and users, it never made it.

I don't think it was ever supposed to be a replacement for the x86 platform; if it had been, Intel would have pushed it into the market much harder.

Itanium was mainly introduced to compete with other high-end non-x86 platforms like MIPS, Alpha, PA-RISC and so on. Now that Itanium has pushed all of them out of the market, Intel can abandon it and is left with a cleaned-up processor market.

Intel has the same attitude towards backwards compatibility as Microsoft. How else do you explain that even the latest x86 processors (I don't know about amd64, though) still have that much-hated A20 gate?
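
For anyone who hasn't run into it: the A20 gate is a relic of the 8086's 1 MB address wrap-around, and boot code still has to switch it on before all of memory is reachable. A minimal sketch of the classic "fast A20" enable via System Control Port A (0x92) - one of several methods - written as freestanding C with GCC-style inline asm and intended for boot-loader context, not user space:

    static inline unsigned char inb(unsigned short port)
    {
        unsigned char v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(unsigned short port, unsigned char v)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    static void enable_a20_fast(void)
    {
        unsigned char ctl = inb(0x92);              /* read current state      */
        if (!(ctl & 0x02))                          /* bit 1 = A20 enable      */
            /* set the A20 bit, and make sure bit 0 (fast reset) stays clear */
            outb(0x92, (unsigned char)((ctl | 0x02) & ~0x01));
    }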

Adrian

Reply Score: 2

RE[2]: Sad
by bhtooefr on Mon 5th Apr 2010 22:49 UTC in reply to "RE: Sad"
bhtooefr Member since:
2009-02-19

Although it didn't push SPARC or POWER out of the market...

(Then again, AMD64 did that.)

Reply Score: 1

RE[2]: Sad
by nt_jerkface on Tue 6th Apr 2010 01:41 UTC in reply to "RE: Sad"
nt_jerkface Member since:
2009-08-26


I don't think it was ever supposed to be a replacement for the x86 platform; if it had been, Intel would have pushed it into the market much harder.


Well, part of the original appeal of moving away from x86 was that Intel could sell chips that AMD couldn't clone. AMD threw a monkey wrench into that plan with x64, which kept backwards compatibility and addressed the main shortcoming of x86 on the server: 32-bit addressing. So Intel had to build its own x64 chips to remain competitive, which then cut into its own Itanium sales.

Reply Score: 2

RE[3]: Sad
by smitty on Tue 6th Apr 2010 02:15 UTC in reply to "RE[2]: Sad"
smitty Member since:
2005-10-13

Well, part of the original appeal of moving away from x86 was that Intel could sell chips that AMD couldn't clone. AMD threw a monkey wrench into that plan with x64, which kept backwards compatibility and addressed the main shortcoming of x86 on the server: 32-bit addressing. So Intel had to build its own x64 chips to remain competitive, which then cut into its own Itanium sales.

I think that's an often overlooked point. Intel really wanted to freeze AMD out of the market entirely by moving everyone to an instruction set that AMD couldn't legally copy. That opened up a window for AMD, though, to come up with an improved x86 architecture while Intel was still trying to get IA64 right, and the success of the original Athlon64 showed that there wasn't any real benefit to leaving x86, at least for the majority of the market.

Edited 2010-04-06 02:15 UTC

Reply Score: 2

RE[4]: Sad
by tylerdurden on Tue 6th Apr 2010 03:10 UTC in reply to "RE[3]: Sad"
tylerdurden Member since:
2009-03-17

No, Intel wanted to neutralize the high-end RISC processors in the datacenter.

Given that today you can't buy any server platform designed around Alpha, PA-RISC, or MIPS, and SPARC is on life support, I'd wager Intel has succeeded.

As usual, a lot of people on this site tend to equate their particular opinion (in most cases very detached from the reality of the field) with fact.

Reply Score: 2

RE[5]: Sad
by lemur2 on Tue 6th Apr 2010 13:10 UTC in reply to "RE[4]: Sad"
lemur2 Member since:
2007-02-17

No, Intel wanted to neutralize the high-end RISC processors in the datacenter.

Given that today you can't buy any server platform designed around Alpha, PA-RISC, or MIPS, and SPARC is on life support, I'd wager Intel has succeeded.

As usual, a lot of people on this site tend to equate their particular opinion (in most cases very detached from the reality of the field) with fact.


An interesting approach might be found in clusters.

http://www.beowulf.org/

Take something like this:
http://cs.boisestate.edu/~amit/research/beowulf/
or this:
http://lis.gsfc.nasa.gov/Documentation/Documents/cluster.shtml

and build it instead out of thousands of tiny ARM SoCs. Put a smallish hard disk with each CPU, say 100GB or so. Maybe a "headless smartbook" type of configuration.

Low-end RISC processors in the datacentre, but a great many of them (some rough numbers are sketched after the list below).

- Fault tolerant.
- Easy repair by replacement of nodes, possibly hot replacement (self-healing).
- Redundant, distributed storage, but still many terabytes of it.
- Low cost.
- Low power.
- Significant performance.
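
Some back-of-the-envelope numbers for the cluster described above; the node count, per-node wattage, and per-node disk size below are assumptions picked for illustration, not figures for any real board:

    #include <stdio.h>

    int main(void)
    {
        const int    nodes        = 4096;    /* "thousands of tiny ARM SoCs"   */
        const double watts_each   = 2.0;     /* SoC plus a small disk, assumed */
        const double disk_gb_each = 100.0;   /* "say 100GB or so" per node     */

        printf("aggregate storage: %.1f TB\n", nodes * disk_gb_each / 1000.0);
        printf("aggregate power:   %.1f kW\n", nodes * watts_each / 1000.0);
        printf("nodes to re-seat if 1%% fail: %d\n", nodes / 100);
        return 0;
    }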

Edited 2010-04-06 13:18 UTC

Reply Score: 2

RE: Sad
by tylerdurden on Tue 6th Apr 2010 03:02 UTC in reply to "Sad"
tylerdurden Member since:
2009-03-17

What are x86's restrictions, exactly?

Reply Score: 2

RE: Sad
by bnolsen on Tue 6th Apr 2010 12:26 UTC in reply to "Sad"
bnolsen Member since:
2006-01-06

At the time it seemed like a market grab by Intel. Instead of adding 64-bit to x86, they decided to release a brand new chip with 64-bit support and charge lots of $$$$$ for an architecture they fully controlled.

AMD did what Intel should have done, simply released AMD64, and pretty much owned the x86 server market for several years.

Finally the deprecated platform is dying.

Reply Score: 2

Itanium, a platform for professionals
by reez on Mon 5th Apr 2010 19:04 UTC
reez
Member since:
2006-06-28

So Itanium becomes a platform for professionals only?

Don't take it seriously, it's just a bad joke ;)

Reply Score: 2

sergio Member since:
2005-07-06

When nobody wants your overpriced, outdated and awkward technology... call it uber ultra Pro Enterprise. ;)

It worked for AIX... why not Itanium!? ;)

Reply Score: 0

sad indeed
by poundsmack on Mon 5th Apr 2010 20:06 UTC
poundsmack
Member since:
2005-07-13

At least the last OS to support it is a great, stable one from MS. I am one of the few who like the Itanium processor, and not just because I can dual-boot OpenVMS and Windows Server 2008 R2. If I recall correctly, MS only signed on to release operating systems for Intel's Itanium for a set number of years anyway.

Reply Score: 2

RE: sad indeed
by Anachronda on Tue 6th Apr 2010 18:00 UTC in reply to "sad indeed"
Anachronda Member since:
2007-04-18

Good thing DEC killed Alpha so they wouldn't be stuck on a niche architecture only used for VMS and a proprietary Unix, eh?

Bitter? Me? Nah!

Reply Score: 0

Itanium and HP-UX
by james_parker on Mon 5th Apr 2010 21:26 UTC
james_parker
Member since:
2005-06-29

I wonder if HP will change its strategy for HP-UX, since it has cancelled PA-RISC and made Itanium its sole remaining HP-UX platform.

Given that HP has completely outsourced its HP-UX file system software to Veritas, I've been expecting HP to find a strategy to discontinue HP-UX, port its system administration tools (which are generally considered "best in class") to Solaris and/or AIX, and license them as a third-party enhancement (thus getting out of the OS business altogether).

If Itanium continues to lose support this seems even more likely.

Reply Score: 1

RE: Itanium and HP-UX
by poundsmack on Mon 5th Apr 2010 22:04 UTC in reply to "Itanium and HP-UX"
poundsmack Member since:
2005-07-13

I could see HP picking up Solaris. AIX would be a longggggggg shot and I don't think IBM has any plans to let anyone but IBM mess with it.

HP-UX is in support mode (yes, new features are being added, some of them awesome, but it's going to go the way of IRIX).

OpenVMS might not be around much longer either, except for fixes based on customer request.

Don't get me wrong, I REALLY hope I am wrong here (at least about OpenVMS; I couldn't care less about HP-UX), but that looks to be where things are heading...

Reply Score: 2

RE: Itanium and HP-UX
by sergio on Mon 5th Apr 2010 22:25 UTC in reply to "Itanium and HP-UX"
sergio Member since:
2005-07-06

That would be a really cool idea, especially for Solaris (and maybe Linux). AIX has smit and it's pretty, pretty good. xD

BTW, I'll be happy if they kill HP-UX altogether... c'mon HP, you can't sell that thing anymore, it's unethical. I feel guilty when I see our clients paying uber-expensive support and licences for that sh*t (and I can't say a word about it). You even have to pay to extend a filesystem... wtf

Reply Score: 1

RE[2]: Itanium and HP-UX
by Delgarde on Mon 5th Apr 2010 22:37 UTC in reply to "RE: Itanium and HP-UX"
Delgarde Member since:
2008-08-19

BTW, I'll be happy if they kill HP-UX altogether...


Amen to that. I can't speak for its capabilities at the kernel level, but the userspace is appalling - not just compared to Linux (i.e. GNU utils), but compared to pretty much any other Unix variant I've dealt with.

Reply Score: 2

RE[3]: Itanium and HP-UX
by PlatformAgnostic on Tue 6th Apr 2010 04:58 UTC in reply to "RE[2]: Itanium and HP-UX"
PlatformAgnostic Member since:
2006-01-02

Apparently its kernel level capabilities are quite good, at least at getting out of Oracle's way. It's long been a leader in database performance.

Reply Score: 2

RE: Itanium and HP-UX
by rjamorim on Tue 6th Apr 2010 02:37 UTC in reply to "Itanium and HP-UX"
rjamorim Member since:
2005-12-05

...and port their system administration tools (which are generally considered "best in class")...


lolwut? Are you on drugs? Being an HP-UX sysadmin is absolutely nightmarish! Their tools are buggy, weird, arcane, and unreliable. SAM is a joke, and other tools are not much better than that. Everything reeks of early-80s-UNIX, even on 11i.

Reply Score: 2

Who used it with Windows anyway
by henno on Mon 5th Apr 2010 21:37 UTC
henno
Member since:
2009-06-25

Maybe the reason is that it did not sell. You can get IA64 with perfectly fine Linux or Unix support. I think the target audience was using it more with those OSes than with Windows... It'll live on a while longer!

Reply Score: 1

tylerdurden Member since:
2009-03-17

... don't forget about OpenVMS ;-)

Reply Score: 2

Itanium...
by Lazarus on Mon 5th Apr 2010 22:30 UTC
Lazarus
Member since:
2005-08-10

It was EPIC :-P

I'll be here all week.

Reply Score: 2

Comment by Ravyne
by Ravyne on Tue 6th Apr 2010 00:38 UTC
Ravyne
Member since:
2006-01-08

I thought Itanium was still producing strong performance in certain scientific workloads though? I wonder if Windows HPC will continue to support it for some time -- it may not be very good for server workloads, but the only way I see Microsoft pulling out entirely is if Intel itself is pulling out of Itanium.

Then again, nVidia and, to a lesser extent, AMD are gunning for the scientific processing market, and given the huge performance benefits GPUs exhibit for the types of jobs they're good at, Itanium may soon find itself in an unsustainably small niche.

Itanium is an interesting technology, and an even more interesting approach to breaking free of x86 compatibility, but the technology, particularly on the compiler side, just isn't there in a strong enough way to make Itanium the clear win it needed to be if it was going to have any chance of supplanting x86.

Sparc failed, MIPS failed, Alpha failed (even with a huge performance advantage at the time), PowerPC failed (even after a good run), and now Itanium has seemingly failed.

I strongly believe that ARM has a real shot -- probably the best shot any competing architecture has had -- if they keep making inroads from the mobile/embedded/low-power space, and don't make the mistake of trying to compete with Intel in the desktop/mainstream market too soon (on the other hand, I'd love to see some snappy ARM-based netbooks/nettops/STBs and even thin-and-light laptops right now.)

Reply Score: 0

RE: Comment by Ravyne
by bhtooefr on Tue 6th Apr 2010 02:06 UTC in reply to "Comment by Ravyne"
bhtooefr Member since:
2009-02-19

IMO, ARM's best chance was 1987.

Every year since then, their chances have gotten worse, until 2007's netbook revolution. Even then, ARM can't truly challenge x86 - the ARM netbooks that are actually coming to market appear to be glorified iPads, essentially, with real keyboards.

It'll take a massive shift in the software most people run for ARM to stand a chance. Two ways that can happen: mass migration to open source, or Microsoft creating a fat binary format, making all of their compilers build for ARM, x86, and AMD64, only giving Windows Logo approval to applications and drivers that are compiled for all three architectures (unless they're not applicable to certain architectures - for example, I wouldn't expect drivers for an ATI northbridge for an AMD CPU to be compiled for ARM,) AND supporting this for 10 years without expecting ANY return on investment. (However, the way Microsoft does Windows ports, they may not need to do the funding. See Alpha - DEC and Compaq funded that.)

Then, an ARM port of Windows MIGHT take off.
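
For what it's worth, a fat-binary container along those lines is conceptually tiny: a header that indexes one image per architecture, loosely in the spirit of Apple's Mach-O fat header. The sketch below is hypothetical - it is not any real Windows format, and the names and layout are invented for illustration:

    #include <stdint.h>

    enum fat_arch { FAT_ARCH_X86 = 1, FAT_ARCH_AMD64 = 2, FAT_ARCH_ARM = 3 };

    struct fat_slice {
        uint32_t arch;       /* one of enum fat_arch                      */
        uint64_t offset;     /* byte offset of this architecture's image  */
        uint64_t size;       /* length of that image in bytes             */
    };

    struct fat_header {
        uint32_t magic;        /* identifies the container format          */
        uint32_t slice_count;  /* number of fat_slice entries that follow  */
        /* struct fat_slice slices[slice_count]; then the images themselves */
    };

    /* A loader would read fat_header, scan the slice table for the running
     * CPU's architecture, and map only that image; the other slices are dead
     * weight on disk but cost nothing at run time. */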

Alpha almost did it in 6 years, but it was significantly faster than x86 at the time (and could EMULATE x86 as fast as the fastest x86s could run natively.) The only thing that killed it was Compaq pulling the plug on it in favor of Itanium, and killing the Windows 2000 port just before it was finished. (Of course, Windows on Alpha was 32-bit, despite Alpha being a 64-bit CPU. There was ALSO a 64-bit port of Windows 2000 in the works... and Microsoft continued that on their own internally, as they needed Windows to be 64-bit clean for the Itanium port, and the work on finishing the 64-bit Alpha port would be valuable for the Itanium port.)

Reply Score: 1

RE[2]: Comment by Ravyne
by tylerdurden on Tue 6th Apr 2010 03:12 UTC in reply to "RE: Comment by Ravyne"
tylerdurden Member since:
2009-03-17

ARM is the best selling processor right now.

ARM is exactly in the market segment it wants to be in; why on earth would it move over to a space in which it has to compete face to face with Intel?

I don't know if I misunderstood your post, but if you think ARM has missed any opportunity for unmitigated success, you haven't paid attention... at all.

Reply Score: 2

RE[3]: Comment by Ravyne
by bhtooefr on Tue 6th Apr 2010 04:17 UTC in reply to "RE[2]: Comment by Ravyne"
bhtooefr Member since:
2009-02-19

I'm well aware that ARM is doing quite well, however, they have announced an attack on Intel: http://www.pcpro.co.uk/news/351619/arm-launches-attack-on-intels-ne...

And, Intel's going after the high-end ARMs: http://news.cnet.com/Intel-has-ARM-in-its-crosshairs/2100-1006_3-62...

ARM has designs that are within striking distance of Atom, and Atom is within striking distance (except on power) of ARM's high-end smartphone designs.

ARM needs to push up, to displace Atom altogether, just to STAY where they are. If they let Intel keep coming down, my prediction is Intel will eventually start digging up old designs, die-shrinking them, and starting to compete with ARM11 and Cortex-A5, and then ARM7 and Cortex-M3. If Intel gets the ARM7 market, with, say, a massively die-shrunk 386, ARM is dead.

As for the comment about 1987... had Acorn pushed the Archimedes worldwide, HARD, directly against IBM and Compaq, things would've been very, very interesting. The RISC vs. x86 battle royale would've happened then and there, rather than the PowerPC vs. x86 long drawn out battle that ended with PowerPC fizzling out, because Acorn could've massively undercut the Intel machines on price, and beat them on performance (and graphics capability, too.)

Edited 2010-04-06 04:20 UTC

Reply Score: 1

RE[4]: Comment by Ravyne
by bnolsen on Tue 6th Apr 2010 12:32 UTC in reply to "RE[3]: Comment by Ravyne"
bnolsen Member since:
2006-01-06

If Moore's law had held regarding die shrinks and clock rate bumps, Intel could have won. It seems at this point in time that Intel's manufacturing lead over other foundries isn't what it once was, so ARM stays a serious contender at the low end. Intel would have to find some way to make the die-space- and power-hungry x86 decoder just *go away* to make it work.

Reply Score: 2

RE[5]: Comment by Ravyne
by tylerdurden on Tue 6th Apr 2010 16:07 UTC in reply to "RE[4]: Comment by Ravyne"
tylerdurden Member since:
2009-03-17

Question: Do you even know the overhead that an x86 decoder induces?

A lot of you do not seem to understand the difference between ISA and microarchitecture, or where most of the power consumption comes from in a modern microprocessor.

Also, the workloads and applications targeted by ARM and x86 are for the most part completely different.

If you were to scale up an ARM core to offer single-thread performance similar to that of a modern x86 core, chances are you would end up with a similar power envelope.

ARM understands that, and that is why it will not target the same areas where x86 is king right now - among other things, because it doesn't have to. It pretty much dominates the embedded market.

Edited 2010-04-06 16:14 UTC

Reply Score: 2

RE[6]: Comment by Ravyne
by Zbigniew on Tue 6th Apr 2010 21:46 UTC in reply to "RE[5]: Comment by Ravyne"
Zbigniew Member since:
2008-08-28

If you were to scale up an ARM core to offer single-thread performance similar to that of a modern x86 core, chances are you would end up with a similar power envelope.

1. I don't think we can simply write "chances are"; consider, for example, the power consumption of processors made by VIA ("silent power") versus Pentiums of equal computing power.

2. ...and even if "chances are" (which I doubt), you were right to write "if". Why? Because I don't need that much. For everyday work I'm using an old Pentium III/750 (and I'm not even fully using my 750 MB of RAM), so a Cortex-A9 will still be much, much more than I need. If the news from ARM is correct, such a 4-core A9 will have a power dissipation of about 1 W, so it will need neither a fan nor even a heatsink...

ARM understands that, and that is why it will not target the same areas where x86 is king right now.

Yes, they will target them - as someone already wrote. And as I wrote in the past: I'd like to buy an ATX motherboard fitted with an ARM chip (but not a BeagleBoard, and not that expensive "development board" from ARM).

Reply Score: 1

RE[7]: Comment by Ravyne
by tylerdurden on Tue 6th Apr 2010 22:27 UTC in reply to "RE[6]: Comment by Ravyne"
tylerdurden Member since:
2009-03-17

Again, by the time you have added the multiple functional units, the aggressive out-of-order dispatch unit, and the massive branch predictors needed to match the single-thread performance of a Core i7... an ARM core will end up with a power envelope similar to the i7's on the same fab process. In fact, Intel will probably have the better envelope in that case, since it controls both the microarchitecture and the fab process, which ARM does not.

A lot of you keep focusing on the ISA, which honestly has not been an issue for the better part of a decade. Right now the power/area budget overhead associated with x86 decoding is less than 5%. Unless some of you truly think that ARM's ISA has magical qualities that allow the microarchitecture executing it to completely ignore the laws of physics.

ARM is not stupid enough to target the high-performance market. Period. In fact, none of ARM's partners is remotely interested in that space, and ultimately they are the ones who drive ARM's development and targeting.

Edited 2010-04-06 22:29 UTC

Reply Score: 2

RE[4]: Comment by Ravyne
by tylerdurden on Tue 6th Apr 2010 16:13 UTC in reply to "RE[3]: Comment by Ravyne"
tylerdurden Member since:
2009-03-17

... and if my grandma had grown balls, we would have called her grandpa.

How exactly could Acorn have pushed a product hard against IBM/Intel/Microsoft... when it was a tiny British company, barely getting by?

Besides, you don't seem to have much historical context. There were previous hard pushes to use RISC to fight x86 directly: for example, the ACE consortium, which was based around MIPS processors running NT and Unix. It had large players like Compaq, Microsoft, Olivetti, Digital, etc., and it still failed. So you expect a two-bit British company to have succeeded against a giant like Intel?

Reply Score: 1

RE[5]: Comment by Ravyne
by bhtooefr on Tue 6th Apr 2010 16:21 UTC in reply to "RE[4]: Comment by Ravyne"
bhtooefr Member since:
2009-02-19

ACE was later, though, and x86 had penetrated further in 1993 than it had in 1987.

But, of course, at the time, Olivetti owned Acorn, and could've spent their own money pushing their subsidiary's products as world-beaters.

Edited 2010-04-06 16:21 UTC

Reply Score: 1

RE[6]: Comment by Ravyne
by tylerdurden on Tue 6th Apr 2010 22:18 UTC in reply to "RE[5]: Comment by Ravyne"
tylerdurden Member since:
2009-03-17

ACE started being drafted in '89/'90. And before that there were plenty of RISC competitors: MIPS, SPARC, and Motorola's 88000. All were in vogue in the late 80s and offered far better performance than ARM. Even Intel had its own RISC CPU, the i860, at that time.

There is more to a computing ecosystem than an ISA, which is what you seem unwilling to accept. I consider these "what if" exercises rather silly, IMHO.

Reply Score: 1

RE[7]: Comment by Ravyne
by bhtooefr on Tue 6th Apr 2010 22:25 UTC in reply to "RE[6]: Comment by Ravyne"
bhtooefr Member since:
2009-02-19

One HUGE difference between your MIPS, SPARC, and 88k example, and ARM is...

Those three architectures were EXPENSIVE. A MIPS R2000, SPARC MB86900, or 88k box (and 88k was a flop even in the workstation market, FWIW) would have cost significantly more than a 386 desktop of possibly equivalent performance in 1987.

ARM2 was CHEAP. ARM sharply undercut 386s at equivalent performance.

And, I'm not even talking about the ISA. I'm talking about actual silicon, and either actual machines using that silicon, or estimates thereof. In the case of ARM vs. x86, I most definitely am using actual machines. Go look up, in 1987, the price of an Acorn Archimedes 440. Now go look up, again in 1987, the price of a Compaq Deskpro 386/25 with 4 megs of RAM and a hard drive (I think 40 megs?)

Edited 2010-04-06 22:27 UTC

Reply Score: 2

RE: Comment by Ravyne
by ShadesFox on Tue 6th Apr 2010 15:14 UTC in reply to "Comment by Ravyne"
ShadesFox Member since:
2006-10-01

Itanium is nowhere to be seen in scientific workloads. And GPUs are seen as a curious gimmick right now. With the number of cores packed onto each CPU exploding, things do not look promising for GPUs.

Edited 2010-04-06 15:14 UTC

Reply Score: 1

RE: Comment by Ravyne
by mojmir on Wed 7th Apr 2010 08:27 UTC in reply to "Comment by Ravyne"
mojmir Member since:
2009-01-05

I thought Itanium was still producing strong performance in certain scientific workloads though?

I doubt it... maybe that was true in the past, but there's cheaper horsepower available today.

Itanium is an interesting technology, and...

... as well as a historically interesting screw-up :] Really.

Sparc failed, MIPS failed, Alpha failed (even with a huge performance advantage at the time), PowerPC failed (even after a good run), and now Itanium has seemingly failed.

I would not call SPARC a failure (yet), as Sunacle seems to believe in it; MIPS has been reincarnated in China, and PowerPC is in fact very successful: there is a PPC in every PS3 and a tri-core one in every Xbox 360... that makes dozens of millions of PPC CPUs sold. And Power7 is coming.

Reply Score: 1