Linked by Thom Holwerda on Tue 22nd Sep 2009 21:54 UTC
Intel The Intel Developer Forum is currently in full swing, but it kicked off with a speech by Intel CEO Paul Otellini. Well, there's bad news for those of us who long for a time where lots of different architectures compete with one another, ensuring that technology is moved forward. Otellini's plans for Intel basically come down to one thing: x86 everywhere.
Thread beginning with comment 385658
gustl
Member since:
2006-01-19

I once read an analysis by a chip hardware engineer on why the x86 architecture refuses to die.

His argument went like this: RISC-like architectures need no die space for a translation layer, but more instructions have to be stored in cache.

Imagine calculating a sine: one instruction in x86, a whole algorithm in RISC. The x86's translation layer expands that one instruction into the same internal algorithm a RISC would run, but for caching and bus-transfer purposes only one compact instruction has to be considered, whereas on RISC it will usually be a much larger amount of code.

That leads to "more of the program can be stored in cache" for x86, and subsequently to faster execution. Losing some speed in the translation layer is the disadvantage of this architecture, but as with all things tech: the best compromise is "the solution", no matter how outstanding any single piece may be.

Reply Parent Score: 3

spiderman Member since:
2008-10-23

I believe it depends on the software.
You don't copy the sine routine x times. You write it once and call it with some kind of jump instruction. It should still take less space than the translation layer, because your software will never use all the routines.
Moreover, you can write better routines in software when you know what they are used for. You just need clever compilers.

Reply Parent Score: 3

torbenm Member since:
2007-04-23

I once read an analysis by a chip hardware engineer on why the x86 architecture refuses to die.

His argument went like this: RISC-like architectures need no die space for a translation layer, but more instructions have to be stored in cache.


This is only true if you compare the best CISC with the worst RISC (in terms of code density), say the Motorola 68K against MIPS. Modern x86 code is not really very compact. ARM with the Thumb-2 ISA generally has much better code density than x86. Sure, you can find examples of a single x86 instruction that needs a sequence of ARM instructions, and the converse is also true, but that is really beside the point: you need to look at the average code density over a large set of programs.

Reply Parent Score: 5

viton Member since:
2005-08-09

Imagine calculating a sine: one instruction in x86, a whole algorithm in RISC.

Wrong analogy. The x87 FPU's transcendental instructions (FSIN and friends) are very slow, and x87 is effectively off-limits under 64-bit Windows anyway. You need to code the sine the RISC way, in software, to be fast. And the code footprint there can actually be worse for x86, because of the long SSE instruction encodings and the lack of a multiply-add (MAD) instruction.

Reply Parent Score: 3

Zbigniew Member since:
2008-08-28

That leads to "more of the program can be stored in cache" for x86, and subsequently to faster execution.

Maybe, but during that one sine calculation on x86, the RISC will have filled and emptied its cache several times... There's a wrong assumption here, that x86 and RISC run at the same speed; no, RISC is much faster.

Reply Parent Score: 1