Linked by Thom Holwerda on Wed 1st Mar 2006 11:58 UTC, submitted by mcsimpson
Geekpatrol benchmarks Mac OS X's Rosetta performance, and concludes: "I'm impressed with Rosetta; Geekbench performance running under Rosetta is 40% to 80% of what it is running natively. Plus, running Geekbench under Rosetta is comparable to running Geekbench natively on a Power Mac G5 1.6GHz (our baseline system), at least in the single-threaded tests."
Thread beginning with comment 100533
Disruptor Member since:
2005-11-06

One of the most excellent posts I've seen on G5 vs. Core Duo. Hats off to you, rayiner - I'll save your post as soon as I hit `submit' here. Just a quick question. The two processors have a 40% gap, right? How much R&D funding and how many years of development did Intel put into its processors to achieve this, and how difficult would it be for the G processor to close that gap (if not surpass it)? I think you know the answer as well as I do. You mentioned:

"One reason why POWER5 performs much better than POWER4 on integer code is that those formation rules were tweeked heavily. "

This is what I am trying to say: with every new generation, the performance skyrockets. Really, how difficult would it be to improve RISC processors like these? My guess: `(relatively) not at all'. Sticking with a processor that is inherently hard to improve is what I find suspicious and what bothers me.

Reply Parent Score: 1

macintroll Member since:
2005-11-15

You seem to believe that PowerPC processors are RISC and x86 processors are CISC. This is not true. RISC and CISC are textbook idealizations that do not exist in pure form in any modern processor.

All advanced processor designs are "inherently hard to improve." What evidence do you have that x86 designs are inherently harder to improve than PowerPC designs? The actual rate of recent processor improvement suggests the opposite.

Reply Parent Score: 3

rayiner Member since:
2005-07-06

This is what I am trying to say: with every new generation, the performance skyrockets. Really, how difficult would it be to improve RISC processors like these? My guess: `(relatively) not at all'. Sticking with a processor that is inherently hard to improve is what I find suspicious and what bothers me.

That's the thing. x86 chips aren't "inherently hard to improve", at least not within the design envelope that most workstation/server processors occupy. These are generally aggressive out-of-order (OOO) designs with deep pipelines. Once you incur the complexity of OOO execution and a long pipeline, a few extra stages to handle ISA translation aren't that bad. Indeed, it's enough of a win that even RISC chips like the POWER4/5 are doing it now, because PowerPC isn't quite RISC-y enough (not all instructions are 2-src, 1-dst).

So once you've got a few pipeline stages devoted to translating x86 or PPC into the internal ISA, improvements to the chip are decoupled from the limitations of the ISA*. At that point, you're competing on the quality of the internal microarchitecture and the performance of the process on which the chips are fabbed. As with most things, these get better the more money you throw at them, and the x86 world simply has more money to throw at them.

Now, outside the world of highly OOO chips, things are different. A shallow-pipeline, in-order x86 simply wouldn't perform as well as a shallow-pipeline, in-order RISC. Things like Niagara would likely not be possible using x86 cores. However, for at least the foreseeable future, highly OOO chips will remain the standard for the desktop/workstation/server market.

* To be fully accurate, it's not completely decoupled. The need for the processor to be able to reconstruct the original ISA's machine state puts some limitations on the internal ISA. These limitations are fairly minor, however, and usually hit paths that are slow relative to the speed of the core (e.g. exception handling, interrupt handling, memory access, etc.). At the u-op level, x86 doesn't look much different from a plain-jane RISC with fancy memory addressing modes.
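
As a rough sketch of what that cracking step means (a toy illustration in C; the uop struct and the crack_add_mem helper are made up for this example, not taken from any real design), here is "add eax, [ebx]" being turned into two fixed-format u-ops:

/* Toy illustration: the front end "cracks" an x86 instruction with a
   memory operand into simple fixed-format internal operations (u-ops);
   the out-of-order core never sees the original ISA again.            */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD } uop_kind;

typedef struct {            /* every u-op has the same 2-src/1-dst shape */
    uop_kind kind;
    int dst, src1, src2;    /* register numbers in a flat internal file  */
} uop;

/* Hypothetical helper: crack "add eax, [ebx]" into LOAD + ADD u-ops.    */
static int crack_add_mem(int dst_reg, int addr_reg, int tmp_reg, uop out[2]) {
    out[0] = (uop){ UOP_LOAD, tmp_reg, addr_reg, -1 };      /* tmp <- [ebx]     */
    out[1] = (uop){ UOP_ADD,  dst_reg, dst_reg, tmp_reg };  /* eax <- eax + tmp */
    return 2;                                               /* u-ops emitted    */
}

int main(void) {
    uop buf[2];
    int n = crack_add_mem(/*eax*/0, /*ebx*/3, /*tmp*/32, buf);
    for (int i = 0; i < n; i++)
        printf("uop %d: kind=%d dst=r%d src1=r%d src2=r%d\n",
               i, buf[i].kind, buf[i].dst, buf[i].src1, buf[i].src2);
    return 0;
}

After this step the scheduler only ever sees 2-src/1-dst u-ops; whether they came from x86 or PowerPC no longer matters.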

Reply Parent Score: 4

nimble Member since:
2005-07-06

A shallow-pipeline, in-order x86 simply wouldn't perform as well as a shallow-pipeline, in-order RISC.

Sure? The 486 and the Pentium were shallow-pipeline, in-order designs, and they compared reasonably well to their competitors at the time, except perhaps for the tweaked-to-the-last-gate Alpha.

Of course the core itself would be quite a bit bigger, but x86's denser code makes up for that with reduced instruction-cache and memory requirements. Where x86 really falls down for embedded work is the increased power consumption that comes with the more complex core.
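
To put a rough number on the density point (a toy comparison in C; the RISC encodings are left as zeroed placeholders since only the lengths matter here): the same "add a word from memory into a register" operation is two bytes as a single x86 instruction, but eight bytes as a load plus an add on a fixed-length 32-bit RISC.

#include <stdio.h>

int main(void) {
    /* x86: "add eax, [ebx]" encodes as opcode 0x03 plus a ModRM byte: 2 bytes. */
    unsigned char x86_add_mem[2] = { 0x03, 0x03 };

    /* A fixed-length 32-bit RISC needs a load into a temporary register and
       then a register-register add: 2 instructions, 8 bytes. The actual bit
       patterns don't matter for the size comparison, so they stay zeroed.    */
    unsigned char risc_load[4] = { 0 };  /* e.g. lwz rT, 0(rB)  */
    unsigned char risc_add[4]  = { 0 };  /* e.g. add rD, rD, rT */

    printf("x86 bytes : %zu\n", sizeof x86_add_mem);
    printf("RISC bytes: %zu\n", sizeof risc_load + sizeof risc_add);
    return 0;
}

This particular case flatters x86 (on average the density gap is narrower), but that gap is what eases the instruction-cache pressure and helps pay for the bigger decoder.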

Reply Parent Score: 1