Linked by Thom Holwerda on Fri 29th Dec 2006 21:35 UTC
IBM Judging by details revealed in a chip conference agenda, the clock frequency race isn't over yet. IBM's Power6 processor will be able to exceed 5 gigahertz in a high-performance mode, and the second-generation Cell Broadband Engine processor from IBM, Sony and Toshiba will run at 6GHz, according to the program for the International Solid State Circuits Conference that begins February 11 in San Francisco.
Thread beginning with comment 197213
RE: Uh-Oh
by rayiner on Sat 30th Dec 2006 04:34 UTC in reply to "Uh-Oh"

You think Apple didn't already know about Power6? Power6 represents the application of the advantages of the POWER system architecture to the design of the CPU. The chip is designed around an insane amount of memory and I/O bandwidth, huge caches, etc., to compensate for lower IPC from a simpler core. A Power6 CPU is going to be several times bigger than an Intel CPU, an order of magnitude more expensive, and require massive supporting infrastructure. The damn thing is going to require more than a dozen channels of DDR2-667 memory just to provide full memory bandwidth!
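
A rough back-of-the-envelope check on that bandwidth claim (the 12-channel count and 64-bit channel width here are illustrative assumptions for the math, not IBM's published Power6 spec):

```python
# Rough peak-bandwidth estimate for a hypothetical 12-channel DDR2-667 setup.
# Channel count and width are illustrative assumptions, not IBM's published figures.
CHANNELS = 12
TRANSFERS_PER_SEC = 667e6    # DDR2-667: ~667 million transfers per second
BYTES_PER_TRANSFER = 8       # one 64-bit-wide channel moves 8 bytes per transfer

per_channel_gbs = TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
total_gbs = CHANNELS * per_channel_gbs
print(f"{per_channel_gbs:.2f} GB/s per channel, {total_gbs:.1f} GB/s aggregate")
```

That works out to roughly 64 GB/s aggregate, which in 2006 terms is indeed server-class plumbing.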

It's a huge honking server chip designed for huge honking servers. Apple doesn't sell huge honking servers, what it sells are laptop and desktop machines that need high performance with lower power dissipation and with very cheap supporting infrastructure. Intel's Core provides that, in a way no Power6 derivative is going to.

Power6 is what you get when you tell the designers they can have 350mm^2 of die space, 6000 contact pads, a huge external cache, industrial-strength cooling, a multi-thousand-dollar per-piece budget, not to mention a dozen channels of RAM and fat inter-CPU links on an extremely expensive multi-layer MCM. Core 2 is what you get when you tell designers that 150mm^2 of die space is pushing it, that they have to fit into an existing 775-pin socket, and that the core has to scale from a 4 lb laptop to a 50 lb workstation, from a 5W ULV chip to an 80W workstation chip, and from a $500 desktop to a $50,000 server. Apple needs the latter, not the former.

Reply Parent Score: 5

RE[2]: Uh-Oh
by raynevandunem on Sat 30th Dec 2006 07:58 in reply to "RE: Uh-Oh"

I'm a tad ignorant concerning the RISC vs. CISC debate.

If Apple is moving to x86 for their desktop systems (they have server systems, though) just like all their other desktop competitors, does this mean that RISC processors (SPARC, ARM, POWER/PPC/Cell, etc.) are meant more for every other hardware system (embedded, server, console, etc.) except for the desktop?

If so, is it because of the processors being RISC, or because of the maker (IBM, Sun, etc.) being geared toward explicitly non-desktop systems from the get-go?

I read that Apple left PowerPC because IBM couldn't push the processor past a certain clock speed for their desktop systems, while Intel could for Dell and its other clients.

And does this mean that one can't use a...say...SPARC processor for their desktop system and still be as productive as a counterpart system with x86?

I'm just wondering about this because Apple, prior to the big switch, was pimping their "Think Different" image even in their hardware architecture in a "PPC is us, x86 is them" kind of way. The infamous bunny suit commercial comes to mind.

In fact, that image was impressed into the Apple brand so deeply and for so long that it became part of the very difference between a Mac and a PC ("G4" has a certain "rad" essence to it).

This is why the diehard Mac fans have been so wary about the switch. These are the people who bought the candy-colored iBooks and iMacs with the (today much-derided) Mac OS 8 and 9 on them. These are the people who bought the uber-expensive G4 Cube and the terribly glassy Bondi Blue PowerMacs.

Fercrissakes, these were the folks with the "X" tats on their chests in "The Cult of Mac" (that was a good book, btw)!

The different processor was Apple's ultimate symbol of being different from the rest for the longest time, more so than the operating system.

What does this switch mean for all the other processor architectures and makers out there, at least those who may want to target desktop systems? Are there any others left?

Reply Parent Score: 1

RE[3]: Uh-Oh
by rayiner on Sat 30th Dec 2006 08:39 in reply to "RE[2]: Uh-Oh"

It is very important in this discussion to separate instruction sets from microarchitectures. The instruction set is how the processor exposes operations to software. The microarchitecture is how those operations are implemented. RISC and CISC are general design principles of the instruction set, not the microarchitecture.

Back when RISC chips were introduced, this distinction was less meaningful. CPUs implemented instruction sets in a very direct way, so the instruction set largely dictated what the microarchitecture looked like. Since RISC was created, a great deal of complexity has moved into the microarchitecture. Things like superscalar execution, pipelining, and out-of-order execution all have a major effect on the microarchitecture, and are affected only indirectly by the instruction set (and even more indirectly by whether the chip is RISC or CISC).

You also have to remember that Core 2 CISC is not like 8086 CISC (and PowerPC RISC never was that RISC anyway). Modern x86s are in some ways RISC, because they translate x86 code to internal RISC operations. In other ways, they are like souped-up CISC, because they take advantage of things like memory operands to improve performance.
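
A toy sketch of the translation described above (the instruction format and micro-op names here are invented for illustration; real x86 decoders are vastly more involved):

```python
# Toy decoder: split a CISC-style instruction with a memory operand into
# RISC-like micro-ops, roughly the way a modern x86 front-end does internally.
# The instruction tuples and micro-op names are made up for illustration.

def decode(instr):
    """Translate one CISC-style (op, dst, src) instruction into micro-ops."""
    op, dst, src = instr
    micro_ops = []
    if src.startswith("["):                 # memory operand -> emit a load uop
        addr = src.strip("[]")
        micro_ops.append(("LOAD", "tmp0", addr))
        src = "tmp0"
    micro_ops.append((op, dst, dst, src))   # register-register ALU uop
    return micro_ops

# 'add eax, [rbx]' becomes a load followed by a register-register add...
print(decode(("ADD", "eax", "[rbx]")))
# ...while 'add eax, ecx' needs no load at all.
print(decode(("ADD", "eax", "ecx")))
```

The point is that the memory access and the arithmetic get executed as separate, RISC-like operations even though the programmer-visible instruction bundled them together.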

The suitability of a particular chip for a task is directly a function only of the microarchitecture. On a complex chip like Power6 or Core 2, the microarchitecture is largely independent of the instruction set, so in this realm the question really becomes "which microarchitecture is more suitable, Power6's or Core 2's?", without consideration of x86 versus PPC or RISC versus CISC. As CPUs get simpler and microarchitectural features are removed, the instruction set increasingly drives the microarchitecture, and thus becomes a bigger deal. Simple embedded CPUs can probably save a few tenths of a watt by using an easier-to-decode RISC instruction set rather than a more complex-to-decode CISC one. At the very bottom of the ladder, you have microcontrollers, whose microarchitectures are almost completely driven by their instruction sets. Interestingly, most of these are CISC chips, because of the code-density advantage of CISC over RISC.
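
To make the code-density point concrete, compare the bytes needed for "add a value from memory to a register" under the two styles (the 2-byte x86 figure matches a short real encoding like `add eax, [rbx]`; the fixed 4-byte RISC width matches classic PowerPC/ARM, but treat both as representative rather than exact):

```python
# Code-density comparison for the operation "reg += memory[addr]".
# x86 (CISC) encodes it as one instruction with a memory operand;
# a load/store RISC needs a separate load plus a register-register add.
# Encoding sizes are representative, not exact for any particular chip.
x86_add_mem = 2             # e.g. 'add eax, [rbx]' encodes in 2 bytes
risc_load = 4               # fixed-width 4-byte load instruction
risc_add = 4                # fixed-width 4-byte add instruction

risc_total = risc_load + risc_add
print(f"x86: {x86_add_mem} bytes, RISC: {risc_total} bytes")
```

In a microcontroller with a few kilobytes of program ROM, that kind of per-operation difference adds up fast, which is why density can trump decode simplicity at the bottom of the ladder.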

At the level of Power6 versus Core 2, the instruction set doesn't play a huge role either way. Core 2 leverages x86's CISC-y memory-operand model a good bit to do some instruction-dispatching optimizations, and Power6 benefits from PPC's floating-point multiply-accumulate instruction, but essentially: Core 2 is suitable for workstations because Intel designed its microarchitecture for that role, Power6 is suitable for servers because IBM designed its microarchitecture for that role, and Cell is suitable for consoles for the same reason. All of these chips could've been designed with a different ISA without dramatically changing their performance characteristics.

Reply Parent Score: 5

RE[3]: Uh-Oh
by flywheel on Sat 30th Dec 2006 18:38 in reply to "RE[2]: Uh-Oh"

The RISC/CISC debate is a tad more complicated, since in the IBM world RISC does not strictly imply a reduced number of instructions; it also includes composite instructions. I once saw an alternative IBM acronym for RISC, but I can't remember where.

> And does this mean that one can't use a...say...SPARC processor for their desktop system and still be as productive as a counterpart system with x86?

The difference is at a low level - whether you use a SPARC or an AMD64 hardware platform doesn't matter.

Reply Parent Score: 1

RE[3]: Uh-Oh
by renox on Tue 2nd Jan 2007 13:02 in reply to "RE[2]: Uh-Oh"

> does this mean that RISC processors (SPARC, ARM, POWER/PPC/Cell, etc.) are meant more for every other hardware system (embedded, server, console, etc.) except for the desktop?

The reason why embedded CPUs are RISC is that they are new designs: every "new" CPU is RISC. Even Itanium has RISC features (on top of its VLIW characteristics): lots of registers, an orthogonal ISA, a load/store ISA, easy instruction decoding...

In the desktop space, software compatibility has proven more important than CPU performance alone: the huge number of x86 chips sold drove down their price, giving them a very good performance/price ratio even when they were slower than RISCs, so x86 won the desktop space.
As it happens, x86 chips are CISCs.

It doesn't mean that RISCs are 'bad for the desktop'; it just means that in the desktop space, software compatibility matters more than it does in the embedded space.

Reply Parent Score: 1