posted by Federico Biancuzzi on Tue 27th Mar 2007 15:41 UTC

And what about the performance-per-watt of consumer GPUs compared to those included in the PS3 and Xbox 360?

Jon Stokes: That's hard to estimate, I think, because "consumer GPUs" is such a broad category. I'm sure that for high-end GPUs that are comparable in horsepower to those in the PS3 and Xbox 360, the performance/watt numbers are also comparable.

Why do you think this new generation of consoles abandoned the x86 instruction set in favor of RISC CPUs?

Jon Stokes: This is a hard one to really pin down. Honestly, I think that IBM just did a great job pitching the console makers on its chip design competency. I don't think it had much to do with the ISA, and I also think that IBM made the sale on a case-by-case basis, appealing to different priorities for each of Sony, MS, and Nintendo.

With Nintendo, it was about the fact that IBM had already proven they could deliver a console product with the GameCube. Nintendo was clearly pleased with the GC, and in fact the Wii is basically just a GC with a higher clock speed and a new controller.

With Sony, IBM was able to sell them on this exotic workstation chip. Sony likes to really push the envelope with their consoles, as evidenced by both the PS2 (really exotic and hard to program when it first came out) and the PS3. So IBM was able to appeal to their desire to have something radically different and potentially more powerful than everyone else.

As for MS, I have no idea how they pulled it off. I think that if the Xbox's successor had used a dual-core Intel x86 chip or even an Opteron, everyone would've been better off. This is especially true if Intel could've found a way to get a Core 2 Duo, with its increased vector processing capabilities, out the door in time for the console launch. Of course, even Core 2 Duo can't really stand up to the Xenon's VMX-128 units, especially given VMX's superiority to the SSE family of vector instructions, so Xenon does have that edge.
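
To make the vector-instruction gap Stokes mentions concrete: AltiVec/VMX has a single fused multiply-add instruction and 32 architectural vector registers (extended to 128 in Xenon's VMX-128), while the SSE of that era had no fused multiply-add and only 8 XMM registers (16 in 64-bit mode). Here is a minimal sketch in C intrinsics (function names are illustrative); compile the first half with -maltivec on PowerPC, the second with -msse on x86:

    #ifdef __ALTIVEC__
    #include <altivec.h>

    /* VMX/AltiVec: a*b + c in one fused instruction (vmaddfp) */
    vector float madd_vmx(vector float a, vector float b, vector float c)
    {
        return vec_madd(a, b, c);
    }
    #endif

    #ifdef __SSE__
    #include <xmmintrin.h>

    /* SSE: the same a*b + c takes two dependent instructions
       (mulps, then addps), lengthening the critical path */
    __m128 madd_sse(__m128 a, __m128 b, __m128 c)
    {
        return _mm_add_ps(_mm_mul_ps(a, b), c);
    }
    #endif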

But regardless of the SSE vs. VMX (or AltiVec) issue, I'm not convinced that letting IBM design a custom PPC part for Xbox 360 was the best move, because now MS has to support two ISAs in-house, and I don't think it really buys them much extra horsepower. But I acknowledge that I may be entirely wrong on this, and in the end you're better off asking a game developer who codes for both platforms which one he'd rather have.

It seems that AMD (+ ATI) is working on merging the CPU and GPU. At the same time, some projects, such as BrookGPU, try to exploit GPU power to crunch numbers. What is your point of view on the evolution of CPUs and GPUs?

Jon Stokes: I don't really have much of an idea where this is headed right now. I don't think anyone does. I mean, you could do a coarse-grained merging, like AMD says they want to do with a GPU core and a CPU core on one die, but I'm not convinced that this is really the best way to attack this problem. Ultimately, a "merged CPU/GPU" is probably going to be a NUMA, system-on-a-chip (SoC), heterogeneous multicore design, much like Cell.
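
To illustrate the data-parallel model that projects like BrookGPU expose: the programmer writes a small kernel with no dependences between elements, and the runtime is free to map each element-wise invocation onto one of the GPU's many shader units. Below is a plain-C sketch of the idea, not actual Brook syntax, and the names are illustrative:

    /* The kernel: a pure function of its inputs with no coupling
       between elements, so every invocation can run in parallel. */
    static float saxpy_kernel(float alpha, float x, float y)
    {
        return alpha * x + y;
    }

    /* On a CPU this runs as a serial loop; a stream runtime would
       instead launch one kernel invocation per element across the
       GPU's shader units. */
    void saxpy_stream(float alpha, const float *x, const float *y,
                      float *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = saxpy_kernel(alpha, x[i], y[i]);
    }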

I also think it's possible to overhype the idea of merging these two components. Regular old per-thread performance on serially dependent code with low levels of task and data parallelism will remain important for the vast majority of computing workloads from here on out, so a lot of this talk of high degrees of task-level parallelism (i.e. homogeneous multicore) and data-level parallelism (i.e. GPUs and heterogeneous multicore, like Cell) is really about the high-performance computing market, at least in the near term.
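
The contrast with serially dependent code is easy to see. In a loop like the hypothetical one below, each iteration consumes the previous iteration's result, so neither extra cores nor GPU-style data parallelism helps; only faster per-thread execution does:

    /* Loop-carried dependence: x feeds back into itself, so
       iteration i+1 cannot start until iteration i finishes.
       This is the kind of code that still lives or dies on
       per-thread performance. */
    float iterate(float x, int n)
    {
        for (int i = 0; i < n; i++)
            x = x * x + 0.25f;
        return x;
    }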

At any rate, right now we're all sort of in a "wait and see" mode with respect to a lot of this stuff, because CPU/GPU and some of the other ideas out there right now look a lot like solutions in search of a problem.
