Linked by Nicholas Blachford on Wed 9th Jul 2003 16:43 UTC
This article started life when I was asked to write a comparison of x86 and PowerPC CPUs for work. We produce PowerPC based systems and are often asked why we use PowerPC CPUs instead of x86, so a comparison is rather useful. While I have had an interest in CPUs for quite some time, I had never explored this issue in any detail, so writing the document proved an interesting exercise. I thought my conclusions would be of interest to OSNews readers, so I've done more research and written this new, rather more detailed article. This article is concerned with the technical differences between the families, not the market differences.
Re:
by Dawnrider on Wed 9th Jul 2003 21:34 UTC

ILoveWindows: Without using SSE or Altivec, you are really going against the abilities of modern processors, and the results you get are not meaningful. Most especially, if you look at the P4, the classic FP performance is very weak, while using SSE will literally double performance in many cases. Anandtech, Tom's Hardware, Ars Technica and other tech sites will show you that on intensive processes such as 3D rendering, for the first half year after the P4's release, without SSE2 recompiles of the software, relying on x87 floating point, it got creamed by the Athlon. Once software was recompiled, performance was better.

Apple's benchmark notes admit they disabled SSE on the P4/Xeon runs, which effectively cripples the floating point performance of that CPU, and they knew that. That is not to mention that due to the architectures of the P4 vs. the 970, they will perform differently depending on the detailed formation of the code, such as the sizes of matrices, fp precision required, formation of loops/conditions and a whole host of other factors.

And no, I'm not a 3D developer; I'm a university researcher in image processing (2D, 3D, stereo vision) and I deal with this stuff a lot. You can't just throw a piece of entirely un-optimised code at a CPU and expect the initial result to be true of the capabilities of the chip. In the case of the P4, this is incredibly pronounced, due to the design decisions that Intel took. I disagree in many places with their implementation, and prefer Athlons myself, precisely because they are better at x87 rather than requiring SSE2 optimisation, but there it is.

Stingerman: Firstly, "Dawnrider your wrong" should be "Dawnrider you're wrong". Just FYI. I won't argue that offloading processing onto the GPU is a bad thing, because it isn't, but it is only worthwhile if you intend to run some serious vector-based tasks on such a system. Current iterations of rendering software, for example, are using GPUs in exactly that way. The trouble with Quartz Extreme, which I was trying to highlight, is simply that rendering basic 2D forms in a windowing environment is a minor task for a modern processor. In fact, OpenGL effectively offloads a degree of that to the GPU to start with, which is why the graphics card needs memory for more than just a look-up table, as opposed to simply streaming a framebuffer out to the screen.

My point was that it is wrong-headed to the point of being moronic to take such a ripe source of processing power and then saturate it with spurious tasks such as rendering shrinking windows. Longhorn is daft in trying to do the same. Moreover, OSX users suffer, because keeping a framebuffer-sized chunk of memory for each window rapidly chews through physical memory once you start using more than a few applications. You add massive overhead to the system and quickly reduce responsiveness if the thing has to start paging to disk to support your graphical excess. You might have no problem watching your windows shrink, spin, etc. when you have just one or two, but if I have >20 windows open at a time (and I do), it would absolutely destroy the performance of my system. In short: get rid of the Quartz effects, save the memory, and use those GPU cycles for more useful work.