There has been a great deal of controversy over the benchmarks Apple published when it announced the new PPC 970-based G5.
The figures Apple gave for the Dell PC were a great deal lower than those presented on the SPEC website. Many have criticised Apple for this, but all they did was use a different compiler (GCC), and this is what gave the lower x86 results. GCC may not be the best x86 compiler, and it contains a scheduler for neither the P4 nor the PPC 970, but it is considerably more mature on x86 than on PowerPC. In fact, only very recently has its PowerPC code generation begun to approach the quality of its x86 code generation; GCC 3.2, for instance, produced incorrect code for some PowerPC applications.
However, this does lead to the question of why the SPEC scores produced by GCC are so different from those produced by Intel's ICC compiler, which Intel uses when submitting SPEC results. Is ICC really that much better than GCC? In a recent test of x86 compilers most results turned out glaringly similar, but when SSE2 is activated ICC completely floors the competition. ICC picks up the code and auto-vectorises it for the x86 SSE2 unit; the other compilers do not have this feature, so they don't get its benefit. I think it's fairly safe to assume this is, at least in part, the reason for the difference between the SPEC scores produced by Apple and Intel.
This was a set of artificial benchmarks, but does this translate into real-life speed improvements? According to this comment by an ICC user, the auto-vectorisation for the most part doesn't make any difference, as most code cannot be auto-vectorised.
In the description of the SPEC CPU2000 benchmarks the following is stated:
"These benchmarks measure the performance of the processor, memory and compiler on the tested system."
SPEC marks are generally used to compare the performance of CPUs, yet the above states explicitly that this is not all they are designed for: SPEC marks also test the compiler. There are no doubt real-life areas where the auto-vectorisation works, but if these are only a small minority of applications, benchmarks that are affected by it become rather meaningless, since they do not reliably show how most applications are likely to perform.
Auto-vectorisation also works the other way: the PowerPC's AltiVec unit is very powerful, and benchmarks which are vectorised for it can show a G4 outperforming a P4 by a factor of up to 3.5.
By using GCC, Apple removed the compiler from the factors affecting system speed and gave a more direct CPU-to-CPU comparison. This is a better comparison if you just want to compare CPUs, and it prevents the CPU vendor from getting inflated results due to the compiler.
x86 CPUs may use all the tricks in the book to improve performance but for the reasons I explained above they remain inefficient and are not as fast as you may think or as benchmarks appear to indicate. I'm not the only one to hold such an opinion:
"Intel's chips perform disproportionately well on SPEC's tests because Intel has optimised its compiler for such tests"* - Peter Glaskowsky, editor-in-chief of Microprocessor Report.
I note that the term "chips" is used; I wonder, does the same apply to the Itanium? That architecture is also highly sensitive to the compiler, and this author has read (on more than one occasion) from Itanium users that its performance is not what the benchmarks suggest.
If SPEC marks are to be a useful measure of CPU performance, they should all use the same compiler. An open source compiler is ideal for this, as any optimisations added for one CPU will be in the source code and can thus be added for the other CPUs as well, keeping things rather more balanced.
People accuse Apple of fudging their benchmarks, but everybody in the industry does it - and SPEC marks are certainly not immune. It's called marketing.
Personally I liked the following comment from Slashdot which pretty much sums the situation up:
"The only benchmarks that matter is my impression of the system while using the apps I use. Everything else is opinion." - FooGoo
x86 has the advantage of a massive marketplace and the domination of Microsoft. There is plenty of low-cost hardware and tons of software to run on it; the same cannot be said for any other CPU architecture. RISC may be technically better, but it is held in a niche by market forces, which prefer the lower cost and plentiful software of x86. Market forces do not work on technical grounds and rarely choose the best solution.
Could that be about to change? There are changes afoot and these could have an unpredictable effect on the market:
1) Corporate adoption of Linux
Microsoft is now facing competition from Linux, and unlike Windows, Linux is not locked into x86. Linux runs across many different architectures; if you need more power or low heat and noise, you can run Linux on systems which have those features. If you are adopting Linux, you are no longer locked into x86.
2) Market saturation
The computer age as we know it is at an end. The massive growth of the computer market is ending as the market reaches saturation. Companies wishing to sell more computers will need to find reasons for people to upgrade; unfortunately, these reasons are beginning to run out.
3) No more need for speed
Computers are now so fast it's getting difficult to tell the difference between CPUs even if their clock speeds are a GHz apart. What's the point of upgrading your computer if you're not going to notice any difference? How many people really need a computer that's even over 1GHz? If your computer feels slow at that speed, it's because the OS has not been optimised for responsiveness, not because of the CPU - just ask anyone using BeOS or MorphOS.
There have of course always been people who can use as much power as they can get their hands on but their numbers are small and getting smaller. Notably Apple's software division has invested in exactly these sorts of applications.
4) Heat problems
What is going to be a hurdle for x86 systems is heat. x86 CPUs already get hot and require considerable cooling but this is getting worse and eventually it will hit a wall. A report by the publishers of Microprocessor Report indicated that Intel is expected to start hitting the heat wall in 2004.
x86 CPUs generate a great deal of heat because they are pushed to give maximum performance, and because of their inefficient instruction set this takes a lot of energy. In order to compete with one another, AMD and Intel will need to keep upping their clock rates and running their chips at the limit; their chips are going to get hotter and hotter.
You may not think heat is important but once you put a number of computers together heat becomes a real problem as does the cost of electricity. The x86's cost advantage becomes irrelevant when the cooling system costs many times the cost of the computers.
RISC CPUs like the 970 are at a distinct advantage here, as they give competitive performance at significantly lower power consumption; they don't need to be pushed to their limit to perform. Once they get a die shrink into the next process generation, power consumption for the existing level of performance will go down. This strategy looks set to continue in the next-generation POWER5.
The POWER5 (of which there will be a "consumer version") will include Simultaneous Multi-Threading, which effectively doubles the performance of the processor, unlike Intel's Hyper-Threading, which only boosted performance by 20% (although this looks set to improve). IBM is also adding hardware acceleration of common functions, such as communications and virtual memory, onto the CPU. Despite these additions, the number of transistors is not expected to grow by any significant measure, so both manufacturing cost and heat dissipation will go down.