Intel, which next week is expected to announce plans to move to a new processor architecture, is switching to a new yardstick to measure processor performance: performance per watt. Intel’s announcement will publicly signal an internal shift that’s already taken place. After years of promoting clock speed, it’s now emphasizing overall performance and power-efficiency.
First of all, how do you measure “Performance”? Second of all, if you increase wattage, likely you will increase performance, so I could have a wide range of performances with the same p/w ratio.
How will Intel give us a measure of performance regardless of wattage?
Just a few questions I have.
First of all, how do you measure “Performance”?
The SPEC benchmarks would be a start.
Second of all, if you increase wattage, likely you will increase performance, so I could have a wide range of performances with the same p/w ratio.
You’re right, the p/w ratio on its own is not much use, at least you’d want to know the maximum performance too.
Furthermore, performance and wattage aren’t linearly related anyway. Cue arguments about what point the ratio should be measured at.
How will intel give us a measure of performance regardless of wattage?
Easy. Take their performance-per-watt number. Multiply by number of watts that the processor draws. Bingo, there you have your performance rating.
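On the non-linearity point above, here is a toy sketch (a made-up scaling model, not Intel data): if dynamic power grows roughly with the cube of clock speed, performance grows only with the cube root of the wattage, so the perf/watt ratio you get depends entirely on the operating point you measure it at.

# Toy model only: assume perf ~ watts ** (1/3), i.e. power rises much
# faster than performance (roughly what P ~ V^2 * f with f ~ V implies).
def perf(watts):
    return watts ** (1 / 3)

for w in (10, 30, 60, 100):
    p = perf(w)
    print(f"{w:>3} W -> perf {p:4.2f}, perf/watt {p / w:.3f}")

# perf/watt falls from ~0.215 at 10 W to ~0.046 at 100 W in this toy model,
# so the single ratio means little without knowing the operating point.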
Performance / Watts * Watts = Performance ??
Isn’t that just what we started with?
Performance / Watts * Watts = Performance ??
Isn’t that just what we started with?
Correct!
Original poster asked for the Performance metric, so I supplied it. It’s not complicated.
Wattage != performance. Period.
Wattage != performance. Period.
This is a good thing, since if wattage == performance, performance-per-watt would always be 1. Which isn’t informative.
“Performance divided by Watt yields what?” was posted by me. I put my name and password in expecting to post as myself, but it was posted anonymously instead.
Just a helpful suggestion to the maintainers of OSNews.
Intel is reacting to a reality that hasn’t generally penetrated even the technical community very well: the era of simple CMOS scaling is over, and the reason is that atoms don’t scale. Up until 2002 or so, you could pretty much guarantee that after applying simple scaling rules, the next technology generation was not only smaller and faster, but also lower-power, than the generation before. That’s over. Further scaling now requires redesigns, and leakage power is becoming dominant as electrons tunnel across gates that are only five atoms thick. Worse, leakage increases exponentially with temperature, making thermal runaway a very real risk.
Generally, chip designers are reacting to the change by adjusting their designs to exploit the increased area, putting more functions on board without increasing the clock speed. That’s why you’re seeing multicore chips. Core counts are going to keep increasing, so if you’re a young geek it’s time to learn about parallel programming. Your career is going to depend on it.
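For the young geeks wondering what that looks like in practice, here is a minimal sketch (Python, with a hypothetical prime-counting workload) of spreading a CPU-bound job across all the cores in a machine instead of waiting for a faster clock:

# Split a CPU-bound job across all available cores with a process pool.
# The workload (counting primes by trial division) is just a stand-in;
# the point is that throughput now comes from extra cores, not extra MHz.
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(bounds):
    lo, hi = bounds
    return sum(
        1
        for n in range(max(lo, 2), hi)
        if all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

if __name__ == "__main__":
    limit = 200_000
    cores = os.cpu_count() or 1
    step = limit // cores
    chunks = [(i * step, limit if i == cores - 1 else (i + 1) * step)
              for i in range(cores)]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes below {limit}, counted on {cores} cores")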
I just wish they would produce something that performs well and isn’t 100% driven by marketing hype. Ever since the days of the P3 that’s just been Intel’s mode of operation.
The reviews I’ve seen so far on the new Pentium M processors say that they freakin’ bake. At full load they go well over 70C. I still haven’t seen any Athlon 64 go over 55C.
“The reviews I’ve seen so far on the new Pentium M processors say that they freakin’ bake. At full load they go well over 70C. I still haven’t seen any Athlon 64 go over 55C.”
Junction T of P-M is 100°C.
But junction temperature is not related to TDP (the P-M’s TDP is much lower than the Athlon 64’s, 21-27W against 60-110W, Google it), so it is not a parameter that tells you how hard a CPU will be to cool. Period.
But junction temperature is not related to TDP (the P-M’s TDP is much lower than the Athlon 64’s, 21-27W against 60-110W, Google it)
TDP is just as useless. The early Venice core chips (don’t know whether that’s gone up) maxed out below 30W.
“"But junction temperature is not related to TDP (the P-M’s TDP is much lower than the Athlon 64’s, 21-27W against 60-110W, Google it)"
TDP is just as useless.”
???
Unless physics has been entirely rewritten, TDP is THE measure of how difficult a chip is to cool, since it is the Thermal Design Power: divide it by the area that dissipates that power and you get the power density (W/mm^2) you have to remove to keep the chip from cooking.
P-M machines have ridiculously small heatsinks compared with Athlon, Opteron and Netburst family chips; some of them (especially the LV and ULV lines) can even be cooled fanless (as I do with my P-M).
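To put the W/mm^2 point in rough numbers (the die areas and TDPs below are approximate, from-memory figures for 90 nm era parts, purely for illustration):

# Back-of-the-envelope power density: TDP over die area is what the cooler
# has to pull out. All figures are approximate and only for illustration.
chips = {
    "Pentium M (Dothan)":     (27, 84),    # (TDP in W, die area in mm^2)
    "Athlon 64 (Winchester)": (67, 84),
    "Pentium 4 (Prescott)":   (115, 112),
}

for name, (tdp, area) in chips.items():
    print(f"{name:24s} ~{tdp / area:.2f} W/mm^2")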
This holds very well for Via, who can make ultra-low wattage chips. Intel better hope that small chips like the C7 and Geode do not get a large amount of publicity or enter the low-end market.
http://www.via.com.tw/en/products/processors/c7-m/
Nice performance per watt ratio with this CPU. Finally, a VIA processor with a full-speed FPU. I’ve read in media reviews that, at equal clock speeds against Intel, the C7-M is about 10% less efficient but gives 20% better battery usage. With performance per watt like that I’d really be interested in seeing C7-M laptops here, just for the sake of battery life and weight. The CPU is going to be the size of a penny. Looking forward to a half-lb notebook.
The Pentium 4 can run at 3 GHz using 110 watts, which is not exactly desirable.
Then look at the aging PowerPC G3 at 700 MHz using between 5 and 20 watts.
The latest G5 delivers roughly as many MIPS as the Intel P4, or more, and achieves this at 30 to 40 watts.
Now, if Intel can keep their CPUs below the psychological eye-watering barrier of 100 watts regardless of performance, then I’ll buy their chip.
“The latest G5 delivers roughly as many MIPS as the Intel P4, or more, and achieves this at 30 to 40 watts.”
You can get the same computing power from a P-M at 4-7 to 21-27 W, less than half of the G5… as Steve Jobs also observed…
In light of this, Apple’s decision to switch is making more sense than ever. The future of Intel processors appears to be just the kind of fast-but-efficient processor that Apple prefers (and will need if they want to avoid significant redesigns).
You mean AMD may finally be in for some competition in the enthusiast market?
Maybe they can use a system where they can relate the performance of their chips with a theoretical “clock speed” to easily let customers know how the performance of various chips compare. Maybe we can call this scale something like Performance Rating… ok I’ll stop 😉
At this point I do not understand why I need a more powerful cpu anyway. Certainly I can see that there are applications where faster CPUs are required, but not on the desktop. Somewhere about 1.5-2 GHz I draw the line and say, Spell Check is fast enough.
Don’t give me MIPS/watt. Give me a passively cooled chip that runs about the same as an Athlon 1800+ (give or take a little).
When Apple showed off that they were gonna use Intel chips, they told us that they would do it because of performance/watt.
So, the question is, does Apple already know what Intel will show in a few days? Because the CPUs that Intel uses today DO NOT give you any great performance/watt. I still don’t get why Apple didn’t choose AMD as supplier.
So, the question is, does Apple already know what Intel will show in a few days?
Yes, most of it anyway, I’d suspect.
Because the CPUs that Intel uses today DO NOT give you any great performance/watt.
Where have you been? Those new processors won’t be based on the power-wasting Netburst (aka Pentium 4) design but on the good old P6 that had its latest incarnation in the very efficient Pentium M.
I still don’t get why Apple didn’t choose AMD as supplier.
Perhaps Intel offered higher discounts or Apple had more trust in Intel’s ability to meet demand.
In any case, Apple can always switch later, or at least threaten Intel with doing so.
“Because the CPUs that Intel uses today DO NOT give you any great performance/watt.”
P-M anyone?
They have been around for years.
What prevents an “innovative” company from customizing a few motherboard lines to put it in whatever formats they prefer: ATX, baby ATX, notebook, subnotebook, teddy bear (like the ITX mods)…
And Intel’s next processor lines show an even better p/w ratio and are P-M derived; it’s not a secret (they have been talking about it, and showing samples, for at least three years).
Welcome to 2005.
It’s a very common practice in industry to share info about upcoming products with existing and potential BIG customers. Non-disclosure agreements are always involved, so you can’t go post the info on the web or talk about it at conferences. My guess is that Apple not only knows everything that will be announced, but probably had a hand in some of the design decisions.
It may be new to us, but not to Apple!
Apple probably went with Intel for several reasons:
– Intel can supply the entire chipset along with the processor, AMD only has the CPU
– The Pentium M is a great CPU for laptops and the Centrino platform has everything else they need
– Far more technical and marketing support than anything AMD could provide
– Apple won’t have to pay as much in the way of R&D as Intel will have done the CPU and chipset designs
– Intel is a very large company and definitely won’t be disappearing anytime soon
– And as others have pointed out, they probably got a great deal
I agree, and I would add in favour of Intel:
– high production capacity;
– thinking further ahead, Intel has XScale, Itanium and other more specialized processors and chips; in a word, it has more experience and resources than AMD to come up with an architecture that makes sense performance-, power- and cost-wise, both for CPU and chipset;
– AMD’s R&D is influenced by IBM, and Apple surely will not want to stay too close to IBM for a while…
– now that Apple needs it, Intel has a ready and mature low-power x86 CPU and a long-announced roadmap based on it; AMD has only recently started working on a similarly efficient CPU.
It’s all about marketing.
I’ll stick with a CPU design that performs, thank you.
I absolutely applaud Intel’s decision to focus on performance per watt. Computers have become sufficiently fast for pretty much every need, and processor heat is becoming an increasing problem, so this is great. I just hope that car manufacturers do the same thing… instead of focusing on how powerful and luxurious a car is, try to make it actually save gas, save repair bills and save money in general.
Performance = Number of operations per second
Where “operations” is some algorithm that involves many kinds of basic tasks, like multiplying numbers, reading and writing from memory, etc. This should be an algorithm easily writable in assembler and translatable to many different processors so they can be benchmarked.
Then, Performance/Watt = Operations/Joule
This kind of rating would be useful to phone and PDA designers as an estimate of the quality that can be expected for a certain battery capacity.
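A minimal sketch of how such an operations-per-joule number could be produced (the kernel is a placeholder mix of arithmetic and memory traffic, and the average power figure would have to come from an external power meter, not from software):

# Placeholder ops/joule measurement. avg_power_watts is a made-up example;
# a real figure needs an external power meter sampling during the run.
import time

def mixed_workload(iterations):
    data = list(range(1024))
    acc = 0
    for i in range(iterations):          # arithmetic plus memory read/write
        acc += data[i % 1024] * 3 + 7
        data[i % 1024] = acc & 0xFFFF
    return acc

iterations = 2_000_000
start = time.perf_counter()
mixed_workload(iterations)
elapsed = time.perf_counter() - start

avg_power_watts = 25.0                   # example only, must be measured
ops_per_second = iterations / elapsed
ops_per_joule = ops_per_second / avg_power_watts   # since 1 W = 1 J/s

print(f"{ops_per_second:,.0f} ops/s, {ops_per_joule:,.0f} ops/joule at {avg_power_watts} W")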
“operations” is not a very good metric either, even with the mix that you suggest.
You could have vastly different “operations” workload for servers, desktops, etc. and would you want several different op/joule numbers to keep track of in order to find out which cpu is best for the task that you want it to perform?
I’m pretty certain that Intel don’t want to have that… And it’s not a good idea to use a weighted average of all the workloads either since that still doesn’t tell you what the performance is for the task you want it to perform.
It will be interesting to see exactly what Intel reveals next week.
Sure looks that way:
http://www.sun.com/processors/throughput/
32-thread platform @57watts – with rocking specjbb.
cool.
If you can’t beat’m you change the rules…
“If you can’t beat’m you change the rules…”
Just what I wanted to say. For a long time now Intel CPUs have really been some real heaters (well, Pentium Ms are an exception, but you hardly see them in desktop machines). On the high-performance side AMD long ago proved that very fast CPUs can be built with lower power consumption than Intel’s, and on the relatively low-performance side VIA and Transmeta have also proved that a lot can be achieved.
I personally don’t really mind if my work machine feeds on a gazillion watts, since 1) I don’t pay the bill, and 2) it provides the computing power I need. But for home use I’d very much prefer a slower CPU without (!) a cooling fan, which VIA has been able to provide for years now.
So this (yet another) change in performance rating at Intel is nothing but a new way to produce some numbers that describe their CPUs in a way that conceals the high wattage from the six-pack-Joe customers. Well, next year they may come out with some really cool new CPUs, but it’s not next year now, is it.
I say, until Intel comes out with a multicore CPU as well (or better) architected, as high-performance and as low-wattage as the AMD X2 series, I’m not buying from them, thankyouverymuch.
Power inefficient CPUs are the main reason that is holding me back from upgrading my home box! Those elaborate cooling rigs you see nowadays are so… backward. It would be nice to have a reasonably powerful machine (no need to break speed records here, just a good compromise) which is highly power efficient, so a simple CPU fan or even a high-volume PSU-mounted fan could do the job with no fuss. Or “buzz” rather
This is bang for your watt – performance, be it int or FPU – it’s about delivering the most grunt for the least wattage and energy consumed – it’s all about efficiency.
About the only downside I see is this: don’t expect massive performance leaps – performance is getting to the point of diminishing returns – for every leap, the actual performance increase is declining.
But with that being said, SUN has realised one thing – there isn’t an über CPU that can do it all; whilst workstations want a balance between integer and FPU performance, servers on the other hand simply want grunty throughput – suck in as much data as possible, crunch it, then spit it back out in the quickest possible way.
About the only downside I see is this: don’t expect massive performance leaps – performance is getting to the point of diminishing returns – for every leap, the actual performance increase is declining.
Multicores do deliver massive performance leaps, as soon as applications fully utilise them anyway. While clock rates do seem to have hit a ceiling, there’s still plenty of room for improvement by adding more cores and bigger faster caches.
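The “fully utilise” part is the catch, though. A quick Amdahl’s-law sketch (the serial fractions are arbitrary examples) shows how the part of a program that can’t be parallelised caps what extra cores can deliver:

# Amdahl's law: best speedup on n cores = 1 / (s + (1 - s) / n), where s is
# the serial fraction of the program. Fractions below are arbitrary examples.
def amdahl_speedup(s, n):
    return 1 / (s + (1 - s) / n)

for s in (0.05, 0.20, 0.50):
    row = ", ".join(f"{n} cores = {amdahl_speedup(s, n):.2f}x" for n in (2, 4, 8, 16))
    print(f"serial fraction {s:.0%}: {row}")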
Sorry, that’s what I meant by performance – I too get a little confused between clock speed and actual performance.
Regarding performance, it is also a two way street; developers and processor developers must work hand in hand to deliver to the end customer, the performance gains that are made via processor changes.
I think that you’ll find, however, with SUN working *VERY* closely with AMD and Intel working *VERY* closely with Apple, you should start to see some big leaps, and hopefully, with changes made in FreeBSD 6.0, these changes will find their way back to MacOS X.
As for me, I’m going to wait till the Intel-based PowerMacs are released before I upgrade – that gives me a window of at least 3 years to enjoy my current iMac G5 – it’ll be interesting to see.
There are also inklings that dual core is just the beginning; Intel is looking at quad core as well – interesting times ahead – it seems that SGI and SUN were right, the future isn’t faster processors but lots of cores/processors connected together with low-latency, massive-bandwidth connections.
With that being said, not all applications can or will benefit from it, but it’ll be interesting to see the landscape of CPU’s in 3-4 years time.
No marketing trick can change the fact that Intel processors still suck bollocks.
The newest AMD X2s are awesome and consume comparatively fewer resources.
When Intel acquired DEC/Digital they also got the StrongARM design, which they ‘evolved’ into the XScale. As many know, Apple’s most successful product is the iPod, which of course features an ARM CPU.
My guess is Intel has taken a good look at the high performance ARM design and is perhaps now able to implement some of its power efficiency into a new / forthcoming Pentium line.
The ARM’s power efficiency comes from a short pipeline and simple in-order design. While that’s great for embedded systems it doesn’t scale to performance levels people are used to on desktops.
I read some reviews about different via processors.
The main problem with them is that all were especially weak when doing something like ripping DVDs.
And that’s really bad since those tasks require quite a lot of CPU power. But they are perfect for building some sort of PVR – some of them even fanless!
Some people also use them for building car computers.
Yeah, the issue is the FPU not running at full speed. However, the new C7-M is supposed to address many of those issues. Granted, VIA even admits they are not going for a performance contest with the Pentium M. They built it for battery life: according to them a 2GHz C7-M is 10% “slower” than a 2GHz Pentium M but has 20% better battery life. Those are, after all, biased benchmarks, but I would love to see actual benchmarks from a 3rd party.
power requirements are becoming ridiculous. a 400W power supply is a good place to start for a top-of-the-line system these days. insane…
Now let’s hope that the makers of other components join in. Hard drives and GPU also draw a lot of power these days.
I remember when computers not only ran passively cooled but didn’t even require heat sinks on their chips. Ah, those were the days of silent computing!
“power requirements are becoming ridiculous. a 400W power supply is a good place to start for a top-of-the-line system these days. insane…”
I have a 350W PSU for my P4 3.6 GHz, 2GB RAM, 3 x Maxtor DiamondMax 10 SATA disks, ATI X850 XT PE, Audigy 2 ZS and some Conexant modem card.
You are at the limit of your 350 W power supply capabilities with that configuration!
I wouldn’t run a 24/7, fully loaded server or render/encoding workstation with that configuration (well, it might work for a while).
And think about it: if you were to build a dual-processor workstation (two physical processors), you’d need another 110W… and if you want more, tens of watts more for a second video card… or if you want to build a 4x or 8x CPU midrange server… 500, 600 and 660 W redundant power supplies are not so uncommon, and if you want even more you should consider racking.
Why accept this if you can have awesome performance with a P-M at <30W while keeping a small form factor (or a tighter rack package)? Intel has understood it and adapted the roadmap, finally!
“You are at the limit of your 350 W power supply capabilities with that configuration!”
At the limit maybe, but not over it, it’s a stable system and I use it all day. The PSU race is driven by home-built Athlon systems, not Intel.
“Why accept this if you can have awesome performance with a P-M at <30W while keeping a small form factor (or a tighter rack package)?”
It isn’t as good for scientific calculations or media encoding (the Athlon 64 isn’t either, but that doesn’t stop the AMD fanboys here apparently). Two quotes from the GamePC benchmarks:
“While Intel’s “Netburst” architecture of the Pentium 4 processors may not be the most efficient, it does scream in terms of content creation applications like Photoshop and Flash MX. In these tests, the Pentium 4 holds a clear performance advantage over the Pentium-M processor lineup. The high-end Pentium-M 2.0 GHz chip performs well enough to compete with AMD’s higher-end Athlon64 processors, but is simply outmatched by the Pentium 4.”
“Media encoding doesn’t look particularly great for the Pentium-M processor lineup either. The top Pentium-M at 2.0 GHz can, again, compete with AMD’s Athlon64 processors in this realm, but is easily bested by the Pentium 4 in video encoding.”
There is more to CPU performance than the UT2004 and Doom 3 benchmarks.
“The PSU race is driven by home-built Athlon systems, not Intel. ”
mmm… Intel-based servers or multi-CPU workstations from IBM and HP (not AMD-based, not home-built) have powerful PSUs, as I said before.
“It isn’t as good for scientific calculations or media encoding (the Athlon 64 isn’t either, but that doesn’t stop the AMD fanboys here apparently). ”
mmm… try an application with a huge lookup matrix and you will see the P-M and Athlon eat the P4 like candy.
And even for the applications the P4 runs efficiently, consider that you can run 5 Centrinos on the same power budget… do you think that, say, a 4x CPU P-M based machine at 2.23 GHz would be outperformed by a 3.8 GHz P4 if we had an application optimized not only for Netburst but also for parallel execution?
Moreover, I’m running my P-M (rock-stable) undervolted at 0.988V @ 1 GHz, and it even runs at 1.4 GHz (but I don’t like the risk), so I suppose next-gen P-M based machines can do quite a bit better if Intel wants to utterly destroy the P4 and Athlon in performance per watt!
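A rough sketch of why the undervolting pays off so well (dynamic power scales with V^2 * f; the 1.34 V / 1.6 GHz “stock” point below is an assumed example, not a quoted spec, and leakage isn’t modelled):

# Relative dynamic power: P ~ C * V^2 * f. Stock point is an assumed example.
def relative_power(v, f, v_ref, f_ref):
    return (v / v_ref) ** 2 * (f / f_ref)

stock_v, stock_ghz = 1.34, 1.6          # assumed stock operating point
uv_v, uv_ghz = 0.988, 1.0               # undervolted point from the post

ratio = relative_power(uv_v, uv_ghz, stock_v, stock_ghz)
print(f"Undervolted point draws ~{ratio:.0%} of stock dynamic power")
# ~34% in this toy calculation; real savings will differ.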
The Pentium4 is a failure, so is the Itanic. The Pentium M arrives 5 years too late. AMD won the processor war, get over it.
Seems only 10% more efficient than Winchester.
Intel just took the cue (stole the idea) from Steve Jobs’ ‘reason’ for switching to its processors and has started a new marketing strategy. It makes it harder to compare against its own previous processors and against the competing processors – which is fine with Intel. But enthusiasts – like AnandTech – will do the tests and we will know how the new architecture fares in the real world.
“The Pentium4 is a failure, so is the Itanic. The Pentium M arrives 5 years too late. AMD won the processor war, get over it.”
Yet Intel still outsells AMD by a factor of 9:1, has greater margins on their chips, and has far more engineers. You my friend are an idiot and a troll. I think the processor war has just begun.
I think that the major problem is the use of the CPU for many operations that should be handled by specialist hardware. In the past when CPUs were much slower PCs relied on proper hardware for sound, modems, video encoding etc.
Another approach would be to use 4 or more cool, slow CPUs (or cores). If BeOS had been successful we could be using this approach now.
I’d rather have a proper 100 fps MPEG hardware encoder than a 5 GHz 500-watt CPU anytime.
SuperH advertises that it offers one of the highest performance/watt ratios; it was used in the DC, and has a 16-bit RISC ISA.
http://www.superh.com
What is it… 3 billion 2 million calculations per second or something? As if that isn’t fast enough. Now, when your heat sink and fan is the size of an apple (the fruit) and your CPU is the size of a Cheez-It, then you should probably re-think the power thing a little more.
Transmeta chips have been doing this for years. Anyone taking bets on who’s gonna buy them?
-nX