“At the Research@Intel day last week, Intel had a huge array of technologies and active research initiatives on display for press and analysts. As I toured the company’s Santa Clara offices, I was able to piece together a few major themes and directions by stepping back and looking at the places where Intel is currently focusing its forward-looking research. In my next few articles, starting with this one, I’ll take an in-depth look at each of these themes and at what it tells us about where computing is headed in the next decade.”
I wonder if, instead of raw power and raw speed, people will start growing more interested in less power and less heat in the future.
It would certainly make for interesting mods, at least…
“Haha! I just got Halo 4 running on two AA batteries!”
“But you’re still using batteries. My iMac 2012 is now running on a pinwheel I set up outside.”
“I got you both beat. I’m running Ubuntu Zesty Zorilla off of a potato.”
No my friend, it will probably be more like this:
“In the future, Earth is dominated by sentient machines. Humans are grown in pods and are connected by cybernetic implants to an artificial reality called the Matrix, which keeps their minds under control while the machines use the bioelectricity and thermal energy of their bodies as an energy source.” (Source: Wikipedia)
Ok, just kidding
There’s already plenty of interest in lower-power computing, both because of green concerns and for quieter computing. Lower power consumption means less heat, which means less need for noisy cooling fans, which is essential for a living-room media centre PC or a workstation used for AV tasks.
It may not be mainstream yet, but there are plenty of products aimed at this market. For example, low power CPUs like the Via C7, desktop motherboards designed to use mobile CPUs, or very high efficiency PSUs.
It doesn’t surprise me that both Intel and AMD would be pursuing reduced power consumption along with faster speed. Some people are clearly willing to pay a premium for more power efficient products. It’s definitely becoming a selling point when comparing different products.
The future looks pretty bright for computer users who care about more than benchmark results and gaming frame rates.
I’d quite like to see a thermal movie of a tile of processors switching jobs around between idle and busy states. I can imagine the simulation predicting that one idle core will cool a few degrees below its busy neighbor, but I don’t quite buy it in the real device.
I always imagined that the individual cores of a chip are so close together that there can’t be much of a thermal difference across the array and that the overall temperature just blurs out. Still, it makes some sense to move jobs around so that most cores see about the same level of activity over time, a bit like wear leveling, even if randomly migrating jobs itself uses energy. Alternatively, each core could run at a clock rate that keeps its own temperature at the median.
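To make the wear-leveling analogy concrete, here’s a rough Python sketch of the kind of policy I mean (all the constants, names and thresholds are made up for illustration, not anything Intel has described): periodically move the job on the hottest core to the coolest idle core.

```python
# Toy model of cores heating while busy and cooling while idle; all constants
# are invented for illustration, and a real die couples heat between
# neighbouring cores far more strongly than this.
AMBIENT = 40.0        # degrees C
HEAT_PER_TICK = 0.8   # temperature rise per tick while running a job
COOL_FACTOR = 0.05    # fraction of the gap to ambient lost per tick

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.temp = AMBIENT
        self.job = None              # None means idle

    def tick(self):
        if self.job is not None:
            self.temp += HEAT_PER_TICK
        self.temp -= (self.temp - AMBIENT) * COOL_FACTOR

def rebalance(cores):
    """Move the job on the hottest busy core to the coolest idle core."""
    busy = [c for c in cores if c.job is not None]
    idle = [c for c in cores if c.job is None]
    if not busy or not idle:
        return
    hottest = max(busy, key=lambda c: c.temp)
    coolest = min(idle, key=lambda c: c.temp)
    if hottest.temp - coolest.temp > 5.0:   # only migrate if the spread is worth it
        coolest.job, hottest.job = hottest.job, None

cores = [Core(i) for i in range(8)]
for i in range(4):                          # start with half the cores busy
    cores[i].job = "job-%d" % i

for tick in range(1000):
    for c in cores:
        c.tick()
    if tick % 50 == 0:                      # migration costs energy, so do it sparingly
        rebalance(cores)

print([round(c.temp, 1) for c in cores])
```

Even in this toy model the trade-off shows up: rebalance too often and the migrations themselves eat the savings; too rarely and one corner of the die stays hot.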
The 15Gbps serial links look good too, very adaptive to conditions and out of the view of software.
Apart from that, it seems we are unlikely to ever get back to a processor that genuinely runs close to 1 W, or uncooled, with OSes like Vista raising the demand for performance.
I’m waiting for the day I can just rely on a solar cell to get enough juice to run my laptop, like those little calculators you can buy for $1 nowadays.
btw, can you make that a quantum cpu please?
What, and how many universes will you have to destroy just to play Halo?
I think getting more power efficiency for a given level of computation will take a combination of factors, much like the combination of factors that got us to where we are today: processors that are largely wasteful, power-wise, despite improvements in manufacturing processes.
The Pentium 4 NetBurst architecture was a counterexample to what is practical with the processes we currently have, in terms of performance scaling within a given power/thermal envelope, while the Transmeta processors are a fairly good example of how to make things work with the same or similar processes; both have their issues, though.
With NetBurst, the Pentium 4 used very long pipelines that could stall, costing a huge penalty in powered-up circuitry, but it’s fairly easy to write code that runs fast on such a beast. Compare that with Transmeta’s VLIW technique, which is also roughly how the Itanium operates: it requires some major twists and leaps from compiler writers to wring all the performance out of the many simple instructions that can be executed in a given clock cycle, all packed into a VLIW-from-hell. In both cases there is a common thread: the Pentium 4 can be thought of as something that evolved from CISC and mutated into a largely RISC processor internally, while the VLIW Transmeta and Itanium, oddly enough, can also be thought of as starting from CISC (lots of simple instructions smashed together into one long instruction word, making it effectively just as complex) with RISC at the core for the simple operations each instruction word is composed of.
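To illustrate why the compiler carries so much of the burden on a VLIW design, here is a toy Python sketch. The three-slot bundle format and the scheduling rule are entirely made up (nothing like Transmeta’s or Itanium’s real encodings); the point is only that an instruction depending on an earlier result in the same bundle has to wait for the next bundle, leaving issue slots empty.

```python
# Toy instruction: (dest, op, src1, src2). Three issue slots per bundle.
# Bundle format and scheduling rule are invented purely for illustration.
SLOTS_PER_BUNDLE = 3

def pack_bundles(instructions):
    bundles = []
    current, written = [], set()
    for dest, op, a, b in instructions:
        depends = a in written or b in written
        if depends or len(current) == SLOTS_PER_BUNDLE:
            bundles.append(current)          # close the bundle and start a new one
            current, written = [], set()
        current.append((dest, op, a, b))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

program = [
    ("r1", "load", "mem0", "-"),
    ("r2", "load", "mem1", "-"),
    ("r3", "add",  "r1",   "r2"),   # depends on the two loads above
    ("r4", "load", "mem2", "-"),
    ("r5", "mul",  "r3",   "r4"),   # depends on r3 and r4
]

for i, bundle in enumerate(pack_bundles(program)):
    print("bundle %d:" % i, ["%s = %s %s,%s" % (d, op, a, b) for d, op, a, b in bundle])
```

Real compilers try much harder to find independent work to fill those empty slots; when they cannot, the slots are simply wasted, which is exactly the “wring all the performance out” problem described above.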
One bit of technology with real technical promise is what the University of Texas at Austin is doing with their TRIPS processor. It rethinks how code is written a bit (or 32): it focuses on how the data is modified and on its dependencies, and encodes that into the hardware, and thus the software, more directly than VLIW or RISC processors do. You can think of it as a processor that computes conditional dependencies over a block of data and operations at a time, using dataflow. The processor doesn’t need complex look-ahead logic that eats power, and if the data and operations don’t lead to dependent operations being needed, the CPU doesn’t throw away computations, because they are never done in the first place. However, I’m not certain the TRIPS processor as-is will make it into the commercial market and take over the x86-compatible scene, for perhaps the same reason the Transmeta processors didn’t take the world by storm: the inertia of x86-based software is very hard to overcome, even if hardware that emulates an x86 can do so very fast. This is both software-ecosystem inertia and hardware-ecosystem inertia: having only a processor, without all the support logic being cheaply available through economies of scale, rather limits the probability that people will take the leap, unless the benefits hugely outweigh the risks and costs involved.
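As a very loose illustration of the dataflow idea (this is my own toy sketch in Python, not the actual TRIPS/EDGE block format or firing rules), an operation fires only once all of its inputs have arrived, so nothing gets speculatively computed and then discarded:

```python
# Toy dataflow evaluator: a node fires only when all of its named inputs exist.
# Node format and firing rule are invented and much simpler than a real
# TRIPS/EDGE instruction block.
def run_dataflow(nodes, inputs):
    """Fire each node once all of its inputs have arrived.

    A node may return None to mean "produce nothing", in which case anything
    downstream of it simply never fires, mimicking the way a predicated
    dataflow block skips work instead of discarding it.
    """
    values = dict(inputs)
    fired = set()
    progress = True
    while progress:
        progress = False
        for name, (fn, deps) in nodes.items():
            if name not in fired and all(d in values for d in deps):
                result = fn(*(values[d] for d in deps))
                fired.add(name)
                if result is not None:
                    values[name] = result
                progress = True
    return values

# Tiny predicated block: route c onward only when a > 0, then multiply by (a + b).
nodes = {
    "sum":      (lambda a, b: a + b,                ["a", "b"]),
    "c_if_pos": (lambda a, c: c if a > 0 else None, ["a", "c"]),
    "prod":     (lambda s, c: s * c,                ["sum", "c_if_pos"]),
}

print(run_dataflow(nodes, {"a": 2, "b": 3, "c": 4}))   # prod fires
print(run_dataflow(nodes, {"a": -1, "b": 3, "c": 4}))  # prod never fires
```

The point made above falls out naturally: because nothing fires until its operands arrive, there is no branch-prediction-style guesswork burning power, and no mispredicted work to throw away.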
Where do I think the TRIPS processor will be used? I think that IBM will be the most likely one to integrate some of the technology developed as part of the project into future POWER platform chips, since they’re a large partner involved in the project, and like many technologies (such as virtual memory management units, floating point, etc.) it will first start out on expensive Big Iron, and may eventually work its way down to common consumer devices.
I have been running and compiling Slackware for the low-power, high-speed MIPS-compatible Loongson processor over the last two months. It consumes 4W at 700 MHz. Clock for clock it’s competitive with Intel and AMD processors with a dramatic decrease in power consumption.
Several companies are already creating and selling systems based on this processor such as Lemote with their Fu Long and Sinomaniac with their laptops.
The next version (Loongson 2F), due in a few months, will be even faster, running at 1 to 1.2 GHz with 2 W power consumption. Loongson 3, due in 2008, will have 16 cores, making it much more attractive than any multicore Intel/AMD processor except where absolute single-threaded performance counts.
AMCC will launch a dual-core 2 GHz PowerPC processor at the end of the year that will only consume 2.5 W per core. It’s based on Intrinsity’s Fast14 technology, which they had already implemented for MIPS.
I honestly don’t know what Intel and AMD are doing at the moment pushing their power-hungry processors when they could cut down on power consumption by 90% if they really wanted to.
Just goes to show what a dead end the x86 architecture is: it has only survived due to accidents of history and ubiquity. There are vastly superior processor technologies available now, yet chances are they will be adopted slowly simply because of x86 market dominance. The only way I see these chips taking off is if Microsoft ports Windows to them (technically it shouldn’t be too hard, as Windows CE already runs on the MIPS architecture).
Imagine silent, passively cooled PCs that take up a fraction of the space of current PCs and use a fraction of the power… One can only hope someone with the wherewithal steps up to the plate and starts marketing the bejesus out of these CPUs.
No, it doesn’t. x86 is nothing but an instruction set; the micro-architecture behind it has changed many, many times. I can see no major power-consumption advantage of the MIPS instruction set over x86 (indeed, x86 code tends to be more compact, requiring fewer bus cycles for instruction fetch; this is traded off against a smaller number of internal registers, which means more external data accesses are required… chickens and eggs).
Just as a point of comparison, the Pentium M 733 ULV requires at most 5 watts to run at 1.1 GHz. This chip is fast, very fast.
While the other poster’s reaction perhaps lacked nuance, it is in fact true that x86 is architecturally a dead end. It’s only with x86_64 that it got a new lease on life, fixing many of the shortcomings of the crippled architecture, but not all.
I agree that MIPS binaries tend to be larger than those for x86, mostly by 30% to 50% from what I have seen here, so more memory (bandwidth) and larger caches are very welcome and maybe even necessary to maintain a high enough level of performance.
The difference is that MIPS opcodes translate almost 1:1 to internal micro-ops and don’t need a lot of decoding, which means simpler circuits. x86, on the other hand, needs more complicated decoders for the various addressing modes and instruction lengths, so that takes up some precious die space, though not as much as it did when the Pentium Pro was launched.
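A crude way to see the decode difference: a fixed 32-bit format can be sliced up with a couple of shifts and masks, whereas a variable-length format first has to work out how long each instruction even is before it can find the next one. A toy Python sketch, with a MIPS-like fixed layout and a completely invented variable-length layout (not the real x86 encoding):

```python
# Toy decoders contrasting fixed- vs variable-length instruction formats.
# The variable-length layout below is invented for illustration only.

def decode_fixed32(word):
    """Every instruction is exactly 32 bits: a few shifts and masks suffice."""
    opcode = (word >> 26) & 0x3F
    rs     = (word >> 21) & 0x1F
    rt     = (word >> 16) & 0x1F
    imm    =  word        & 0xFFFF
    return opcode, rs, rt, imm

def decode_variable(byte_stream, pos):
    """Length depends on the leading byte, so decode is inherently serial:
    instruction N+1 cannot be located until instruction N's length is known."""
    opcode = byte_stream[pos]
    if opcode < 0x80:                       # short form: opcode + one register byte
        return ("short", opcode, byte_stream[pos + 1]), pos + 2
    else:                                   # long form: opcode + modrm + 4-byte immediate
        modrm = byte_stream[pos + 1]
        imm = int.from_bytes(byte_stream[pos + 2:pos + 6], "little")
        return ("long", opcode, modrm, imm), pos + 6

print(decode_fixed32(0x8C220004))           # one fixed-width word, one step

stream = bytes([0x10, 0x03, 0x90, 0xC1, 0x04, 0x00, 0x00, 0x00])
pos = 0
while pos < len(stream):
    insn, pos = decode_variable(stream, pos)
    print(insn)
```

Resolving each instruction’s length before the next one can even be found is roughly where the extra x86 decode hardware, and its die area and power, goes; on the flip side, the short form above takes only 2 bytes instead of a fixed 4, which is the code-density advantage mentioned earlier.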
While Intel and AMD do have low-power versions of their processors, these are not readily available on the market at the low prices these alternative processors sell for by default.
And I don’t think they will be able to concentrate as much on low power across the whole line since they have to ensure that resource hogs such as Windows Vista and Mac OS X keep running at acceptable performance on their processors.
In this respect ARM processors are doing even better at 0.5 W, a 90% reduction in power consumption even compared to your example, and with the Thumb instruction set a code density as good as x86’s is attainable. These are the numbers that all parties should really be aiming for.
As for architecturally sound, you only have to look at the Alpha architecture, which trumps any other architecture in terms of performance potential and was designed explicitly to contain all the advantages of the various RISC processors and none of their drawbacks. It’s the only architecture Linus will admit is better than x86, and it’s a shame it was killed off prematurely.
One can only hope that the overhauled micro-architecture of the Poulson and/or Kittson IA64 processors somehow resembles or builds on Alpha’s. That would at least ensure that not all of Alpha’s progress is lost to those who don’t settle for “good enough”.
These are exciting times, because the surge of FLOSS operating systems and applications has spurred competition across architectures to consumers’ benefit. It’s no longer enough for Intel and AMD to keep watching only each other and Via, because there is serious competition coming from companies they hadn’t reckoned with.