Intel has just announced its new line of processors, called Ivy Bridge, which uses a new type of transistor to reach the 22nm production process as well as maintain Moore’s Law. They call it the 3D transistor, and in all honesty, this stuff goes way over my head. Even the incredibly cheesy n00b-video from Intel doesn’t really make any lightbulbs appear in my head. So…
…that makes this kind of hard to talk about, even though it’s kind of a big deal. Intel claims that the new type of transistor will lead to serious reductions in power consumption while delivering more performance. It allows Intel to extend Moore’s Law, despite doubts over whether the industry would be able to keep it going. These things have been in development for nearly a decade.
“The performance gains and power savings of Intel’s unique 3-D Tri-Gate transistors are like nothing we’ve seen before,” said Mark Bohr, Intel Senior Fellow. “This milestone is going further than simply keeping up with Moore’s Law. The low-voltage and low-power benefits far exceed what we typically see from one process generation to the next. It will give product designers the flexibility to make current devices smarter and wholly new ones possible. We believe this breakthrough will extend Intel’s lead even further over the rest of the semiconductor industry.”
And, here is the cheesy video.
Intel plans to ship the new type of transistor in the second half of this year, and it will be deployed across the entire spectrum of Intel processors – so not just in the top range.
Intel is basically doing this: http://www.eecs.berkeley.edu/~hu/PUBLICATIONS/PAPERS/700.pdf
It always amazes me how we can work on that level, create working components like that, consistently.
[edit]
And by we I don’t mean me…
This new innovation will give the Atom processor a new lease on life. With Windows moving to ARM processors, I thought no one was going to make Atom-based netbooks or small laptops any more … seems like that is not the case now. Future tablet PCs (slates) will have a choice of processor: Intel or ARM.
I think AMD will start manufacturing ARM processors, or otherwise they will need to come up with sub-22nm transistors to survive.
“I think AMD will start manufacturing ARM processors, or otherwise they will need to come up with sub-22nm transistors to survive.”
I wouldn’t say that. They don’t need to compete in that space, though it would likely be in their best interest.
Example: I have a microwave. The company that makes it has not made a portable hand-held microwave, and yet they are still surviving. (It’s not a great direct comparison, as the market isn’t heading towards hand-held microwaves, but still.)
AMD will adapt; they always do.
That does not work. If your microwave maker’s competitor makes a microwave that cooks faster every two weeks, you will replace your own with a faster one. The same goes for power: you won’t use your oven to heat up popcorn. It’s just too much for absolutely no benefit, not even in power or cost.
The microwave analogy is flawed, I think.
No one needs a microwave that heats things up slightly faster than existing ones. You wouldn’t sell it before it starts to heat things up at least an order of magnitude faster (yay, 6-second heat-up times!). Nowadays, people are more likely to choose a microwave based on design or usability criteria, since from a performance point of view they’re all more or less the same.
However, IIRC, the difference in power/performance ratio between an Atom chip and an ARM chip is less than an order of magnitude, but it’s already quite a big deal: reducing the battery life of a mobile device by a factor of 2 or 3 without benefits in another area is pretty much unacceptable, unless you can come up with genius marketing to justify it.
AMD were the first to display a 22nm chip, in 2008, and have already developed a similar 3D transistor tech called FinFET; they just haven’t announced a product using these techs yet. With IBM working on similar stuff and MIPS making a comeback, it looks like ARM will have to work hard to retain the embedded crown.
This s**t just got real 🙂
ARM is not standing still.
1) ARM has already done a 20nm test chip.
The company talked about their CP (Common Platform) 20nm SoC test chip based on a Cortex-M0. The core is 0.2mm x 0.2mm and contains 8K gates, 20K if you count the entire processor subsystem.
Not for production yet, of course.
2) ARM is a partner of AMD spin-off GlobalFoundries.
3) 14nm ARMs are not so far off.
http://armdevices.net/2011/01/21/arm-and-ibm-develop-32nm-28nm-22nm…
I don’t think the main story is that such a transistor was designed, but that they figured out how to mass produce it reliably.
I wish they had a video on that.
Ah well, die-shrink, power-drop, and sometime in the next year or so Intel will deliver something maybe on par with ARM?
(Actually a shame, because if you could get an ARM chip on this process, it would really kick ass…)
I think I’ve read somewhere that the difference in power-performance ratio between Atom chips and ARM ones is no longer something like an order of magnitude, but more like a factor of two or three. If Intel have gone this far, they can easily go ahead and reach parity with ARM, which would be an amazing achievement considering the difference in complexity between the crowded x86 world and ARM’s “cars with no standard steering wheel” approach.
But again, I’m not sure of my initial information…
The power draw issue has a lot more to do with the complete package than merely the CPU. The current Atom SoCs from Intel depend on a multi-chip design. In the case of ARM you can get designs that have everything in one chip (Wi-Fi, GSM, AV, etc.) with a significantly smaller power draw. Furthermore, I’ve yet to see Atom designs with GPU power comparable to what’s found in recent ARM SoCs.
I’ve found a 4x factor mentioned on Wikipedia for the difference in total package consumption (http://en.wikipedia.org/wiki/Intel_atom#Competition), but the Tegra 2 was not even out back then, so the situation has probably changed since…
About GPU power, I won’t argue, though as Intel GPUs have significantly improved in the desktop area with Sandy Bridge & Ivy Bridge, we can probably expect the next Atom to perform better in this area too.
Actually the news came yesterday that Intel would be manufacturing some ARM chips.
For Apple.
What if this technology is used only on Apple-designed ARM components? It may not matter to Intel, which would obviously still get its share of the benefits, and since Apple products are limited to a portion of the market, the low-cost Android-based models could run on Intel just as well as on standard ARM chips.
Atom has the merit of compatibility, while ARM would not run all the Windows applications compiled for x86. I thought that was why Atom was adopted into the netbook market so widely in the first place. Developers might release applications for both x86 and ARM in the future, but I doubt all the major companies would do that.
Or they might just release them in a processor-agnostic format that is compiled into native code just in time – like, say, Java, Android’s pseudo-Java, or .NET.
Or the OS developers may devise a format that contains both x86 and ARM binaries in the same executable, and change the popular development tools to produce them automatically. (Fat binaries, anyone? See the sketch below.)
It’s not as difficult as it seems.
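To make the fat-binary idea concrete, here is a rough sketch of what reading such a container could look like. The format below is made up for illustration (the real-world example would be Apple’s Mach-O universal binaries); the magic number, field names and CPU codes are all assumptions, not an existing OS format.

```c
/* fatdump.c – a toy reader for a hypothetical "fat" executable format.
 * Loosely inspired by the idea behind universal binaries; the magic
 * number, field layout and CPU codes are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define FAT_MAGIC 0xFA7B0001u          /* made-up marker identifying a fat file */

enum cpu_type { CPU_X86 = 1, CPU_X86_64 = 2, CPU_ARM = 3 };  /* illustrative codes */

struct fat_header {                    /* once, at the start of the file */
    uint32_t magic;                    /* must equal FAT_MAGIC */
    uint32_t narch;                    /* number of architecture slices that follow */
};

struct fat_arch {                      /* one per embedded native binary */
    uint32_t cputype;                  /* which CPU this slice targets (enum cpu_type) */
    uint32_t offset;                   /* byte offset of the slice inside the file */
    uint32_t size;                     /* size of the slice in bytes */
};

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <fat-binary>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header hdr;
    if (fread(&hdr, sizeof hdr, 1, f) != 1 || hdr.magic != FAT_MAGIC) {
        fprintf(stderr, "not a fat binary\n");
        fclose(f);
        return 1;
    }

    /* A real loader would pick the slice matching the CPU it is running on
     * and map only that one; here we simply list every slice. */
    for (uint32_t i = 0; i < hdr.narch; i++) {
        struct fat_arch arch;
        if (fread(&arch, sizeof arch, 1, f) != 1) break;
        printf("slice %u: cputype=%u offset=%u size=%u\n",
               (unsigned)i, (unsigned)arch.cputype,
               (unsigned)arch.offset, (unsigned)arch.size);
    }

    fclose(f);
    return 0;
}
```

A loader on an x86 machine would pick and map the x86 slice, an ARM device would pick the ARM slice, and the development tools would just compile the program twice and concatenate the results.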
What a shame. Imagine using this process to create ARM processors. You would be able to run 4 or 8 cores at higher clock speeds with the same battery life.
Honey, I shrunk the nerd!
Yeah, aside from not even pretending to yell loudly, they didn’t do the needed ‘echo’ audio effect to demonstrate how tiny he was. But at least he should have an easier time working on new transistors now that he’s smaller.
Ivy Bridge is looking good – it is tempting to wait until it is released and the Mac line-up is updated sometime next year to see how everything fits together. Hopefully by that time Microsoft will be at a stage where Windows 8 can be shown publicly in beta form, with an estimated shipment date as well.
I wonder, though, whether Intel is willing to license said technology or whether they’ll keep it Intel-only – and, off on a tangent, I wonder the same thing when it comes to Thunderbolt as well.
http://arstechnica.com/business/news/2011/05/intel-re-invents-the-m…
Alright, if I understand the video correctly (I haven’t had the chance to study MOSFETs in courses yet, only JFETs), they have basically managed to put the active layers inside the gate electrode, to ensure that there’s a larger contact area between the two.
Sounds smart, but I wonder why we couldn’t do this before. At first look, it doesn’t seem like a huge modification of the process technology…
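For what it’s worth, a rough way to see where the gain comes from (this is the standard tri-gate/FinFET relation, not something taken from Intel’s announcement): the gate now wraps around three sides of a raised silicon fin, so the effective channel width per fin is roughly

\[ W_{\mathrm{eff}} \approx 2\,H_{\mathrm{fin}} + W_{\mathrm{fin}} \]

where \(H_{\mathrm{fin}}\) is the fin height and \(W_{\mathrm{fin}}\) the fin width, compared to just the flat width of a planar gate occupying the same footprint. More effective width means more drive current at the same voltage (or the same current at a lower voltage), and surrounding the channel on three sides also gives the gate better electrostatic control over it, which cuts leakage.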
I imagine the major difficulty is being able to etch and deposit materials for such a feature reliably in production. This has been in research for a long time, and this paper:
http://www.eecs.berkeley.edu/~hu/PUBLICATIONS/PAPERS/700.pdf
suggests its use at 50nm but scalable to 20nm. I imagine it is easily doable with molecular beam epitaxy (MBE) as opposed to CVD. However, MBE is not practical for use in mass-production environments.
I smell Socket 1154 coming … suddenly my 12-month-old computer seems, well, old, antiquated even.