Linked by fran on Tue 23rd Nov 2010 22:26 UTC
Hardware, Embedded Systems The CPU industry is working on 16nm chips due to debut around 2013, but how much smaller can it go? According to the experts, not much smaller: at 11nm they hit a problem relating to a quantum tunneling phenomenon. So what's next? Yes, they can still add core after core, but this might reach a plateau by around 2020; AMD's CTO predicts the 'core wars' will subside by then (there does seem to be life left in adding cores, as Intel demonstrated a few days ago with the feasibility of a 1000-core processor). A Silicon.com feature discusses some potential technologies that could enhance or supersede silicon.
Thread beginning with comment 450974
RE[3]: quantum tunneling
by onetwo on Wed 24th Nov 2010 08:18 UTC in reply to "RE[2]: quantum tunneling"

@thavith_osn: Humankind has only ever had "leaky" transistors; we never had any other kind. Thus, leakage has never been solved: it is due to the manufacturing process on one side, and on the other side to our understanding of quantum phenomena (not "quanting" phenomena, might I add, which was my original intent in pointing to the Wikipedia article, which in its own right is rather poorly written). The problem is now exacerbated by the power/heat you put in and give off for the retention of information-entropy you get per bit per unit time. For example, for the next generation of FLASH cells you need a retention rate on the order of one electron per year. I could have said 1 year, 2 years, 10 years: with any modern VLSI process it still sounds, let's say, a bit absurd.

One more thing worth noting: quantum phenomena are observed even at the 130nm node; there, however, one simply doesn't care. These phenomena do not magically appear or disappear at different scales. Quantum phenomena are present in daily life, when one boils eggs or tries to walk through walls; they just become vanishingly improbable, as in the latter case.

@Neolander: I agree with you. But you have to ask yourself: what is the "speed limit" of processors? I hope you are referring to a current technological "speed limit". Even so, why couldn't the same "lean" engineering be applied to hardware engineering, thus alleviating the "stagnation"?

Thus I arrive at my point (at last). My view is that the bifurcation of computational science demanded by the commodity-driven industry is the problem; sometimes it is less observable, sometimes more. I should note, however, that it is a natural bifurcation, an evolutionary one. It is a necessity stemming from the conceptualization of the creative process: language and grammar conceptualized per unit time for the successful creation of ye working thing that could be purchased. It is far easier to do that on "a sheet of paper" than on a couple of million transistors specially designed for a purpose. M?

But nonetheless, our technological advance is an evolutionary process. It cannot stagnate. It only stagnates when it is anthropomorphized in the context of the "global economy".

Reply Parent Score: 1

RE[4]: quantum tunneling
by Neolander on Wed 24th Nov 2010 09:10 in reply to "RE[3]: quantum tunneling"

[q]@Neolander: I agree with you. But you have to ask yourself: what is the "speed limit" of processors? I hope you are referring to a current technological "speed limit". Even so, why couldn't the same "lean" engineering be applied to hardware engineering, thus alleviating the "stagnation"?[/q]

If we can't shrink transistors any further due to the tunnel effect, nor make chips bigger because electric currents (or light, in recent designs from Intel) have a finite propagation speed, we'll reach a maximum number of transistors per independent processor.
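
A rough back-of-the-envelope sketch of that propagation-speed argument (Python; the 3 GHz clock and the "signals travel at about half the speed of light on-chip" figure are my own illustrative assumptions, not numbers from this thread):

[code]
# How far can a signal travel within one clock cycle?
# Illustrative assumptions: 3 GHz clock, on-chip signals propagating
# at roughly half the speed of light in vacuum.
C_VACUUM = 3.0e8                 # speed of light, m/s
SIGNAL_SPEED = 0.5 * C_VACUUM    # assumed effective on-chip signal speed, m/s
CLOCK_HZ = 3.0e9                 # assumed clock frequency, Hz

cycle_time = 1.0 / CLOCK_HZ              # ~0.33 ns per cycle
distance_m = SIGNAL_SPEED * cycle_time   # distance covered in one cycle
print(f"One cycle: {cycle_time * 1e9:.2f} ns, "
      f"signal travels ~{distance_m * 100:.1f} cm")
# ~5 cm: a synchronously clocked chip can't grow much beyond this,
# which (together with the shrinking limit) bounds transistors per processor.
[/code]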

If we keep arranging transistors inside processors in the same manner, we'll hence reach a speed limit.

If we put these transistors together differently or use them more efficiently, for example by switching to a "leaner" processor architecture as I mentioned, we can reach higher speeds. But that is not due to improvements in transistor technology, in the way we cut silicon, or things like that. It's a more abstract kind of progress.

[q]Thus I arrive at my point (at last). My view is that the bifurcation of computational science demanded by the commodity-driven industry is the problem; sometimes it is less observable, sometimes more. I should note, however, that it is a natural bifurcation, an evolutionary one. It is a necessity stemming from the conceptualization of the creative process: language and grammar conceptualized per unit time for the successful creation of ye working thing that could be purchased. It is far easier to do that on "a sheet of paper" than on a couple of million transistors specially designed for a purpose. M?

But nonetheless, our technological advance is an evolutionary process. It cannot stagnate. It only stagnates when it is anthropomorphized in the context of the "global economy".
[/q]
Not sure I understand this part, and it looks cut off in the middle ("M?"). Can you please try to explain it differently?

Reply Parent Score: 2

RE[5]: quantum tunneling
by ndrw on Wed 24th Nov 2010 13:35 in reply to "RE[4]: quantum tunneling"

Actually, progress as a whole (although not as fast as Moore's law predicts) continues thanks to decreasing transistor sizes and some system-level techniques like integration, supply voltage gating, etc.

Moore's law additionally dictates that the speed of individual transistors should grow and their power consumption should fall, but that has not been possible since the 130-90nm nodes (~7 years ago), and only a small amount of progress has been made on that front since then.

This limit has nothing to do with minimum feature sizes or quantum effects - it's simply due to the fact that we are no longer able to decrease the supply voltage, because of the physics of the transistors themselves. The transconductance of MOS transistors (in the subthreshold range) is around one decade per 100mV (and will never be better than the one decade per 60mV of BJTs).
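
For reference, the "decade per 60mV" figure is the room-temperature thermal limit on subthreshold swing, ln(10)*kT/q. A minimal sketch of that calculation (standard physical constants and 300 K are my inputs, not values from the comment):

[code]
import math

# Ideal (thermal-limit) subthreshold swing: S = ln(10) * k * T / q,
# i.e. the gate-voltage change needed for one decade of subthreshold current.
k = 1.380649e-23    # Boltzmann constant, J/K
q = 1.602177e-19    # elementary charge, C
T = 300.0           # room temperature, K

swing_mV = math.log(10) * k * T / q * 1e3
print(f"Ideal subthreshold swing at {T:.0f} K: {swing_mV:.1f} mV/decade")
# ~59.5 mV/decade; real MOS devices sit above this (hence the ~100 mV/decade
# figure above), and no conventional transistor can go below it.
[/code]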

So if you want a 5-decade span between the Ion and Ioff currents (the first for high switching speed, the second for low leakage), a 0.25V margin to compensate for process variability, and another 0.25V to put the transistor into the saturation/linear ranges (for Ion), you'll find that you need a gate driving voltage (and thus a supply voltage) of around 1V. There is no way to decrease this voltage without cutting corners, that is, compromising on either switching speed or leakage power.
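
A minimal sketch of that supply-voltage budget, using the figures from the paragraph above (variable names are mine, for illustration only):

[code]
# Rough lower bound on the usable supply voltage, adding up the budget
# described above (all figures illustrative).
ion_ioff_decades = 5           # desired span between on- and off-current
swing_per_decade = 0.100       # ~100 mV of gate voltage per decade (MOS), V
variability_margin = 0.25      # margin for process variability, V
overdrive_margin = 0.25        # margin to drive the transistor hard on, V

vdd_min = ion_ioff_decades * swing_per_decade + variability_margin + overdrive_margin
print(f"Minimum supply voltage: ~{vdd_min:.2f} V")
# ~1.0 V: going lower means giving up Ion (speed) or accepting more Ioff (leakage).
[/code]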

Development in the last 5 years has been heading in the direction of higher quantity rather than quality. Making a 2x faster CPU costs 8x more power? Well, let's just make 2 CPUs, flood them with cache memory, add whatever peripherals on-chip we can think of, etc. That's not as good as Moore's law proper, but it still pushes things forward.
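
A small sketch of the arithmetic behind "2x faster costs 8x more power", assuming the common rule of thumb that dynamic power scales as C*V^2*f and that reaching a higher frequency requires raising the supply voltage roughly in proportion (an approximation, not an exact law; function and variable names are mine):

[code]
# Relative dynamic power under the rule of thumb P ~ C * V^2 * f.
def relative_power(freq_scale: float, voltage_scale: float) -> float:
    return voltage_scale ** 2 * freq_scale

one_fast_core = relative_power(freq_scale=2.0, voltage_scale=2.0)       # one core at 2x clock
two_slow_cores = 2 * relative_power(freq_scale=1.0, voltage_scale=1.0)  # two cores at 1x clock

print(f"One 2x-faster core: ~{one_fast_core:.0f}x power")               # ~8x
print(f"Two cores at the original clock: ~{two_slow_cores:.0f}x power") # ~2x
[/code]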

Reply Parent Score: 1