Linked by fran on Tue 23rd Nov 2010 22:26 UTC
Hardware, Embedded Systems The CPU industry is working on 16nm chips to debut by around 2013, but how much smaller can it go? According to the smart guys, not much smaller: at around 11nm they hit a problem related to the quantum tunneling phenomenon. So what's next? Yes, they can still add core after core, but this might reach a plateau by around 2020. AMD's CTO predicts the 'core wars' will subside by 2020 (there still seems to be life left in adding cores, as Intel demonstrated a few days ago with the feasibility of a 1000-core processor). A Silicon.com feature discusses some potential technologies that could enhance or supersede silicon.
Thread beginning with comment 450972
To view parent comment, click here.
To read all comments associated with this story, please click here.
RE[2]: quantum tunneling
by Neolander on Wed 24th Nov 2010 06:41 UTC in reply to "RE: quantum tunneling"
Neolander
Member since:
2010-03-08

Well, this is not the same as the CD-ROM issue, interestingly enough.

Once you reach the maximum speed at which a CD-ROM drive can spin, the fight is over. There's no way to make a standardized optical storage technology go any faster; you have to create a new optical storage medium with higher data density, which reduces the need for a fast-spinning drive. But that new medium will be incompatible with existing drives, so its adoption will be quite slow.

With processors, on the other hand, once you've reached the speed limit of a single core, all you have to do is put several of them on the same chip. This way, you can reliably claim that you have packed N times the usual processing power into that chip.

If normal people, whose tasks don't scale well across multicore chips, start complaining that they don't get N times the performance, you can then blame the software developers for running into the limits of the human mind's algorithmic way of thinking.
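To make that gap concrete, here's a rough Amdahl's law back-of-envelope (my own illustration, with made-up numbers): if only a fraction p of a task can run in parallel, N cores give an ideal speedup of 1 / ((1 - p) + p/N), which flattens out very quickly.

    # Back-of-envelope Amdahl's law: ideal speedup from n cores when only
    # a fraction p of the work can run in parallel. Illustrative values only.
    def amdahl_speedup(p, n_cores):
        return 1.0 / ((1.0 - p) + p / n_cores)

    if __name__ == "__main__":
        for n in (2, 4, 8, 64, 1000):
            print(f"{n:5d} cores: {amdahl_speedup(0.9, n):5.2f}x")

Even with 90% of the work parallel, the curve saturates below 10x, which is why "N times the processing power" never turns into N times the performance for ordinary desktop tasks.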

Then, as N grows and CPUs can't shrink any further, buses will increasingly become the bottleneck of computing performance. Issues with the speed-of-light limit and congestion in memory access will become more and more serious. So hardware manufacturers will adopt a decentralized memory model in which cores don't even share memory with each other, basically becoming independent computers except for inter-core I/O. The amount of software that can't scale well across multiple cores will grow even further, yet hardware manufacturers will still be able to claim that they've reached a higher theoretical performance and that software makers are to blame for not reaching it.
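Here's a rough sketch of what that share-nothing model looks like from the software side (my own illustration, using Python's multiprocessing module as a stand-in for hardware message passing): each "core" owns its data outright and only communicates over explicit channels.

    # Share-nothing sketch: each worker owns its chunk of data and reports
    # back only through an explicit message channel (no shared memory).
    from multiprocessing import Process, Queue

    def worker(chunk, out_queue):
        out_queue.put(sum(x * x for x in chunk))  # local work on local data

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        chunks = [data[i::n_workers] for i in range(n_workers)]

        results = Queue()
        procs = [Process(target=worker, args=(c, results)) for c in chunks]
        for p in procs:
            p.start()
        total = sum(results.get() for _ in procs)
        for p in procs:
            p.join()
        print(total)

The point is the communication pattern, not the speedup: every piece of state has one owner and anything shared must be sent explicitly, which is exactly the restructuring most existing desktop software was never written for.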

Unless we find a new way to reach higher performance in normal software (as opposed to server software in the ideal situation where tasks are CPU-bound), its performance will stagnate. We'll then have to learn again how to write lean code, or design new programming models that work across the new hardware but imply totally different ways of thinking. Or we'll create new CPU architectures that allow higher performance without improving silicon technology, but those will take years to become widely used.

One thing is for sure: for the next decade, improvements in the performance of everyday software won't come from improvements in silicon technology. Actually, I think that's a good thing.

Reply Parent Score: 3

RE[3]: quantum tunneling
by onetwo on Wed 24th Nov 2010 08:18 in reply to "RE[2]: quantum tunneling"
onetwo Member since:
2009-01-22

@thavith_osn: As humankind, we only have "leaky" transistors; we have never had any other kind. Leakage has never been solved: it is due to the manufacturing process on one side, and on the other to our understanding of quantum phenomena (not "quanting" phenomena, might I add, which was my original intent in pointing to the Wikipedia article, which in its own right is rather poorly written). The problem now is exacerbated by the power/heat you put in and give off for the retention (or loss) of information-entropy you get, per bit per unit of time. For example, for the next generation of flash cells you need a retention rate on the order of one electron per year. I could have put 1 year, 2 years, 10 years: with any modern VLSI process it still sounds, let's say, a bit absurd.
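To put some purely illustrative numbers on that (my own assumptions, not figures for any actual process node): the charge on a floating gate is Q = C·ΔV, so the number of electrons separating the programmed and erased states is Q divided by the elementary charge.

    # Back-of-envelope electron count on a flash floating gate.
    # Capacitance and voltage shift are assumed, illustrative values.
    ELEMENTARY_CHARGE = 1.602e-19   # coulombs

    gate_capacitance = 1e-17        # farads (~10 aF, assumed small cell)
    threshold_shift = 2.0           # volts between programmed/erased states

    electrons = gate_capacitance * threshold_shift / ELEMENTARY_CHARGE
    print(f"Electrons separating the two states: ~{electrons:.0f}")
    # ~125 electrons: losing even a few per year erodes the read margin.

With only a hundred-odd electrons defining a bit, a leakage budget of "one electron per year" stops sounding like an exaggeration.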

One more thing worth noting: quantum phenomena are observed even at the 130nm node; there, however, one just doesn't care. These phenomena do not magically disappear at different scales. Quantum phenomena are present in daily life, when one boils eggs or tries to walk through walls; they simply become improbable, as in the latter case.
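A rough sketch of why scale is everything here (standard rectangular-barrier estimate, with assumed barrier height and widths): the tunneling probability falls off roughly as exp(-2·kappa·d), so a nanometre more or less changes the odds by many orders of magnitude.

    # Rough tunneling probability through a rectangular barrier:
    # T ~ exp(-2 * kappa * d), kappa = sqrt(2 * m * U) / hbar.
    # Barrier height (1 eV) and widths are assumed, illustrative values.
    import math

    HBAR = 1.055e-34          # J*s
    M_ELECTRON = 9.109e-31    # kg
    EV = 1.602e-19            # J

    def tunneling_probability(barrier_ev, width_m):
        kappa = math.sqrt(2 * M_ELECTRON * barrier_ev * EV) / HBAR
        return math.exp(-2 * kappa * width_m)

    for width_nm in (1.0, 3.0, 10.0):
        t = tunneling_probability(1.0, width_nm * 1e-9)
        print(f"{width_nm:4.1f} nm barrier: T ~ {t:.1e}")

The numbers aren't meant to match any particular process; the point is the exponential dependence on thickness, which is why leakage you can ignore at 130nm becomes the limiting factor a few nodes later (and why walking through walls stays safely improbable).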

@Neolander: I agree with you. But you have to ask yourself: what is the "speed limit" of processors? I hope you are referring to a current technological "speed limit". But even so, why wouldn't the same "lean" engineering be applied to hardware engineering, thus alleviating the "stagnation"?

Thus I arrive at my point (at last). My view is that the bifurcation of computational science demanded by the commodity-driven industry is the problem; sometimes it is less observable, sometimes more. I should note, however, that it is a natural bifurcation, an evolutionary one. It is a necessity stemming from the conceptualization of the creative process: language, grammar conceptualization per unit time for the successful creation of ye working thing that could be purchased. It is far easier to do it on "a sheet of paper" than on a couple of million transistors specially designed for a purpose. M?

But nonetheless, it is an evolutionary process, our technological advance. It cannot stagnate. It only stagnates when it is anthropomorphized in the context of the "global economy".

Reply Parent Score: 1

RE[4]: quantum tunneling
by Neolander on Wed 24th Nov 2010 09:10 in reply to "RE[3]: quantum tunneling"
Neolander Member since:
2010-03-08

[q]@Neolander: I agree with you. But you have to ask yourself: what is the "speed limit" of processors? I hope you are referring to a current technological "speed limit". But even so, why wouldn't the same "lean" engineering be applied to hardware engineering, thus alleviating the "stagnation"?[/q]

If we can't shrink transistors any further due to the tunnel effect, nor make chips bigger because electric currents (or light, in recent designs from Intel) have a finite propagation speed, we'll reach a maximum number of transistors per independent processor.
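A quick back-of-envelope on that propagation limit (my own numbers, just to show the scale): at 3 GHz, light in vacuum covers only about 10 cm per clock cycle, and real on-chip signals are noticeably slower.

    # How far a signal can possibly travel in one clock cycle.
    # Clock rate and on-chip propagation factor are assumed values.
    SPEED_OF_LIGHT = 3.0e8      # m/s, vacuum

    clock_hz = 3.0e9            # 3 GHz
    signal_fraction = 0.5       # assume on-chip signals manage ~half of c

    per_cycle_m = SPEED_OF_LIGHT / clock_hz
    print(f"Light per cycle:  {per_cycle_m * 100:.1f} cm")
    print(f"Signal per cycle: {per_cycle_m * signal_fraction * 100:.1f} cm")

Roughly 10 cm in vacuum, about 5 cm for the signal: anything much larger than a few centimetres can't behave as one synchronous processor at that clock rate, which is the "chips can't just get bigger" half of the argument.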

If we keep arranging transistors inside processors in the same manner, we'll hence reach a speed limit.

If we put these transistors together differently or use them more efficiently, for example by switching to a "leaner" processor architecture as I mentioned, we can reach higher speeds. But that wouldn't be due to improvements in transistor technology, in the way we cut silicon, or anything like that. It's a more abstract kind of progress.

[q]Thus I arrive at my point (at last). My view is that the bifurcation of computational science demanded by the commodity-driven industry is the problem; sometimes it is less observable, sometimes more. I should note, however, that it is a natural bifurcation, an evolutionary one. It is a necessity stemming from the conceptualization of the creative process: language, grammar conceptualization per unit time for the successful creation of ye working thing that could be purchased. It is far easier to do it on "a sheet of paper" than on a couple of million transistors specially designed for a purpose. M?

But nonetheless, it is an evolutionary process, our technological advance. It cannot stagnate. It only stagnates when it is anthropomorphized in the context of the "global economy".[/q]
Not sure I understand this part, and it looks cut off in the middle ("M?"). Can you please try to explain it differently?

Reply Parent Score: 2