I should begin by saying that this article is different from the others. While my other articles can be quite useful in aiding the design or purchase of your next computer system, this article is simply a fun look at things to come.
Introduction
As a physics major in my fifth year at the University of Washington, I am often intrigued by the applications of the principles I am studying. My focus, modern physics, is quite exciting to me, as I am often studying theories and experiments performed only months earlier. In a Quantum Mechanics class, I was given the opportunity to give a short lecture on the application of quantum mechanics to the area of computing, and this article is focused on that exciting subject. The topic of my lecture and of this article is Quantum-Dot Cellular Automata, a medium which may support super-small computers in as little as ten years!
QCA — What Is It?
We must begin by describing a quantum dot, which is a very small grouping of conducting atoms. How small? Approximately 4 billion quantum dots would fit on the head of a pin. A Quantum-Dot Cellular Automata (QCA) cell consists of four quantum dots in a square arrangement, as pictured below. Each QCA cell contains two additional electrons, which settle onto opposite corners of one diagonal due to their electric repulsion. Thus, a QCA cell has two states, which we can call -1 and +1. The electrons can “flip” from one state to the other through a quantum phenomenon called tunneling. Incidentally, it is the properties of quantum tunneling that give each state its stability. So, what we have here is a very, very small switch. This switch is the equivalent of today’s 1 (voltage high) and 0 (voltage low) in computer hardware.
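If you like to tinker, here is a rough Python sketch of the geometry (my own toy illustration, not a real quantum simulation; the energies are in arbitrary units). It shows why the two electrons prefer the diagonals: of the six ways to place two electrons on the four corners of a square, the two diagonal placements give the greatest separation and therefore the lowest repulsion energy.

```python
# Toy illustration: enumerate the six placements of two electrons on the
# four corners of a unit square and rank them by Coulomb repulsion (~1/r).
from itertools import combinations
from math import dist

corners = [(0, 0), (1, 0), (1, 1), (0, 1)]  # the four quantum dots

placements = []
for a, b in combinations(corners, 2):
    r = dist(a, b)                        # separation between the two electrons
    placements.append((1.0 / r, (a, b)))  # repulsion energy in arbitrary units

for energy, dots in sorted(placements):
    print(f"E = {energy:.3f}  electrons at {dots}")

# The two lowest-energy placements are the diagonals (r = sqrt(2)).
# Those two configurations are the cell's two states, -1 and +1.
```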
Simple QCA Uses?
The most straightforward use of a QCA cell is in the transmission of information. If you put two cells side by side, the electric interaction between their electrons will cause both cells to assume the same state. Therefore, if you have a long line of cells and you define the state of the first cell, the entire line will quickly assume that same state.
This is called a QCA wire. Since 4 billion quantum dots can fit on the head of a pin, you could have literally thousands of these wires in something as thin as a strand of hair. Amazing! On top of that, the data transmission is very reliable. In today’s wiring, great care must be taken to prevent signal degradation. With QCA, quantum tunneling effects tend to stabilize the system. Remember, those electrons want to stay as far apart as possible, and even a slight nudge will send them all the way to the corners. Therefore, a QCA system is, by nature, not vulnerable to signal degradation.
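To see why the signal is so robust, here is another toy sketch (again, just an illustration of the idea, not a physical model; the function name is my own). Hold the first cell, the driver, in a chosen state and let every other cell relax to match its neighbor. However the line starts out, it settles into the driver’s state.

```python
# Toy model of a QCA wire: the driver cell is held fixed, and each cell
# relaxes by copying its neighbor's polarization (+1 or -1).
def relax_wire(n_cells: int, driver_state: int) -> list[int]:
    cells = [-driver_state] * n_cells   # start the line in the "wrong" state
    cells[0] = driver_state             # the driver cell is held fixed
    for i in range(1, n_cells):
        cells[i] = cells[i - 1]         # minimize energy by matching the neighbor
    return cells

print(relax_wire(10, +1))   # the whole line ends up at +1
print(relax_wire(10, -1))   # the whole line ends up at -1
```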
More Complex Uses
Of course, a QCA wire is just the beginning. By being creative in joining multiple wires in different orientations, we are able to create functioning logic gates. Logic gates are the building blocks of computer hardware. Let’s take an example. A conventional AND gate looks at two inputs and outputs a 1 only if both inputs are 1. Under the QCA model, we have a similar building block: a QCA majority gate looks at three inputs and returns a +1 if at least two of the inputs are +1. This gate is pictured below.
With similar arrangements, we are able to construct all the same logic devices that are used in today’s computer hardware. We are able to create inverters, AND gates, OR gates, XOR gates, and even gates whose function is programmable! For more information on these functions, Notre Dame has a great resource, which you can access by clicking here.
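If the majority gate seems like an odd building block, this little Python sketch shows how it works and how fixing one input to a constant turns the very same gate into an AND or an OR (the function names here are my own, purely for illustration).

```python
# Logic of a QCA majority gate, using +1/-1 for the two cell states.
def majority(a: int, b: int, c: int) -> int:
    """Return +1 if at least two of the three inputs are +1, otherwise -1."""
    return +1 if (a + b + c) > 0 else -1

def and_gate(a: int, b: int) -> int:
    return majority(a, b, -1)   # a fixed -1 input makes the gate behave like AND

def or_gate(a: int, b: int) -> int:
    return majority(a, b, +1)   # a fixed +1 input makes the gate behave like OR

for a in (-1, +1):
    for b in (-1, +1):
        print(f"a={a:+d} b={b:+d}  AND={and_gate(a, b):+d}  OR={or_gate(a, b):+d}")
```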
Problems
The main problem with this entire concept is thermal effects. In such a small system, thermal energy can be enough to knock an electron out of position, throwing the entire system off. It is in this problem, however, that we find even greater potential. Since the two electrons in a QCA cell repel each other with greater force the closer together they are, we can increase the stability of the system by making it even smaller! Unfortunately, we simply do not have the fabrication techniques to do that today. With the smallest QCA arrays built so far, stable operation has been achieved at no higher than 7 kelvin, a temperature that is not only fatal to humans but also very expensive to maintain.
If the QCA cell size could be made as small as a few angstroms (on the order of the size of a molecule), the repulsion energy of the electrons would be comparable to atomic energy levels: several electron volts. This is not feasible with the semiconductor implementations we have been discussing, but it may ultimately be attainable with molecular electronics. For example, if we could engineer a single molecule into the shape of a QCA cell, as pictured to the left, the thermal effects would become negligible. In fact, with a QCA cell of that size, the maximum operating temperature increases to 700 K, a temperature hot enough to melt a human. In other words, thermal effects would no longer be a problem!
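As a back-of-the-envelope check (my own rough numbers, not figures from the QCA researchers), you can compare the bare Coulomb repulsion between two electrons to the thermal energy k_B·T. The real figure of merit is the much smaller “kink” energy between correctly and incorrectly polarized cells, which is why today’s semiconductor cells, tens of nanometers across, only work at a few kelvin; but the scaling with size is the point.

```python
# Rough scaling argument: the electron repulsion grows as 1/d, so shrinking
# the cell raises the energy scale far above thermal energy at room temperature.
K_B_EV = 8.617e-5       # Boltzmann constant, eV per kelvin
COULOMB_EV_NM = 1.44    # e^2 / (4*pi*eps0), in eV * nm

def repulsion_ev(d_nm: float) -> float:
    """Bare Coulomb energy (eV) of two electrons separated by d_nm nanometres."""
    return COULOMB_EV_NM / d_nm

thermal_300k = K_B_EV * 300
for d_nm in (20.0, 2.0, 0.3):   # semiconductor dots down to a molecular-scale cell
    print(f"d = {d_nm:5.1f} nm  ->  E ~ {repulsion_ev(d_nm):5.2f} eV  "
          f"(vs. k_B*T at 300 K = {thermal_300k:.3f} eV)")

# At molecular sizes (a few angstroms) the bare energy scale reaches several eV,
# far above room-temperature thermal energy -- the article's "700 K" regime.
```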
Coming Soon?
Clearly, we have a ways to go. We will likely see the first quantum computers in cryogenic environments, but as we become able to produce smaller and smaller arrays, we should reach room temperature and beyond in no time. The end result will be a quantum computer that is much faster than today’s computers and too small to see without a magnifying glass! Just how fast will they be? Systems operating at room temperature are projected to be 10 to 100 times faster than current systems, denser by a factor of 5 to 100, and lower in power consumption by a factor of more than 50. Logic devices will operate at over 5.0 GHz, and QCA wires are expected to achieve transmission rates of 15 terabits per second!
About the Author:
Jon Bach is the founder and owner of Puget Sound Systems, a company in Redmond, WA, that specializes in custom-built computer systems. He appreciates all comments and feedback, and encourages you to drop him an email at [email protected]!
Well, I have been in the semiconductor industry for over 20 yrs as a VLSI designer, and I have seen too many projections that NMOS, then CMOS, were near or at the end of their life. In the early 80s, Bell Labs PhDs predicted the 64K DRAM would probably be the end of the line; today there are 1G prototypes & nobody in that biz can figure out how to make money.
The economics drives the technology to refine itself continuously every 1 or 2 yrs. Companies come & go, but the biz keeps going. Unfortunately, the cost of fabs is out of this world & will keep going up.
Nobody can predict exactly how CMOS devices will be fabbed in 20 yrs, but the industry believes it probably has another 20 yrs following Moore’s Law to a lesser degree. Intel expects to see a 10 GHz Pentium & TSMC (the world’s no. 1 open foundry) expects 0.05u in a few years. 0.13u is where the P4 is now, so 0.05u is nearly 10x denser and perhaps a few times faster.
The main issue for the future is getting rid of heat, followed by reliability, manufacturability, & the wicked economics. I personally only want a couple of tiny CPUs that do real work at about 1 GHz or so and are cool enough to be fanless & quiet. Check out “mini ITX” from Via or SV40/50; that is where general computing should be going for the next few years. PCs will finally look & assemble more like HiFi units with simple serial cables (SATA, FireWire, USBx etc). For more speed, various parallel & reconfigurable computing (FPGA) schemes will be used. Big-ass hot & noisy ATX cases will be history.
Personally I don’t buy into the Quantum Dot hype one little bit. I have seen Magnetic Bubble memories, Optical computing devices, Multi Level Logic, Asynchronous logic, FED displays, Nanotubes, Josephson Junctions, & Superconducting technology hyped up & fail miserably in the market. Technologies that only work in theory at low K temps have never ever had any commercial success. If I had to bet on just one exotic technology, I would pick nanotubes; they just get more & more interesting for lots of uses.
I never believe University pundits, I prefer to believe those companies that deliver the goods.
Just my $2B worth (a new fab)
JJ
This would signal the end of the clock-speed race… as I understand it, there would be no such thing as clock speed, and the ancient von Neumann architecture of computers would be blown away. (But the x86 ISA will still linger like dry rot.)
However, I don’t think this will be in consumers’ hands in the next ten years. I’ll leave that for the next 100 years. Even then it will be 10^n times more efficient and powerful than semiconductors, but 10^n times more expensive too.
And it will take the last refuge of the hobbyist away. It’s difficult enough working with surface-mount components and BGAs; it’ll be impossible with these quantum chips…
just wanted to say that
Thank you. That was interesting. I don’t know enough about the science involved, but I can see a whole lot of uses for something like this.
Zmai.
but will it run beos?
I’ve no doubt that one of these advanced technologies will eventually replace silicon, but while companies can still build fabs and make money there’s no pressing need for such a radical shift.
If someone gets one of these new technologies to work and finds a way to produce them in volume at low cost then we might see a shift.
>but will it run beos?
sure, but only on an emulator 🙁
I mean, look at the petroleum product industry. Cars still burn fuels, pollute and waste resources when there are several far better technologies to use. As long as there is money to be made by keeping the status quo, businesses will keep it. New technology be damned, it’s the buck that matters.
There are probably more than 100,000 engineers & scientists all over the world developing and refining the so-called “old crap”. And you know what, it bloody works, and works better each generation. The only thing wrong is again heat generation and very high investment costs. FPGAs can substitute for the latter at higher unit costs & more flexibility. And the hobbyist can design their own chips for free with smaller FPGAs & downloadable tools; see the Xilinx & Altera sites. A $10 SpartanII + free tools is equiv to what a smaller company might have done as an ASIC 5 yrs ago for a few $100K. You still need to know the basic EE stuff, but it isn’t rocket science. Microprocessor design can be done in the basement.
There are probably fewer than 100 PhDs working on Quantum dots (I would guess a few at IBM). Whatever they do is irrelevant unless they come up with a few dozen serious miracles.
Nobody is trying to keep the status quo; there is no evil hand at play here. Silicon just happens to be one of the most wonderful materials the human race has ever manipulated. Silicon is to electronics what carbon is to life. Check the periodic table to see why!
Also, clocks are unlikely to go away either; designing without clocks has not been proven on any commercial chips, although there is a clockless ARM project. Most EEs don’t buy into that one either.
If Quantum dots ever come to a store near you, you can bet they will be made on silicon with the same type of fab equipment used for CMOS, only even more expensive. They won’t be made by faeries & magic dust. Also, the low-K refrigerator will be hundreds of times bigger than your current PC.
I am sure that if OBOS pulls it off, it will run on HW way into the future even better than MS, because of its pervasive threading, which will suit the multi-CPUs that will also be pervasive long before CMOS is replaced by ???. One might wonder if we will all end up being emulated?
Remember Cold Fusion!!!!
In a short response to JJ:
In regards to asynchronous logic (clockless) computing: Many companies have produced clockless CPUs, but the problem is not in the technology. The problem was selling them to people. People have become so attached to the MHz measure of performance that they have difficulty understanding why a higher MHz is not necessarily better, let alone understanding a CPU without a MHz measure AT ALL. Intel has been playing with asynchronous logic for a while. They built an asynchronous version of the original Pentium that ran faster, cooler, and used less power than its clocked brethren. The P4 gains much of its speed not from its high clock rate, but from the fact that many of its internal parts run asynchronously without a clock:
http://www.iht.com/articles/42087.html
Also, Sun, IBM, and Motorola are all using and investigating the integration of clockless technology to help increase the performance of their CPUs and other logic circuits:
http://www.theregister.co.uk/content/2/17356.html
http://www.cs.man.ac.uk/amulet/
Reading JJ’s post on Quantum Dots, and some of the other posts too, suggests that some of us have forgotten one of the essential differences between research and engineering.
In a nutshell, the engineer’s job is to use the best currently available technology to get the job done cheaply and fast, as soon as possible; the researcher’s job is to look way ahead to what might some day be possible, without regard to practicality right this instant.
By its very nature, most of what sits on the cutting edge of research will never become practical. When you’re exploring far-out possibilities, most of them will never pan out. But the few that *do* work out will become the best available technology for the next generation of engineers, and drive the next wave of engineering advances. So the “failure” of many research developments (such as magnetic bubble memories) is not really a failure at all; it’s business as usual for research, where there are more blind alleys than throughways.
I’ve been fortunate to have spent a couple of years working on far-out research projects involving squeezed light, and a couple of years working as an electronics engineer for an audio electronics company, so I’ve had a chance to see both sides of this discussion. Both jobs were challenging, both are required for human progress, and both need the other to survive!
If we could only understand this, maybe we could better appreciate what both engineers and researchers bring to the development of human civilization.
-JV
Sir, not only are you an excellent and most entertaining science fiction novelist, you make a brilliant point that most people miss. Applause for Mr. Verne.
I like to see wild-eyed, speculative articles like this every once in a while on some exciting new possibility, simply because it’s fun to stretch the imagination a bit. And of course, speculative operating systems and speculative hardware are like peanut butter and jelly. A lot of the amazing research and experimental OS ideas out there are not “available” either. You can’t find the ideas used in them in current Windows or MacOS or Linux or *BSD until at least another major release (sometimes 2 or more). Remember web services? The information superhighway? Weren’t we supposed to be doing this stuff 7 years ago? But the ideas remained, waiting for the researchers to get them right, or at least get them cheap, and now they’re almost here… maybe…
Thanks to Mr. Bach and Eugenia & Co. for a treat.
>> In regards to asynchronous logic (clockless) computing: Many companies have produced clockless CPUs, but the problem is not in the technology. The problem was selling them to people. People…
I don’t think this is true at all, & I am quite familiar with Furber’s work on the ARM & Amulet & his reasoning. If a CPU could be built entirely without clocks & would perform better, I wouldn’t have a problem buying one as long as it worked. The technology is still the problem. Clocked design is just the lesser of 2 evils.
Almost all complex chips today are synthesized so that all the longest paths are about the same from clock to clock. Now, a really simple student CPU design with a ripple-adder delay as its main longest path could benefit from self-timing, since the cycle time could vary a great deal & some ops could speed up during simple cycles, but few chips are so naively simple.
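To spell that out with a toy example (nothing like a real adder circuit, just the arithmetic of the carry chain): in a ripple adder, the time to settle tracks the longest run of propagating carries, which depends on the operands, whereas a clocked design must always budget for the worst case.

```python
# Toy ripple-carry adder: returns the sum plus the longest carry chain,
# a stand-in for how long a self-timed add would actually take to settle.
def ripple_add(a: int, b: int, bits: int = 32) -> tuple[int, int]:
    carry, result = 0, 0
    chain = longest = 0
    for i in range(bits):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry
        new_carry = (ai & bi) | (carry & (ai ^ bi))
        chain = chain + 1 if new_carry else 0   # consecutive stages with a live carry
        longest = max(longest, chain)
        result |= s << i
        carry = new_carry
    return result, longest

print(ripple_add(1, 1))            # tiny carry chain -> a self-timed add could finish early
print(ripple_add(0xFFFFFFFF, 1))   # worst case -> the carry ripples through all 32 bits
```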
And with the movement to FPGAs, designers will have even less reason to go clockless, since clockless design can only be executed through transistor-level circuit design in an ASIC flow.
Also, DSP is becoming more & more pervasive as the bulk of the silicon in telecom & media devices etc., yet DSP is fundamentally based on precise sampling (clocking) & rigid computation sequences.
>> The P4 gains much of its speed not from its high clock rate, but from the fact that many of its internal parts run asynchronously without a clock:
I very much doubt that; care to give me specific URLs from Intel that describe this? I have never seen any Intel references to this, & I worked for Intel “indirectly”. As far as I know, they really do push the circuits by double-pumping the ALU, and it is all fully synchronous to the main clock. Now, some remote blocks could be designed in a more loosely coupled fashion, resyncing on completion or using elastic FIFOs. There is a lot to be said for writing into a spec that such-and-such operation at x MHz will complete in precisely n clocks in a precisely predetermined state, not when it feels like it & subject literally to the weather. Ultimately this is why EEs can’t buy into clockless design: it can’t be specified very well.
Research vs Engineering
Well, I do support all the various research projects, even the wacky stuff; I just wouldn’t want to see regular people’s hopes raised by over-hyping. QDs can be introduced to the public when they have more practical results to show, say a room-temp functioning RISC core. IBM was able to show serious CPU project results with Josephson Junctions before killing that off.
I will always remember the cold fusion circus. The media has a way of running out of control on some of these amazing stories, and I wouldn’t want to see that happen again, as it can tarnish what it hypes.