Reporting from the VLSI Symposium

This very short summary looks at the "Future of Computing" from a more fundamental level: what research is being done, and by whom, in the area of Very Large Scale Integration (VLSI). In particular, what is on the horizon, at least at the university research level, and why we must eventually change direction.

For those who are not in the loop, there are conferences and symposia in electrical engineering and computer science practically year round. There are so many subgroups of these disciplines that the ACM maintains an entire collection of "special interest groups." This year we were lucky to have the IEEE Computer Society's annual symposium on VLSI, "Emerging Trends in VLSI Systems Design" [1], in town, and I attended. Here I highlight a few of the presentations and share my thoughts on what researchers think is to come in semiconductors and layout.


First of all, I will not be going over the gory details, nor will I recap every paper and presentation, as I am sure most of them would bore the average computer user (and even most uber-geeks) to no end. I will, however, present some of the ideas proposed, note recurring themes from the symposium, and discuss the keynote speaker's presentation.


The keynote speaker for the symposium was Dr. Kamran Eshraghian (Director and Distinguished Professor, Electron Science Research Institute, Edith Cowan University, Australia). For anyone who has taken a VLSI or CMOS course, the name is familiar; some call him the grandfather of CMOS, as he co-wrote the book on it [2]. His presentation was entitled "Towards Integrated Intelligent MicroPhotonic Systems." Before we get to my comments, I will say that I have seen many professors give presentations, and his was the most striking. He was genuinely enthusiastic (without being overly so), which lent a sense of realism and excitement to his topic. He also sprinkled his talk with humor (not of the super-geeky variety, either), which is something many professors could learn from. He held our attention for the duration of the speech, which is quite a feat in a room full of engineers with laptops and food.


Dr. Eshraghian's introduction was so thought provoking that it was hard to concentrate on what he said for a few minutes afterward. He began by recalling the start of his journey in VLSI and CMOS technology, the writing of his book, and his research. He told of the mistaken projections made in the early 1980s, specifically the idea that MOS transistors (the building block of every CPU today) would hit a brick wall at 140nm (a 0.14 micron process); some went so far as to say we would never be able to reach that size. As the reader is probably aware, Intel, AMD and most others are working at the 130nm node now, and are ironing out the technical details of getting good yields and retooling for the 90nm node. For those who do not know what this means, see the note "Length and Width: confusion" at the end of this article. He then told the audience that he has yet another prediction: VLSI on the semiconductor is dead. He predicts that in or around 2040, MOS technology will reach the end of transistor shrinking. Being a physics major, I have to agree: there will come a point at which we can go no smaller. Consider the single-electron transistor; there is no way to make a conventional electrical transistor smaller than that. This is a hard reality, and a harder one still for the PhD student studying VLSI. Of course, Dr. Eshraghian admits his previous "great prediction" was wrong, but it is inevitable that we will reach the physical limit of making transistors smaller, and the major companies are shrinking them just as fast as they can. And they have a LOT of resources to do so.
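Out of curiosity, I ran my own back-of-envelope numbers on that 2040 date. This little sketch is mine, not Dr. Eshraghian's, and the inputs are assumptions: a classic ~0.7x shrink in feature size per process node, roughly 2.5 years per node starting from the 130nm generation around 2003, and "the end" defined as the gate length dropping below the silicon lattice constant (~0.543nm), where there are no longer whole atoms left to remove.

```c
#include <stdio.h>

/* Back-of-envelope scaling estimate (my assumptions, not from the talk):
 * feature size shrinks by ~0.7x per process node, one node every ~2.5
 * years, starting from the 130nm generation circa 2003. Stop when the
 * gate length falls below the silicon lattice constant (~0.543nm). */
int main(void)
{
    double feature_nm = 130.0;          /* 130nm node, ca. 2003 */
    double year = 2003.0;
    const double shrink = 0.7;          /* per-node scaling factor */
    const double years_per_node = 2.5;
    const double lattice_nm = 0.543;    /* silicon lattice constant */

    while (feature_nm > lattice_nm) {
        feature_nm *= shrink;
        year += years_per_node;
        printf("~%.0f: %6.2f nm\n", year, feature_nm);
    }
    printf("Atomic limit reached around %.0f\n", year);
    return 0;
}
```

With those (admittedly hand-waved) inputs, the wall lands around 2043, which is eerily close to his date.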


He then moved on to what he is working on now, which is the combination of nanoscale photonics and electronics. In the past, people have tried to get full switching and combinational logic from light alone, and in many (if not all) cases this presents serious problems; most past devices have been mechanically or electrically controlled reflectors of one sort or another. His research is more about electronics that interface with nanochemistry and are either controlled by light or control light. Of course, at this scale one can dream up many future uses for such technology (molecular study, cancer-fighting agents, smart dust, etc.). All of these will, of course, be tied into a large network. Walls, and in fact entire buildings, will be "online." Wearable sensors could report vital signs or change the way a pacemaker works on the fly. Energy management could become smart thanks to a huge array of sensors spread throughout the entire energy network (made possible by their small size and networking ability). How, you ask, does all of this come together? For the most part, the details were hidden behind a PowerPoint presentation, which of course shields us from the gory details [3], but in a nutshell: interfacing light with other materials can be used to build precision measuring equipment, fine process tooling, and diagnostic equipment. If photonics, microelectronics and chemistry are stirred in the pot, the resulting research can produce devices that help mankind in all areas of life (does that sound like marketing?).

Many of the remaining speakers that morning also presented papers on technologies that were not fully, or even remotely, MOS based. A paper on nanotubes used as large PLAs (programmable logic arrays) was most interesting, though it still has a few "bugs" to work out. A professor from Georgia Tech presented a paper on a nanoscale computational building block that also looked very promising as a replacement for standard MOS. His paper touched on a recurring theme I have seen around the net and in recent papers: the automatic generation of cells by chemical or biological growth. Basically, a standard cell is grown in the lab and then repeated, or made to grow, into an array. The cell itself performs a basic function, and when put into an array with many others, the arrangement of them provides larger logic functions. After all, this is how the basic building block of any large-scale circuit works: we build a basic block with simple functionality and get greater functionality through a combination of them (a toy example follows below). Then a mere optimization of the basic block yields speed increases, power reduction, or whatever your target is for the entire circuit (given, of course, that the function was optimized from the start).
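As a toy illustration of that "simple block, repeated" idea (my own sketch, not from the paper): take a two-input NAND as the one basic cell. NAND is functionally complete, so wiring copies of it together yields every other logic function; here is the textbook four-NAND construction of XOR.

```c
#include <stdio.h>

/* The one basic cell: a two-input NAND. NAND is functionally
 * complete, so every other gate can be built by composing it. */
static int nand(int a, int b) { return !(a && b); }

/* XOR built purely from four copies of the basic cell --
 * the classic four-NAND construction. */
static int xor_from_nands(int a, int b)
{
    int m = nand(a, b);
    return nand(nand(a, m), nand(b, m));
}

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d XOR %d = %d\n", a, b, xor_from_nands(a, b));
    return 0;
}
```

Optimize the one cell (for speed, power, or area) and everything built from it improves, which is exactly the leverage these grown-cell approaches are after.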


To summarize, most of the presentations were on "the usual," if I can even say that: power optimization, layout, routing, area and cost, pipelining, and so on. All of them were solid work in their respective fields, but most people would be bored unless they are into each topic specifically. If you would like a copy of the papers presented, see [1].


My thoughts on the future of design: of course we will reach an end to shrinking transistors, but this is not the end for the designer. It has been noted that general-purpose CPUs account for only 2%-10% of total CPU sales; the remaining 90-98% go into embedded systems [4][5]. The future of computing has been theorized about and will continue to be, but the reality right now is that the majority of devices use small CPUs that are not as powerful as their desktop brothers. This trend is also making its way into personal computing (wearables, PDAs with more and more functionality, smaller and smaller laptops, etc.). The designer of tomorrow still has choices: go into emerging technologies such as nano, optical and biological computing (or a meld of any of them), or delve into getting more out of less with embedded systems and SoCs (Systems on a Chip). Also, the separation between software and hardware engineers is shrinking as we move into the embedded space; hardware is useless without software and vice-versa. But in the embedded arena, memory and CPU time are as precious as they were many years ago. These devices are less powerful and have less memory (consider the extreme of smart dust research, or the more popular wearable computer). Programmers will have to go back to optimizing everything. It has been noted that today's personal computers have power to spare, so programmers can afford to be a little lazier given the average hardware; I am not saying today's programmers are lazy, just that in the embedded space, as in days of old, they are forced to be optimal in their ways.
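To make the "optimize everything" point concrete, here is a small hypothetical C sketch of the kind of trick embedded programmers live by: packing eight boolean sensor flags into a single byte rather than spending a whole variable on each, which matters when your microcontroller has only a few hundred bytes of RAM. (The sensor scenario and names are made up for illustration.)

```c
#include <stdio.h>
#include <stdint.h>

/* On a desktop nobody would bother; on a tiny microcontroller,
 * packing eight flags into one byte instead of eight separate
 * variables can be the difference between fitting in RAM and not. */
#define FLAG_SET(reg, bit)   ((reg) |=  (uint8_t)(1u << (bit)))
#define FLAG_CLEAR(reg, bit) ((reg) &= (uint8_t)~(1u << (bit)))
#define FLAG_TEST(reg, bit)  (((reg) >> (bit)) & 1u)

int main(void)
{
    uint8_t sensor_flags = 0;     /* eight sensors in one byte */

    FLAG_SET(sensor_flags, 3);    /* sensor 3 triggered */
    FLAG_SET(sensor_flags, 6);    /* sensor 6 triggered */
    FLAG_CLEAR(sensor_flags, 3);  /* sensor 3 serviced */

    for (int i = 0; i < 8; i++)
        printf("sensor %d: %s\n", i,
               FLAG_TEST(sensor_flags, i) ? "on" : "off");
    return 0;
}
```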


Length and Width: confusion


I have seen many news sites and others try to explain the number that CPU manufacturers quote when they talk about a process technology. Some are correct, and some are just confusing. I must admit that, having studied physics and taken solid state physics, I get confused myself, because what the engineers call the width, we physicists call the length, and vice-versa. So here it is: when a manufacturer says they are using a 90 nanometer process (equivalently, a 0.09 micrometer process), they mean that the channel length of the transistor is 90nm. Or, as it is commonly put, "the smallest device I can create on the substrate is 90 nanometers." This is because, to create a transistor on a substrate, we implant "P" or "N" wells and then pass a polysilicon line over the channel between them (creating the gate of the transistor). The gate dictates the minimum size of the transistor (or, conversely, the minimum distance between the two ends of a transistor, which is where the gate sits). So, fundamentally, the minimum feature size is the gate length of the transistor, and that is the number they are referring to. It is not the size of the entire transistor, although one could argue that the gate is where all the action is.
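For the curious, the reason both W and L matter shows up in the standard first-order ("square-law") model of MOSFET drain current in saturation (this is textbook material, not something from the symposium):

```latex
I_D = \frac{1}{2}\,\mu_n C_{ox}\,\frac{W}{L}\,\left(V_{GS}-V_T\right)^2
```

Here \mu_n is the carrier mobility, C_{ox} the gate oxide capacitance per unit area, V_{GS} the gate-source voltage, and V_T the threshold voltage. Shrink L and the W/L factor grows, giving more drive current and faster switching for the same width, which is part of why the gate length is the headline number.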

[Figure: oversimplified hand-drawn cross section of a MOS transistor, with the width labeled W and the length labeled L, two N wells, the gate oxide under the gate, and the substrate.]
The oversimplified handmade diagram above is a cross-section shot of a transistor with the width labeled W and the length labeled L. It also shows the two N wells, the gate oxide under the gate, and the substrate. To put that huge figure into perspective (if the numbers I have read are correct), the AMD Opteron contains 110 million such transistors (although some are P-type and some are N-type), and the latest offering in the ATI Radeon line is reported to contain 150 million.


References:


[1] Catalog
[2] Neil H. E. Weste and Kamran Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, 2nd edition, 1995.
[3] Story
[4] http://www.cs.virginia.edu/~pnn7f/vest/
[5] http://www.nd.edu/~codes/index.html
