The State of Supercomputers
By David Adams | 2003-08-21 | Hardware | 11 Comments

The supercomputer industry is alive and thriving, as researchers design simulations and other processor-intensive tasks that drive demand. A recent ZDNet article covers some of the latest advances in supercomputing applications.

Comments

2003-08-21 4:53 pm – Anonymous
A non-article not worth the read. There is no discussion at any technical level and only the most superficial coverage of what supercomputers are used for. For example, there is no mention of the burgeoning field of bioinformatics, a rapidly growing market for supercomputers, nor of the oil industry, another major customer for these machines. Above all, for readers of this site, there is no discussion of operating systems: no mention of the emergence of Linux as a major operating system in supercomputing, and no mention that the forthcoming Red Storm supercomputer for Sandia National Laboratories, discussed in the article, will use Linux as its operating system.

2003-08-21 5:14 pm – Anonymous
So fifty years from now we will all be running supercomputers as desktop machines, and we'll still be b*tching about how KDE-30.X.P is 2 nanoseconds slower than GNOME-29.U.X when simulating individual weather patterns in every individual city in the world, which takes approximately 5 nanoseconds to compute. Hmm... interesting. /me continues to dream.

2003-08-21 6:25 pm – Anonymous
KDE is 2 nanoseconds faster for a process which takes 5 nanoseconds on GNOME?? That's a 66% improvement...
That's something I'd like to brag about...

2003-08-21 6:59 pm – Anonymous
"We will all be running supercomputers as desktop machines and we'll still be b*tching about how KDE-30.X.P is 2 nanoseconds slower than GNOME-29.U.X when simulating individual weather..."
Looks like it...

2003-08-21 9:10 pm – Anonymous
Correction: the interactive and I/O nodes will run Linux; the compute nodes run the Puma OS, which is a far more interesting topic for this site...

2003-08-21 10:12 pm – Anonymous
Unlike this thread...

2003-08-21 10:35 pm – Anonymous
Feeling a "leeetle" jaded this morning? 😉 Your lowly computer all weighed down with GUI? 1975's supercomputer (show of hands: how many of you were born at this point?) is dusted by what you are probably looking at, right where you sit. Try a little experiment: WIPE your drive, install a stripped-down version of UNIX®, DOS©, or Be™, compute SETI@home or something, and observe. Look! You HAVE a super'puter. NOW jump up and down screaming this over and over. Observe.

2003-08-22 2:14 am – Anonymous
"Correction: the interactive and I/O nodes will run Linux; the compute nodes run the Puma OS, which is a far more interesting topic for this site..."
Exactly. I didn't know this, and it was not mentioned in the article. If they had found this out, it would be the sort of journalistic digging that should have been done to make an article worth reading. I haven't managed to find anything very concrete on this with a quick Google search, but it is clear that they intend to use a lightweight kernel (LWK) for the compute nodes, and given the influence of ASCI Red and the connection of the Sandia lab with the Puma OS, this seems likely. Have you got a good URL on this?

2003-08-22 5:01 am – Anonymous
So the article didn't say anything, so I will. Some other tidbits for you. The average PC uses one hot x86, maybe running at 0.5-2.5 GHz and putting out up to 70 W of heat. Not a very efficient design for a computer that needs 100K nodes (a power station would be needed). It uses DRAM with true random-access times of 60-100 ns even if the CPU cycle runs well below 1 ns. Memory (and heat) engineering is the biggest challenge for big-iron CPUs, and caches are woefully small at well under 1 MByte. Disk drives are a joke except for backing up huge multi-gigabyte DRAM stores. A company in the UK called PicoChip has a CPU chip with 430 individual 16-bit integer-only RISC cores, each with a few KB of DRAM, all on one chip running at about 100 MHz. No FPU, though, and it's meant for wireless baseband stations. Micron and Infineon have RLDRAMs that will fully cycle in 20 ns true random access, with 8 banks multiplexed on 400 MHz time-shared separate input and output buses; they're meant for the Ciscos of the world but great for high-speed DRAM L3 caches and the like. They come in 256 Mb parts for under $30; trouble is, no motherboard is ever going to see them. It makes the Rambus and QDR stuff look silly. IBM uses 5 ns DRAM internally for the Level 3 cache in the Power series; no wonder those G5s smoke. Another new company (EET) has reinvented the original old 3-transistor DRAM that Intel first produced 30 years ago. The big idea is that by going back to a 3T cell from the current 1T cell, the read and write paths are separate again and random access is back down to 1.3-1.8 ns. The 1T cell is great for huge, slow, cheap DRAMs but useless for speed.
The 3T DRAM is a simplification of the 6T SRAM cell normally used for cache, which has always been too expensive to use for main memory except in Cray CPUs. So memories can be fast, but not if you think in terms of popping Opterons with DDR DRAMs onto familiar-looking motherboards in a warehouse. Another fellow (Jan Gray) has a RISC CPU that fits in a small piece of any cheap FPGA. He often cites that 1,000 of them can fit into the biggest FPGAs (about the same price as the fastest P4). Trouble is, no FPU and no memory bus or communication between the CPUs, so not very useful. Xilinx boasts that its 32-bit RISC core (comparable to an ARM, perhaps) runs at 130 MHz or so and uses about $1.50 worth of FPGA. Same problem: no FPU, and no way to deploy hundreds of them in one chip, at least not yet. So it's easy to get a vision of 100K CPUs in a very small space if you can get at least 16-64 per chip, and there mustn't be loads of support logic like we have on motherboards. Irony of ironies, it was all done 20+ years ago with Transputers, though only one could fit per chip back then. It had an FPU, networking links, process scheduling, and local memory, and it was glueless: you literally packed as many as possible onto a PCB. But Inmos couldn't get the next one done on time, and most of the goodness was lost. If a new Transputer were to come along, it wouldn't be so difficult to get 100 or so on each chip. If I had the Sun contract for $50M I could even afford to design a small production run of such a chip; otherwise I am stuck with FPGAs and no FPU. A CPU for a 1M-node machine is not as far off as you would think, but it won't look like an x86 or a SPARC or a PPC.

2003-08-22 10:29 pm – Anonymous
This is the kind of coverage one would expect from ZDNet. It looks like a report written by a peeping Tom: they just had a glance at something and made a quick story.
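[Editor's note: the memory-wall arithmetic in the long hardware comment above can be sketched in a few lines. The latency figures are the commenter's rough numbers (not datasheet values), and the 2 GHz clock is an assumed example; the point is simply how many cycles a core idles per random memory access.]

```python
# Back-of-the-envelope memory-wall arithmetic, using the commenter's
# rough latency numbers. CYCLE_NS is an assumed 2 GHz example core.

def stall_cycles(access_ns: float, cycle_ns: float) -> float:
    """CPU cycles spent waiting while one random memory access completes."""
    return access_ns / cycle_ns

CYCLE_NS = 0.5  # 2 GHz core -> 0.5 ns per cycle

memories = {
    "commodity 1T DRAM": 80.0,  # commenter: 60-100 ns true random access
    "RLDRAM":            20.0,  # commenter: full cycle in 20 ns
    "3T DRAM (EET)":      1.5,  # commenter: 1.3-1.8 ns claimed
}

for name, latency_ns in memories.items():
    cycles = stall_cycles(latency_ns, CYCLE_NS)
    print(f"{name}: ~{cycles:.0f} cycles lost per random access")
```

At these numbers a cache miss to commodity DRAM costs the core roughly 160 cycles, versus about 40 for RLDRAM and only a handful for the claimed 3T cell, which is the gap the comment is pointing at.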