Linked by Thom Holwerda on Thu 3rd Dec 2009 22:25 UTC
"Intel's experimental 48-core 'single-chip cloud computer' is the latest step in multicore processing. Intel officials say such a chip will greatly improve performance and efficiency in future data centers, and will lead to new applications and interfaces between users and computers. Intel plans to have 100 or more chips in the hands of IT companies and research institutes next year to foster research into new software. Intel, AMD and others are rapidly growing the number of cores on a single silicon chip, with both companies looking to put eight or more cores on a chip in 2010."
Thread beginning with comment 398136
One word: Stream
by deathshadow on Sat 5th Dec 2009 22:16 UTC

As someone who writes C code that compiles to CUDA, you'll excuse me if the idea of a mere 48 cores on one die isn't exactly blowing my skirt up... The 'crappy' little GeForce 8800 GTS driving my secondary displays has more cores than that, my primary GTX 260 has 216 stream cores, and that new ATI 5970 everyone's raving about has what, 1600 stream processors per die? Sure, they're "stream processor cores", but each is still basically a processor unto itself (albeit a very data-oriented one), as evidenced by how much can be done with them from CUDA.
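To give a sense of the "C code that compiles to CUDA" being described, here's a minimal sketch of the data-parallel model those stream cores execute: the same tiny kernel runs once per element, one thread per stream core at a time. The kernel name and launch parameters are illustrative, not from the original comment; building it requires nvcc and an nVidia GPU.

```cuda
// Illustrative CUDA kernel: y[i] = a * x[i] + y[i] across n elements.
// Each thread computes a single element; the hardware schedules
// thousands of these threads across the stream processor cores.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Compute this thread's global index from its block and thread IDs.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard against the last partial block
        y[i] = a * x[i] + y[i];
}

// Host-side launch (assuming d_x and d_y already copied to the device):
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```

Despite the special launch syntax, the kernel body is ordinary C with branches, loops, and pointer arithmetic, which is why each stream core can fairly be called a small processor in its own right.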

I really see this as Intel's response to losing share in the high-performance arena to things like CUDA, or more specifically nVidia's Tesla; much the same threat ATI's "Stream" would pose if it ever became anything more than stillborn (I don't know ANYONE actually writing code to support ATI Stream).

Hell, look at the C2050 Tesla: 448 thread processors on one die. GPGPUs are threatening Intel's relevance. As more and more computing is offloaded via technologies like CUDA, I increasingly suspect we'll see x86 or even conventional RISC CPUs relegated to little more than legacy support, glorified traffic cops, and I/O handlers, with parallel-processing GPGPUs handling the real grunt work.

We have historical precedent for this approach too. Look at the move from 8-bit to 16-bit with the Z80 and 68K: the Trash-80 Model 16 used a Z80 to handle the keyboard, floppy, and ports, as well as to run older Model II software, while the included 68K handled the grunt work of running actual userspace programs. The Lisa and original Mac were similarly divided, offloading I/O onto dedicated coprocessors, though not for legacy support... Or look a few years later in the game-console world, where the Sega Genesis paired a 68K with a Z80, the Z80 driving the two sound chips and providing legacy support back to the SMS/SG-1000, while new games used the 68K for their userland.

As such, Intel is late to the party on putting lots of simpler processors on one die, and time will tell whether that's too little, too late. If they work it out so the chip can be mounted alongside x86 and given a compatibility layer for CUDA code (something nVidia more than approves of other companies doing), they might make a showing.

If not, it's going to end up just as stillborn as ATI's "Stream".

Reply Score: 2