Linked by Thom Holwerda on Mon 20th Jun 2011 23:34 UTC
Hardware, Embedded Systems: SGI hopes by 2018 to build supercomputers 500 times faster than the most powerful today, using specially designed accelerator chips made by Intel. SGI aims to bring this massive performance boost to its supercomputers through highly parallel processors based on Intel's MIC (many integrated cores) architecture.
Aragorn992 Member since:
2007-05-27

Some famous computer scientist not long ago pointed out something interesting about supercomputers. We're already at the point where the majority of the energy used in parallel computing goes into communication between processors, and the relative proportion of compute-related energy is declining rapidly.


Yes, the key problem with supercomputing for a long time now has been NUMA: http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access

Unfortunately, supercomputing is, and has long been, a case of diminishing returns (what isn't, I guess?): adding more processors simply leaves them starved for data because of NUMA (the simplest, and worst, approach being that every processor waits, every time, for the slowest possible memory access).
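For illustration, here is a minimal sketch (assuming a Linux box with libnuma installed, compiled with -lnuma) that prints the NUMA "distance" matrix, i.e. the relative cost for each node to reach each other node's memory, where 10 means local and larger values mean slower remote accesses, the ones that leave cores starved while they wait:

/* Sketch: dump the NUMA topology and relative access-cost distances. */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this system\n");
        return 1;
    }

    int nodes = numa_num_configured_nodes();
    printf("%d NUMA node(s)\n", nodes);

    /* Distance from node 'from' to node 'to'; 10 = local memory. */
    for (int from = 0; from < nodes; from++) {
        for (int to = 0; to < nodes; to++)
            printf("%4d", numa_distance(from, to));
        printf("\n");
    }
    return 0;
}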

It's hoped that, with multicore chips having hit desktops in recent years, much more money and research will go into ways of mitigating this.

Reply Parent Score: 1

Neolander Member since:
2010-03-08

I wouldn't be as optimistic. Why should the emergence of multicore chips on desktops lead to more research funding in this area? Lots of common desktop tasks already ran well on a Pentium 3, and the only high-performance desktop task I can think of that isn't a niche is gaming.

If anything, I'd say that the PS3 and the Xbox 360 could have done more for the improvement of HPC than multicore desktops. I mean... lots of CPU cores struggling to access the same tiny RAM chip, outdated GPUs, and hardware that must be pushed to its limits before it's replaced... Now there's room for research funding ;)

Edited 2011-06-21 06:38 UTC

Reply Parent Score: 1

ncc4100 Member since:
2006-05-10

The NUMA problems that you are referring to assume that you only want to run a single application across the whole system. NUMA issues can be greatly reduced when you use something like CPUSETS to confine an application to a set of cpus/cores and memory. With a larger number of cpus/cores and CPUSETS, you can run a fair number of applications (depending upon the required number of cpus/cores) efficiently and with predictable completion times. Consequently, NUMA isn't a huge stumbling block; it all depends upon how you use the system.
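As a rough illustration only (not the actual CPUSET mechanism, which is configured through the cpuset filesystem by an administrator or batch scheduler), a process can approximate the same confinement itself with sched_setaffinity(2) plus libnuma; the core and node numbers below are made up for the example:

/* Sketch: pin this process to cores 0-3 (assumed to sit on node 0) and
 * allocate its working buffer on node 0, so every access stays local.
 * Link with -lnuma. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported\n");
        return 1;
    }

    /* Restrict the process to cores 0-3. */
    cpu_set_t cpus;
    CPU_ZERO(&cpus);
    for (int c = 0; c < 4; c++)
        CPU_SET(c, &cpus);
    if (sched_setaffinity(0, sizeof(cpus), &cpus) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Allocate the application's buffer on node 0 so memory is local. */
    size_t len = 64 * 1024 * 1024;
    double *buf = numa_alloc_onnode(len, 0);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* ... run the confined workload on buf ... */

    numa_free(buf, len);
    return 0;
}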

Reply Parent Score: 1