Linked by Thom Holwerda on Mon 20th Jun 2011 23:34 UTC
Hardware, Embedded Systems
SGI hopes by 2018 to build supercomputers 500 times faster than the most powerful today, using specially designed accelerator chips made by Intel. SGI hopes to bring a massive performance boost to its supercomputers through highly parallel processors based on Intel's MIC (Many Integrated Core) architecture.
supe
by garyd on Tue 21st Jun 2011 00:40 UTC
garyd
Member since:
2008-10-22

Currently they have only four systems in the top fifty supercomputers -- and their highest is number seven: http://www.top500.org/list/2011/06/100. I'll be curious to see how well they compete with the other architectures. Interestingly, the new number one is based on SPARC64, but it runs Linux.

Reply Score: 2

RE: supe
by Kebabbert on Tue 21st Jun 2011 10:30 UTC in reply to "supe"
Kebabbert Member since:
2007-07-27

Yes, these HPC supercomputers mostly run Linux. Such a supercomputer is basically a cluster on a fast switch. It is similar to Google's network; Google has 10,000 PCs on a network. Just add a node and boot Linux, and you have increased the performance of the network. Linux scales well horizontally (on large networks with thousands of PCs). For instance, SGI has an Altix server with as many as 1024 cores; it is a bunch of blades in a rack with a fast switch.



The opposite is a single large SMP server, weighing 1,000 kg or so. They are built totally differently. For instance, IBM has released its new and mighty P795 Unix server, which has as many as 32 POWER7 CPUs. IBM's largest mainframe has as many as 24 z196 CPUs. Oracle has a Solaris server, the M9000, with as many as 64 CPUs. Some years ago, Sun sold a Solaris server with as many as 144 CPUs. This is vertical scaling; it is not a network with a bunch of blade PCs.

Linux scales badly vertically. Linux has problems going beyond 32 cores, and 48 cores are not handled well. This can be seen, for instance, in SAP benchmarks on 48 cores, where Linux ran on faster AMD CPUs and faster DRAM than the Solaris server. Linux achieved only 87% CPU utilization whereas Solaris achieved 99%, and that is why Solaris was faster even though it used slower hardware. To scale well vertically, you need a mature enterprise Unix; it takes decades of experience and tuning, and Linux cannot do it yet. Until recently, Linux still had the Big Kernel Lock!
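
As a rough illustration of the utilization argument, here is a back-of-envelope sketch. Only the 87% and 99% utilization figures come from the benchmark claim above; the assumed 10% per-core speed advantage for the AMD box is purely hypothetical.

# Back-of-envelope sketch: effective throughput ~ core count x per-core speed x utilization.
# The 87% / 99% utilization figures are from the SAP benchmark claim above; the
# assumption that the Linux box has cores ~10% faster is hypothetical.

CORES = 48

linux_core_speed = 1.10       # hypothetical: 10% faster cores
linux_utilization = 0.87      # from the claim above

solaris_core_speed = 1.00     # baseline
solaris_utilization = 0.99    # from the claim above

linux_throughput = CORES * linux_core_speed * linux_utilization        # ~45.9
solaris_throughput = CORES * solaris_core_speed * solaris_utilization  # ~47.5

print(f"Linux   (relative throughput): {linux_throughput:.1f}")
print(f"Solaris (relative throughput): {solaris_throughput:.1f}")
# With these assumptions, the better-utilized but slower machine still comes out ahead.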

Ted Ts'o, the ext4 creator, recently explained that until now 32 cores was considered exotic and expensive hardware by Linux developers, but that this is changing, which is why he is now working on scaling up to as many as 32 cores. Solaris/AIX/HP-UX/etc. kernel devs, on the other hand, have had access to large servers with many CPUs for decades. Linux devs have only recently got access to 32 cores -- not 32 CPUs, but 32 cores. After a decade(?), Linux might handle 32 CPUs too.

Reply Score: 3

RE[2]: supe
by JAlexoid on Tue 21st Jun 2011 11:09 UTC in reply to "RE: supe"
JAlexoid Member since:
2009-05-19

Strange... Linux on Z works perfectly well on 60 CPUs assigned to it. And with high utilisation...

Edited 2011-06-21 11:20 UTC

Reply Score: 2

RE[2]: supe
by Kivada on Tue 21st Jun 2011 15:13 UTC in reply to "RE: supe"
Kivada Member since:
2010-07-07

n00b question then: what does that mean for AMD's next lineup of server CPUs? They're going to be 16 cores per socket, commonly with up to 4 sockets per board.

So Linux won't be able to make full use of a single box anymore then?

Reply Score: 1

Compute is free, data movement is expensive
by theosib on Tue 21st Jun 2011 04:45 UTC
theosib
Member since:
2006-03-02

A famous computer scientist not long ago pointed out something interesting about supercomputers: we're already at the point where the majority of the energy used in parallel computing goes into communication between processors, and the relative proportion of compute-related energy is declining rapidly.

Reply Score: 2

Neolander Member since:
2010-03-08

Maybe Intel's idea of using optical interconnects all the way down to the inside of CPUs could help, then ;) Also, if communication becomes a critical concern, those distributed designs could be put to work as well.

Myself, I've always wondered why motherboard buses still use copper wires. AFAIK, in high-performance applications, CPU buses are already a major bottleneck. Isn't integrated photonics ready for the job?

Edited 2011-06-21 05:50 UTC

Reply Score: 1

flanque Member since:
2005-12-15

At some point it has to be converted back to electrical current, and I thought therein lie the additional expenses of termination and/or conversion components.

Reply Score: 2

Neolander Member since:
2010-03-08

Well, it'll certainly eat more power due to the energy conversions each time a laser or photodetector is involved, but in HPC the goal is raw performance and not so much performance/watt, right?

I mean, I understand that there could be laser/photodetector size problems for Intel's on-die photonics idea, but motherboard buses aren't much miniaturized, or are they?

Edited 2011-06-21 12:06 UTC

Reply Score: 1

Aragorn992 Member since:
2007-05-27

A famous computer scientist not long ago pointed out something interesting about supercomputers: we're already at the point where the majority of the energy used in parallel computing goes into communication between processors, and the relative proportion of compute-related energy is declining rapidly.


Yes, the key problem with supercomputing for a long time now has been NUMA: http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access

Unfortunately, supercomputing is, and has been, a case of diminishing returns (what isn't, I guess?)... more processors simply remain starved due to NUMA (the simplest/worst approach being that every processor waits, every time, for the slowest possible memory access).
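
A toy sketch of why remote accesses hurt; all latencies and ratios below are hypothetical and only illustrate the shape of the effect, not any real machine.

# Toy model of NUMA starvation: average memory access time grows with the
# fraction of accesses that must go to a remote node. All numbers are
# hypothetical.

LOCAL_LATENCY_NS = 80      # hypothetical local-node access latency
REMOTE_LATENCY_NS = 240    # hypothetical remote-node access latency (3x worse)

def average_latency(remote_fraction: float) -> float:
    """Average memory access time for a given share of remote accesses."""
    return (1 - remote_fraction) * LOCAL_LATENCY_NS + remote_fraction * REMOTE_LATENCY_NS

for remote_fraction in (0.0, 0.1, 0.25, 0.5):
    slowdown = average_latency(remote_fraction) / LOCAL_LATENCY_NS
    print(f"{remote_fraction:>4.0%} remote accesses -> {slowdown:.2f}x slower memory on average")
# As more of the working set lives on other nodes, each extra processor buys
# less and less -- the diminishing returns mentioned above.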

It's hoped that, with multicore chips having hit the desktop in recent years, much more money and research will go into ways to mitigate this.

Reply Score: 1

Neolander Member since:
2010-03-08

I wouldn't be as optimistic. Why should the emergence of multicore chips in desktops lead to more research funding in the area? Lots of common desktop tasks already worked well on the Pentium 3, and the only high-performance desktop task I can think of that isn't a niche is gaming.

If anything, I'd say that the PS3 and the Xbox 360 could have done more for the improvement of HPC than multicore desktops. I mean... lots of CPU cores struggling to access the same tiny RAM chip, outdated GPUs, and hardware that must be pushed to its limits before it's replaced... Now you have room for research funding ;)

Edited 2011-06-21 06:38 UTC

Reply Score: 1

ncc4100 Member since:
2006-05-10

The NUMA problems you are referring to assume that you only want to run a single application on the whole system. NUMA issues can be greatly reduced when you use something like cpusets to confine an application to a set of CPUs/cores and memory. With a larger number of CPUs/cores and cpusets, you can run a fair number of applications (depending upon the required number of CPUs/cores) efficiently and with predictable completion times. Consequently, NUMA isn't a huge stumbling block. It all depends upon how you use the system.
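
On Linux, full cpusets (CPU plus memory-node confinement) are normally configured through the cpuset cgroup. As a minimal sketch of just the CPU-confinement half, Python's os.sched_setaffinity can pin the current process to a subset of cores; the core numbers used here are arbitrary examples.

import os

# Minimal sketch: confine this process to a subset of cores. The core IDs
# below are arbitrary examples; full cpusets (which also bind memory nodes)
# are configured via the cpuset cgroup rather than this call.

pid = 0                        # 0 means "the calling process"
allowed_cores = {0, 1, 2, 3}   # hypothetical: first four cores of one NUMA node

os.sched_setaffinity(pid, allowed_cores)   # Linux-only call
print("Now restricted to cores:", sorted(os.sched_getaffinity(pid)))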

Reply Score: 1

Once again
by vodoomoth on Tue 21st Jun 2011 10:46 UTC
vodoomoth
Member since:
2010-03-30

Each time I see one of these articles about higher performance, more speed, more whatnot, my first thoughts are "where will it stop?" and "is it needed?"

Reply Score: 3

RE: Once again
by wanker90210 on Tue 21st Jun 2011 12:12 UTC in reply to "Once again"
wanker90210 Member since:
2007-10-26

It will stop when we have photorealistic 3D and sexbots with convincing AI. I think that's what Kurzweil means by "the singularity" - everyone will be single and connected to some virtual reality world as much as possible.

Reply Score: 1

RE[2]: Once again
by Neolander on Tue 21st Jun 2011 13:25 UTC in reply to "RE: Once again"
Neolander Member since:
2010-03-08

Photorealism? Nah, no point in spending hours of human work and computing power on a 3D mesh if it's not better than the original photo ;)

Convincing sexbots, on the other hand, could be used in an interesting way... http://xkcd.com/632/

Reply Score: 1

The Need for Speed
by frajo on Wed 22nd Jun 2011 09:57 UTC in reply to "Once again"
frajo Member since:
2007-06-29

Each time I see one of these articles about higher performance, more speed, more whatnot, my first thoughts are "where will it stop?" and "is it needed?"

Just ask scientists working in meteorology (weather forecast), or astrophysics (modelling supernovae, string landscapes), or particle physics (LHC collisions), or cancer research, or neurology (brain modelling).

Reply Score: 1

RE: The Need for Speed
by Neolander on Wed 22nd Jun 2011 11:08 UTC in reply to "The Need for Speed"
Neolander Member since:
2010-03-08

Or numerical atomic bomb simulations ;)

Reply Score: 1

Five hundred times ...
by ameasures on Tue 21st Jun 2011 15:58 UTC
ameasures
Member since:
2006-01-09

Just noting that 2018 is about seven years away and Moore's Law suggests that CPU power doubles every 18 months.

So by 2018, other competitors' progress will have taken them some of the way ... there may still be something like a 20- to 30-fold advantage if all goes to plan.

Just trying to get things in proportion, that's all.
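
For what it's worth, the back-of-envelope arithmetic looks like this, taking the 500x target from the article and the 18-month doubling assumption from the comment above:

# Back-of-envelope check of the argument above. Assumptions: SGI's target is a
# 500x speedup by 2018 (from the article), rivals' performance doubles every
# 18 months (the Moore's Law figure quoted above), and roughly 7 years elapse
# between 2011 and 2018.

target_speedup = 500
years = 2018 - 2011                 # ~7 years
doublings = years / 1.5             # one doubling every 18 months
rival_gain = 2 ** doublings         # ~25x for everyone riding the usual curve

remaining_advantage = target_speedup / rival_gain
print(f"Rivals' expected gain by 2018: ~{rival_gain:.0f}x")
print(f"SGI's remaining advantage:     ~{remaining_advantage:.0f}x")
# With 6 years instead of 7, the same arithmetic gives roughly a 30x advantage,
# which is where the figure in the comment above comes from.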

Reply Score: 2