Linked by Thom Holwerda on Mon 20th Jun 2011 23:34 UTC
Hardware, Embedded Systems
SGI hopes by 2018 to build supercomputers 500 times faster than the most powerful today, using specially designed accelerator chips made by Intel: highly parallel processors based on Intel's MIC (many integrated cores) architecture.
Thread beginning with comment 477896
supe
by garyd on Tue 21st Jun 2011 00:40 UTC
garyd
Member since:
2008-10-22

Currently they have only four systems in the top fifty supercomputers, and their highest entry is number seven: http://www.top500.org/list/2011/06/100. I'll be curious to see how well they compete with the other architectures. Interestingly, the new number one is based on SPARC64, but it runs Linux.

Reply Score: 2

RE: supe
by Kebabbert on Tue 21st Jun 2011 10:30 in reply to "supe"
Kebabbert Member since:
2007-07-27

Yes, these HPC supercomputers mostly run Linux. A supercomputer like this is basically a cluster on a fast switch. It is similar to Google's network; Google has some 10,000 PCs on a network. Just add a node, boot Linux on it, and you have increased the performance of the network. Linux scales well horizontally (on large networks with thousands of PCs). For instance, SGI has an Altix server with as many as 1024 cores; it is a bunch of blades in a rack with a fast switch.
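
To make the horizontal model concrete, here is a minimal MPI sketch in C (my own illustration, assuming an MPI implementation such as Open MPI is installed; this is not SGI's actual software). Each node boots its own Linux image, and work is split across ranks by message passing, so adding a node simply adds a rank:

/* Minimal MPI sketch of horizontal scaling. Each rank is a separate
 * process, typically on a separate node with its own kernel image.
 * Build and run: mpicc scale.c && mpirun -np 4 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* join the cluster-wide job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many nodes in total?  */
    printf("node %d of %d reporting\n", rank, size);
    MPI_Finalize();
    return 0;
}

Adding capacity is just "mpirun -np 5" on one more blade; no single kernel has to manage all the CPUs at once.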

The opposite is a single large SMP server, weighing 1,000 kg or so. These are built totally differently. For instance, IBM has released its new and mighty P795 Unix server, with as many as 32 POWER7 CPUs. IBM's largest mainframe, the z196, has as many as 24 mainframe CPUs. Oracle has a Solaris server, the M9000, with as many as 64 CPUs. Some years ago, Sun sold a Solaris server with as many as 144 CPUs. This is vertical scaling; it is not a network with a bunch of blade PCs.
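
The vertical/SMP model looks different in code too: one kernel image, one shared memory, threads scheduled across all sockets. A minimal OpenMP sketch in C (again my own illustration, assuming a compiler with OpenMP support such as GCC):

/* Minimal OpenMP sketch of vertical scaling: one process, one shared
 * address space, threads spread over many sockets by a single kernel.
 * Build and run: gcc -fopenmp smp.c && ./a.out */
#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel                    /* fork one thread per core */
    {
        printf("thread %d of %d on one shared-memory image\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

Here one kernel's scheduler and locking have to cope with every core at once, which is exactly where the scaling problems below show up.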

Linux scales badly vertically. Linux has problems going beyond 32 cores; 48 cores are not handled well. This can be seen in, for instance, SAP benchmarks on 48 cores, where the Linux system used faster AMD CPUs and faster DRAM than the Solaris server. Linux achieved only 87% CPU utilization whereas Solaris achieved 99%, and that is the reason Solaris was faster even though it ran on slower hardware. To scale well vertically, you need a mature enterprise Unix; Linux cannot do it, because it takes decades of experience and tuning. Until recently, Linux still had the Big Kernel Lock!
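
Why does a global lock hurt so much? A toy user-space illustration in C (my own sketch using pthreads, not kernel code): every thread funnels through one mutex, so beyond the serial bottleneck extra cores mostly wait, which is Amdahl's law in action and roughly what the Big Kernel Lock did inside the kernel:

/* Toy model of a "big lock": 32 threads serialize on one mutex, so
 * throughput barely improves past a few cores; the locked section is
 * a global serial bottleneck. Build: gcc -pthread bkl.c */
#include <pthread.h>
#include <stdio.h>

#define THREADS 32
#define ITERS   1000000L

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;                        /* shared state behind the lock */

static void *worker(void *arg) {
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&big_lock);      /* all 32 threads funnel here */
        counter++;
        pthread_mutex_unlock(&big_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);     /* 32 * 1000000 */
    return 0;
}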

Ted Ts'o, the principal ext4 developer, recently explained that until now, 32 cores was considered exotic and expensive hardware by Linux developers, but that is changing, which is why he is now working on scaling ext4 up to 32 cores. The Solaris/AIX/HP-UX/etc. kernel devs, by contrast, have had access to large servers with many CPUs for decades. Linux devs have only recently got access to 32 cores. Not 32 CPUs, but 32 cores. After a decade(?), Linux might handle 32 CPUs too.

Reply Parent Score: 3

RE[2]: supe
by JAlexoid on Tue 21st Jun 2011 11:09 in reply to "RE: supe"
JAlexoid Member since:
2009-05-19

Strange... Linux on Z works perfectly well on 60 CPUs assigned to it. And with high utilisation...

Edited 2011-06-21 11:20 UTC

Reply Parent Score: 2

RE[2]: supe
by Kivada on Tue 21st Jun 2011 15:13 in reply to "RE: supe"
Kivada Member since:
2010-07-07

n00b question then: so what's that mean even for AMD's next lineup of server CPUs? They're going to be 16 cores a socket, commonly up to 4 sockets per board.

So Linux won't be able to make full use of a single box anymore then?

Reply Parent Score: 1