Linked by Thom Holwerda on Mon 20th Jun 2011 23:34 UTC
Hardware, Embedded Systems
SGI hopes by 2018 to build supercomputers 500 times faster than the most powerful today, using specially designed accelerator chips made by Intel. SGI aims to bring this massive performance boost to its supercomputers through highly parallel processors based on Intel's MIC (many integrated cores) architecture.
Thread beginning with comment 477952
RE: supe
by Kebabbert on Tue 21st Jun 2011 10:30 UTC in reply to "supe"

Yes, these HPC supercomputers mostly run Linux. These supercomputers are basically a cluster on a fast switch. It is similar to Google's network; Google has 10,000 PCs on a network. Just add a node and boot Linux, and you have increased the performance of the network. Linux scales well horizontally (on large networks with thousands of PCs). For instance, SGI has an Altix server with as many as 1024 cores; it is a bunch of blades in a rack with a fast switch.

The opposite is a single large SMP server, weighing 1,000 kg or so. These are built totally differently. For instance, IBM has released its new and mighty P795 Unix server with as many as 32 POWER7 CPUs. IBM's largest mainframe has as many as 24 z196 CPUs. Oracle has a Solaris server, the M9000, with as many as 64 CPUs. Some years ago, Sun sold a Solaris server with as many as 144 CPUs. This is vertical scaling; it is not a network of blade PCs.

Linux scales badly vertically. Linux has problems going beyond 32 cores, and 48 cores are not handled well. This can be seen in, for instance, SAP benchmarks on 48 cores, where the Linux machine used faster AMD CPUs and faster DRAM than the Solaris server. Linux achieved only 87% CPU utilization whereas Solaris achieved 99%, and that is why Solaris was faster even though it ran on slower hardware. To scale well vertically you need a mature enterprise Unix; Linux cannot do it yet, because it takes decades of experience and tuning. Until recently, Linux still had the Big Kernel Lock!

Ted Ts'o, the ext4 developer, recently explained that until now 32-core machines were considered exotic and expensive hardware by Linux developers, but that is changing, which is why he is now working on scaling up to as many as 32 cores. The Solaris/AIX/HP-UX kernel developers, by contrast, have had access to large servers with many CPUs for decades. Linux developers have only recently gotten access to 32 cores. Not 32 CPUs, but 32 cores. After a decade(?), Linux might handle 32 CPUs too.

Reply Parent Score: 3

RE[2]: supe
by JAlexoid on Tue 21st Jun 2011 11:09 in reply to "RE: supe"

Strange... Linux on z works perfectly well with 60 CPUs assigned to it. And with high utilisation...

Edited 2011-06-21 11:20 UTC

Reply Parent Score: 2

RE[2]: supe
by Kivada on Tue 21st Jun 2011 15:13 in reply to "RE: supe"

n00b question then: so what does that mean for AMD's next lineup of server CPUs? They're going to be 16 cores per socket, commonly up to 4 sockets per board.

So Linux won't be able to make full use of a single box anymore then?

Reply Parent Score: 1