Linked by Thom Holwerda on Tue 12th Jun 2018 23:10 UTC
Hardware, Embedded Systems

Last week, the US Department of Energy and IBM unveiled Summit, America's latest supercomputer, which is expected to reclaim the title of the world's most powerful computer from China, current holder of the mantle with its Sunway TaihuLight.

With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speed of TaihuLight, which peaks at 93 petaflops. Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and has more than 10 petabytes of memory, which has allowed researchers to run the world's first exascale scientific calculation.

The $200 million supercomputer is an IBM AC922 system comprising 4,608 compute servers, each containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 GPU accelerators. Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power compared to the 15 megawatts TaihuLight pulls in.
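Those headline numbers imply a sizeable gap in performance per watt between the two machines. A back-of-the-envelope check, using only the figures quoted above (illustrative Python, not anything from the article itself):

```python
# Peak performance and power draw as quoted above.
systems = {
    "Summit":     {"petaflops": 200, "megawatts": 13},
    "TaihuLight": {"petaflops": 93,  "megawatts": 15},
}

for name, s in systems.items():
    # The units cancel neatly: petaflops / megawatts
    # = 1e15 / 1e6 flops per watt = gigaflops per watt.
    gflops_per_watt = s["petaflops"] / s["megawatts"]
    print(f"{name}: {gflops_per_watt:.1f} gigaflops per watt")
```

By this crude measure Summit delivers roughly 15 gigaflops per watt against TaihuLight's roughly 6, i.e. it is about two and a half times as energy-efficient.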

There's something mesmerizing about supercomputers like these. I would love to just walk through this collection of machines.

Moore's Law
by kwan_e on Wed 13th Jun 2018 01:00 UTC

TianHe2, 2013, 34 petaflops
TaihuLight, 2016, 93 petaflops
Summit, 2018, 200 petaflops

The Law is holding, give or take, in the supercomputer space.
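The implied doubling time can be checked from those three data points (illustrative Python, using only the years and petaflops listed above):

```python
import math

# (year, peak petaflops) for the three #1 systems listed above.
points = [(2013, 34), (2016, 93), (2018, 200)]

for (y0, f0), (y1, f1) in zip(points, points[1:]):
    # The doubling time T satisfies (f1 / f0) = 2 ** ((y1 - y0) / T),
    # so T = (y1 - y0) * ln(2) / ln(f1 / f0).
    doubling_years = (y1 - y0) * math.log(2) / math.log(f1 / f0)
    print(f"{y0} -> {y1}: doubling roughly every {doubling_years:.1f} years")
```

Both intervals come out close to the classic two-year doubling period, which is the "give or take".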

Power Consumption
by Sauron on Wed 13th Jun 2018 02:39 UTC

13 megawatts of power!
WOW! It needs its own power station as a PSU.
Incredible computation power there though, makes you wonder what's coming next.

RE: Power Consumption
by kwan_e on Wed 13th Jun 2018 05:42 UTC in reply to "Power Consumption"

"makes you wonder what's coming next."


400 petaflops in 2020.

RE[2]: Power Consumption
by CodeMonkey on Wed 13th Jun 2018 14:59 UTC in reply to "RE: Power Consumption"

"makes you wonder what's coming next.

400 petaflops in 2020."

Achieve 1 exaflop (1,000 petaflops) in under 20 megawatts by 2021: that's the DoE Exascale Computing Program. The current roadmap has that being a machine called Aurora, developed by Intel and Cray at Argonne National Laboratory. Details are all locked under NDA for now, though.

It's a combined hardware/software effort, since the machines merely existing doesn't necessarily mean software can take advantage of them.

RE: Power Consumption
by Soulbender on Wed 13th Jun 2018 07:05 UTC in reply to "Power Consumption"

"makes you wonder what's coming next."


SkyNet.

RE: Power Consumption
by Kochise on Wed 13th Jun 2018 07:17 UTC in reply to "Power Consumption"

At 1.21 gigawatts, they'll jump back to Oct 26, 1985.

Correction
by Vanders on Wed 13th Jun 2018 11:44 UTC

"The world's fastest supercomputer, that you know about, is back in America"

'cos the Alphabet Agencies around the world have clusters that are certainly not listed in the Top 500.

RE: Correction
by CodeMonkey on Wed 13th Jun 2018 14:40 UTC in reply to "Correction"


'cos the Alphabet Agencies around the world have clusters that are certainly not listed in the Top 500.

They're very different animals. Alphabet-type cluster resources are essentially massive compute farms that process hundreds of thousands of smaller problems simultaneously and tend to be loosely coupled. Supercomputers, on the other hand, are designed as giant single-purpose (sort of) machines. So while Google's compute farm may be processing millions of tasks at once, each one consuming a few cores for a few seconds, systems like Summit run physics calculations in which a single "task" consumes all of the available CPU and GPU processing power for several days at a time.

RE[2]: Correction
by tylerdurden on Wed 13th Jun 2018 17:37 UTC in reply to "RE: Correction"

"alphabet agencies" does not mean what you think it does.

RE[3]: Correction
by Bill Shooter of Bul on Wed 13th Jun 2018 18:30 UTC in reply to "RE[2]: Correction"

All tremble at the power of the LOC supercomputer. Its archives contain a LOC of storage that has harnessed the most awesome power known in this universe: books!


Reading: It's FUNdamental.

Red-Hat Linux
by moondevil on Wed 13th Jun 2018 13:27 UTC

In case someone wants to know which OS it is actually running.

https://www.networkworld.com/article/3279961/linux/red-hat-reaches-t...

RE: Red-Hat Linux
by CodeMonkey on Wed 13th Jun 2018 14:49 UTC in reply to "Red-Hat Linux"

I actually work in this field and have been using the Summit development platform for the past two years while waiting for the new machine to come online, so I can speak to it a bit:

* RHEL 7 ppc64le on the Power9 CPUs
* Despite the "beefy" CPUs, essentially all of the compute capability comes from the GPUs
* It mostly uses Spack (https://github.com/spack/spack) for package management of the tools and libraries that users develop against
* For compilers, the preferred one is IBM XL, but GCC is also widely used. So is PGI, but less so.

RE[2]: Red-Hat Linux
by kwan_e on Thu 14th Jun 2018 00:42 UTC in reply to "RE: Red-Hat Linux"

"* RHEL 7 ppc64le on the Power9 CPUs
...
* For compilers, the preferred is IBM XL, but GCC is also widely used. So is PGI, but less so."


IBM XL, eh? Are they still using some ported version of the old IBM XL for Linux on Power, or are they using the newer clang-based one?

RE[3]: Red-Hat Linux
by CodeMonkey on Thu 14th Jun 2018 01:18 UTC in reply to "RE[2]: Red-Hat Linux"

At this point it's diverged from the big-endian compiler used on AIX and BlueGene systems. The first set on Linux ppc64le was ported from the UNIX compiler, but the most recent two releases for Linux ppc64le are now based on Clang 4. IBM has a pretty big investment in the new platform, so users regularly run beta releases as well, to squeeze every bit of optimization out of the physics codes they run.

One day
by quackalist on Wed 13th Jun 2018 17:36 UTC

Back, as in its rightful abode or some such? Hardly; I doubt it'll be long before the title resides elsewhere, and increasingly so.

Comment by kurkosdr
by kurkosdr on Thu 14th Jun 2018 21:46 UTC

Personally, I'm more interested in what computers with more traditional memory architectures can do, because these supercomputers are essentially glorified high-speed mesh networks. There are no smarts involved; you just line up existing boxes as far as the wallet allows.
