Last week, the US Department of Energy and IBM unveiled Summit, America’s latest supercomputer, which is expected to reclaim the title of the world’s most powerful computer from China, the current holder with its Sunway TaihuLight supercomputer.
With a peak performance of 200 petaflops, or 200,000 trillion calculations per second, Summit more than doubles the top speed of TaihuLight, which can reach 93 petaflops. Summit is also capable of over 3 billion billion mixed-precision calculations per second, or 3.3 exaops, and carries more than 10 petabytes of memory, which has allowed researchers to run the world’s first exascale scientific calculation.
The $200 million supercomputer is an IBM AC922 system comprising 4,608 compute servers, each with two 22-core IBM Power9 processors and six Nvidia Tesla V100 graphics processing unit accelerators. Summit is also (relatively) energy-efficient, drawing just 13 megawatts of power compared to the 15 megawatts TaihuLight pulls in.
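As a sanity check, the quoted peaks follow almost directly from those node counts. Here is a quick back-of-envelope sketch, assuming the published per-V100 spec-sheet peaks (roughly 7.8 TFLOPS double precision, 125 TFLOPS mixed-precision tensor ops):

```python
# Back-of-envelope check of Summit's quoted figures from the node specs above.
# Assumed per-GPU peaks for the Nvidia Tesla V100: ~7.8 TFLOPS double precision
# and ~125 TFLOPS for mixed-precision tensor-core math (spec-sheet values).
nodes = 4608
gpus_per_node = 6
v100_fp64_tflops = 7.8       # double-precision peak per V100
v100_tensor_tflops = 125.0   # mixed-precision tensor-core peak per V100

fp64_petaflops = nodes * gpus_per_node * v100_fp64_tflops / 1000
mixed_exaops = nodes * gpus_per_node * v100_tensor_tflops / 1e6

print(f"GPU double-precision peak: ~{fp64_petaflops:.0f} PF")   # ~216 PF
print(f"GPU mixed-precision peak:  ~{mixed_exaops:.2f} exaops") # ~3.46 exaops
```

That lands within a few percent of the 200 PF and 3.3 exaops quoted above; the official figures fold in some derating.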
There’s something mesmerizing about supercomputers like these. I would love to just walk through this collection of machines.
Tianhe-2, 2013, 34 petaflops
TaihuLight, 2016, 93 petaflops
Summit, 2018, 200 petaflops
Moore’s Law is holding, give or take, in the supercomputer space.
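For what it’s worth, a quick sketch of the doubling cadence implied by those three data points (peak figures as listed above):

```python
# Doubling cadence implied by the three peak figures listed above.
import math

points = [("Tianhe-2", 2013, 34), ("TaihuLight", 2016, 93), ("Summit", 2018, 200)]
(name0, y0, pf0), _, (name2, y2, pf2) = points

doublings = math.log2(pf2 / pf0)                  # ~2.56 doublings
months_per_doubling = (y2 - y0) * 12 / doublings  # ~23 months

print(f"{name0} -> {name2}: {doublings:.2f} doublings, "
      f"one every ~{months_per_doubling:.0f} months")
```

About 23 months per doubling, close to the classic two-year cadence.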
13 megawatts of power!
WOW! It needs its own power station as a PSU.
Incredible computational power there, though; it makes you wonder what’s coming next.
400 petaflops in 2020.
Achieving 1 exaflop (1,000 PF) in under 20 megawatts by 2021: that’s the DoE Exascale Computing Program. The current roadmap is that it will be a machine called Aurora, developed by Intel and Cray at Argonne National Laboratory. Details are all locked under NDA for now, though.
It’s a combined hardware/software effort, since having the machines there doesn’t necessarily mean the software can take advantage of them.
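A rough sketch of what that power target implies, using the Summit figures quoted in the article:

```python
# Back-of-envelope on the exascale power target mentioned above:
# 1 exaflop in under 20 MW, versus Summit's 200 PF in 13 MW.
summit_gflops_per_watt = 200e6 / 13e6      # 200 PF = 200e6 GF; 13 MW = 13e6 W
exascale_gflops_per_watt = 1000e6 / 20e6   # 1 EF = 1000e6 GF; 20 MW = 20e6 W

print(f"Summit:          ~{summit_gflops_per_watt:.1f} GFLOPS/W")   # ~15.4
print(f"Exascale target: ~{exascale_gflops_per_watt:.1f} GFLOPS/W") # ~50.0
print(f"Efficiency gap:  ~{exascale_gflops_per_watt / summit_gflops_per_watt:.1f}x")
```

So the target needs roughly a 3x jump in energy efficiency over Summit, which is where the hardware/software co-design comes in.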
SkyNet.
At 1.21 gigawatts, they’ll jump back to Oct 26, 1985.
‘cos the Alphabet Agencies around the world have clusters that are certainly not listed in the Top 500.
They’re very different animals. The Alphabet-type cluster resources are essentially massive compute farms that process hundreds of thousands of smaller problems simultaneously and tend to be loosely coupled. Supercomputers, on the other hand, are designed as giant single-purpose (sort of) machines. So while Google’s compute farm may be processing millions of tasks all at once, each one consuming a few cores for a few seconds, systems like Summit run physics calculations in which a single “task” consumes all of the available CPU and GPU processing power for several days at a time.
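To make the coupling difference concrete, here is a minimal sketch of the tightly coupled pattern, using mpi4py (an assumption for readability; production codes on machines like Summit are typically C/C++/Fortran with MPI):

```python
# A minimal sketch of the "tightly coupled" pattern described above, using
# mpi4py (an assumption; real Summit codes are typically C/C++/Fortran + MPI).
# Every rank holds a slice of the domain, trades halo values with its
# neighbors each step, and joins a global reduction: no rank can run ahead
# of the others, which is what distinguishes this from a loose compute farm.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

value = float(rank)  # stand-in for this rank's slice of the physics state
left, right = (rank - 1) % size, (rank + 1) % size

for step in range(10):
    # Halo exchange: send to the right neighbor, receive from the left.
    recv = comm.sendrecv(value, dest=right, source=left)
    value = 0.5 * (value + recv)  # stand-in for a local update

    # Global reduction: a synchronization point across ALL ranks every step.
    total = comm.allreduce(value, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, global sum after 10 steps: {total:.3f}")
```

Run it with something like `mpiexec -n 4 python script.py`. The allreduce forces every rank to synchronize at every step, which is exactly what a loose compute farm never has to do.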
“alphabet agencies” does not mean what you think it does.
All tremble at the power of the LOC supercomputer! Its archives contain a LOC of storage that has harnessed the most awesome power known in this universe: Books!
Reading: It’s FUNdamental.
In case someone wants to know which OS it is actually running.
https://www.networkworld.com/article/3279961/linux/red-hat-reaches-t…
I actually work in this field and have been using the summit development platform for the past two years while waiting for this new machine to come online so I can speak to it a bit:
* RHEL 7 ppc64le on the Power9 CPUs
* Despite the “beefy CPUs”, essentially all of the compute capability is from the GPUs (a rough ratio sketch follows this list)
* It mostly uses Spack (https://github.com/spack/spack) for package management of the tools and libraries that users develop against
* For compilers, the preferred one is IBM XL, but GCC is also widely used; PGI as well, though less so.
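To put a rough number on that second bullet, the per-node peak splits out as below. The ~0.5 TF per Power9 socket is my ballpark assumption; the V100 figure is the spec-sheet double-precision peak.

```python
# Rough split of per-node peak FLOPS between CPUs and GPUs on Summit.
power9_tflops_per_socket = 0.5   # assumed ballpark for a 22-core Power9 socket
v100_tflops = 7.8                # Tesla V100 double-precision spec-sheet peak

cpu = 2 * power9_tflops_per_socket   # two Power9 sockets per node
gpu = 6 * v100_tflops                # six V100s per node

print(f"CPU share of node peak: ~{100 * cpu / (cpu + gpu):.0f}%")  # ~2%
print(f"GPU share of node peak: ~{100 * gpu / (cpu + gpu):.0f}%")  # ~98%
```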
IBM XL, eh? Are they still using some ported version of the old IBM XL for Linux on Power, or are they using the newer clang-based one?
At this point it’s diverged from the big-endian compiler used on AIX and Blue Gene systems. The first set on Linux ppc64le was ported from the UNIX compiler, but the most recent two releases for Linux ppc64le are now based on Clang 4. IBM has a pretty big investment in the new platform, so users are regularly running beta releases as well to squeeze every bit of optimization out of the physics codes they run.
Back, as in its rightful abode or some such? Hardly; I doubt it’ll be long before the title resides elsewhere, and increasingly so.
Personally, I’m more interested in what computers with more traditional memory architectures can do, because those supercomputers are essentially glorified high-speed mesh networks. There are no smarts involved; you just line up existing boxes for as far as the wallet allows.