Researchers grappling with today’s grand challenges are getting traction with accelerated computing, as showcased at ISC, Europe’s annual gathering of supercomputing experts.
Some are building digital twins to simulate new energy sources. Others are using AI with HPC to peer deep into the human brain.
Still others are taking HPC to the edge with highly sensitive instruments or accelerating simulations on hybrid quantum systems, said Ian Buck, vice president of accelerated computing at NVIDIA, in a special address at ISC in Hamburg.
Delivering 10 AI Exaflops
For example, a new supercomputer at Los Alamos National Laboratory (LANL) called Venado will deliver 10 exaflops of AI performance to advance work in areas such as materials science and renewable energy.
Named after a peak in northern New Mexico, the system combines NVIDIA GPUs, CPUs and DPUs, with which LANL researchers are targeting 30x speedups in their computational multi-physics applications.
Venado will use NVIDIA Grace Hopper Superchips to run workloads up to 3x faster than prior GPUs. It also packs NVIDIA Grace CPU Superchips to provide twice the performance per watt of traditional CPUs on a long tail of unaccelerated applications.
BlueField Gathers Momentum
The LANL system is among the latest of many around the world to embrace NVIDIA BlueField DPUs to offload and accelerate communications and storage tasks from host CPUs.
Similarly, the Texas Advanced Computing Center is adding BlueField-2 DPUs to the NVIDIA Quantum InfiniBand network on Lonestar6. It will become a development platform for cloud-native supercomputing, hosting multiple users and applications with bare-metal performance while securely isolating workloads.
“That’s the architecture of choice for next-generation supercomputing and HPC clouds,” said Buck.
Exascale in Europe
In Europe, NVIDIA and SiPearl are collaborating to expand the ecosystem of developers building exascale computing on Arm. The work will help the region’s users port applications to systems that use SiPearl’s Rhea and future Arm-based CPUs together with NVIDIA accelerated computing and networking technologies.
Japan’s Center for Computational Sciences, at the University of Tsukuba, is pairing NVIDIA H100 Tensor Core GPUs and x86 CPUs on an NVIDIA Quantum-2 InfiniBand platform. The new supercomputer will tackle jobs in climatology, astrophysics, big data, AI and more.
The new system will join the 71% of systems on the latest TOP500 list of supercomputers that have adopted NVIDIA technologies. In addition, 80% of new systems on the list use NVIDIA GPUs, networks or both, and NVIDIA’s networking platform is the most popular interconnect for TOP500 systems.
HPC users adopt NVIDIA technologies because they deliver the highest application performance for established supercomputing workloads — simulation, machine learning, real-time edge processing — as well as emerging workloads like quantum simulations and digital twins.
Powering Up With Omniverse
Showing what these systems can do, Buck played a demo of a virtual fusion power plant that researchers at the U.K. Atomic Energy Authority and the University of Manchester are building in NVIDIA Omniverse. The digital twin aims to simulate in real time the entire power station, its robotic components — even the behavior of the fusion plasma at its core.
NVIDIA Omniverse, a 3D design collaboration and world simulation platform, lets distant researchers on the project work together in real time while using different 3D applications. They aim to enhance their work with NVIDIA Modulus, a framework for creating physics-informed AI models.
“It’s incredibly intricate work that’s paving the way for tomorrow’s clean renewable energy sources,” said Buck.
AI for Medical Imaging
Separately, Buck described how researchers created a library of 100,000 synthetic images of the human brain on NVIDIA Cambridge-1, a supercomputer dedicated to advancing healthcare with AI.
A team from King’s College London used MONAI, an AI framework for medical imaging, to generate lifelike images that can help researchers see how diseases like Parkinson’s develop.
“This is a great example of HPC+AI making a real contribution to the scientific and research community,” said Buck.
HPC at the Edge
Increasingly, HPC work extends beyond the supercomputer center. Observatories, satellites and new kinds of lab instruments need to stream and visualize data in real time.
For example, researchers working on lightsheet microscopy at Lawrence Berkeley National Lab are using NVIDIA Clara Holoscan to see life in real time at nanometer scale, a job that would take several days on CPUs.
To help bring supercomputing to the edge, NVIDIA is developing Holoscan for HPC, a highly scalable version of its imaging software designed to accelerate scientific discovery. It will run across accelerated platforms, from Jetson AGX modules and appliances to quad A100 servers.
“We can’t wait to see what researchers will do with this software,” said Buck.
Speeding Quantum Simulations
In yet another vector of supercomputing, Buck reported on the rapid adoption of NVIDIA cuQuantum, a software development kit to accelerate quantum circuit simulations on GPUs.
Dozens of organizations are already using it in research across many fields. It’s integrated into major quantum software frameworks so users can access GPU acceleration without any additional coding.
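To see why GPU acceleration matters here, note that state-vector simulation of a quantum circuit doubles in cost with every added qubit. The tiny pure-Python sketch below (an illustration of the underlying math, not cuQuantum's API) applies Hadamard gates to a 3-qubit state vector — the kind of linear-algebra kernel that cuQuantum executes on GPUs at far larger scale:

```python
# Minimal state-vector simulation sketch: apply Hadamard gates to |000>.
# Illustrative only -- cuQuantum performs this kind of math on GPUs for
# circuits far too large for a naive Python loop.
import math

def apply_hadamard(state, target, n_qubits):
    """Apply a Hadamard gate to the `target` qubit of an n-qubit state vector."""
    h = 1.0 / math.sqrt(2.0)
    new_state = state[:]
    step = 1 << target  # bit mask selecting the target qubit
    for i in range(1 << n_qubits):
        if i & step == 0:
            a, b = state[i], state[i | step]
            new_state[i] = h * (a + b)          # |...0...> amplitude
            new_state[i | step] = h * (a - b)   # |...1...> amplitude
    return new_state

n = 3
state = [0.0] * (1 << n)  # 2^n amplitudes -- cost doubles per added qubit
state[0] = 1.0            # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)
# Result: a uniform superposition with every amplitude equal to 1/sqrt(8)
```

With 30+ qubits the state vector holds billions of amplitudes, which is where GPU memory bandwidth and parallelism become essential.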
Most recently, AWS announced the availability of cuQuantum in its Braket service and demonstrated how cuQuantum can provide up to a 900x speedup on quantum machine learning workloads while reducing costs 3.5x.
“Quantum computing has tremendous potential, and simulating quantum computers on GPU supercomputers is essential to move us closer to valuable quantum computing,” said Buck. “We’re really excited to be at the forefront of this work,” he added.
A video of the full address will be posted here Tuesday, March 31 at 9am PT.