Supercomputing is computing performed by supercomputers, the fastest computers in the world. Supercomputers are made up of interconnects, I/O systems, memory and processor cores.
Unlike traditional computers, supercomputers use more than one central processing unit (CPU). These CPUs are grouped into compute nodes, each comprising a processor or a group of processors in a symmetric multiprocessing (SMP) arrangement, plus a memory block. At scale, a supercomputer can contain tens of thousands of nodes. With interconnect communication capabilities, these nodes can collaborate on solving a specific problem. Nodes also use interconnects to communicate with I/O systems, such as data storage and networking.
Note that because of modern supercomputers' power consumption, the data centers that house them require cooling systems and suitable facilities.
Machine learning algorithms will help supply medical researchers with a comprehensive view of the US cancer population at a granular level of detail.
Deep learning could help scientists identify materials for better batteries, more resilient building materials and more efficient semiconductors.
Because supercomputers are often used to run artificial intelligence (AI) programs, supercomputing has become closely associated with AI. That association exists because AI programs require the high-performance computing that supercomputers offer. In other words, supercomputers can handle the types of workloads typically needed for AI applications.
For example, IBM built the Summit and Sierra supercomputers with big data and AI workloads in mind. They're helping model supernovas, pioneer new materials and explore cancer, genetics and the environment, using technologies available to all businesses.
Supercomputing performance is measured in floating-point operations per second (FLOPS). A petaflop is a measure of a computer's processing speed equal to a thousand trillion flops, so a 1-petaflop computer system can perform one quadrillion (10¹⁵) flops. From a different perspective, a supercomputer can have one million times more processing power than the fastest laptop.
According to the TOP500 list (link resides outside of ibm.com), the world's fastest supercomputer is Japan's Fugaku at a speed of 442 petaflops as of June 2021. IBM supercomputers, Summit and Sierra, garner the second and third spots, clocking in at 148.8 and 94.6 petaflops, respectively. Summit is located at Oak Ridge National Laboratory, a US Department of Energy facility in Tennessee. Sierra is located at the Lawrence Livermore National Laboratory in California.
To put today's speeds into perspective, when the Cray-1 was installed at Los Alamos National Laboratory in 1976, it managed a speed of around 160 megaflops, where one megaflop equals one million (10⁶) flops.
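The unit arithmetic above can be checked with a short script. The machine speeds are the figures quoted in this article; everything else is plain exponent math:

```python
# SI prefixes for floating-point operations per second (FLOPS)
MEGA = 10**6   # one megaflop = one million flops
PETA = 10**15  # one petaflop = one quadrillion flops

# Speeds quoted above, converted to raw FLOPS
fugaku = 442 * PETA   # Fugaku, June 2021 TOP500 list
cray_1 = 160 * MEGA   # Cray-1, installed 1976

# How many times faster is Fugaku than the Cray-1?
speedup = fugaku / cray_1
print(f"Fugaku is roughly {speedup:.1e}x faster than the Cray-1")
# prints: Fugaku is roughly 2.8e+09x faster than the Cray-1
```

That is a speedup of nearly three billion times in roughly 45 years.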
The term supercomputing is sometimes used interchangeably with other types of computing, which can cause confusion. To clarify the similarities and differences between computing types, here are some common comparisons.
While supercomputing typically refers to the process of complex and large calculations used by supercomputers, high-performance computing (HPC) is the use of multiple supercomputers to process complex and large calculations. Both terms are often used interchangeably.
Supercomputers are sometimes called parallel computers because supercomputing can use parallel processing, in which multiple CPUs work on solving a single calculation at the same time. However, HPC scenarios also use parallelism, and not necessarily on a supercomputer.
Another distinction is that supercomputers can use other processor systems, such as vector processors, scalar processors or multithreaded processors.
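As a minimal illustration of the parallel processing described above, here is a toy Python sketch using the standard multiprocessing module. It is an analogy only: each worker process plays the role of a CPU taking one slice of a single calculation, whereas a real supercomputer coordinates thousands of nodes over high-speed interconnects, typically with a framework such as MPI.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker ("CPU") sums its own slice of the range."""
    start, end = bounds
    return sum(range(start, end))

def distributed_sum(n, workers=4):
    """Split the single calculation sum(0..n-1) into one chunk per
    worker, compute the chunks in parallel, then combine the results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)

if __name__ == "__main__":
    # Same answer as the serial sum(range(1_000_000)), computed in parallel
    print(distributed_sum(1_000_000))  # prints 499999500000
```

The pattern of splitting one problem into independent pieces, computing them concurrently and merging the results is the same idea that supercomputers apply at vastly larger scale.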
Quantum computing is a computing model that harnesses the laws of quantum mechanics to process data, performing computations based on probabilities. It aims to solve complex problems the world's most powerful supercomputers can't solve and never will.
Supercomputing has evolved over many decades, beginning when the Colossus machine was put into operation at Bletchley Park in the 1940s. Colossus, the first functional electronic digital computer, was designed by Tommy Flowers, a General Post Office (GPO) research telephone engineer.
The groundwork was laid in the late 1950s, when the US government began regularly funding the development of cutting-edge, high-performance computer technology for military applications. The term supercomputer itself came into use in the early 1960s, when IBM rolled out the IBM 7030 Stretch and Sperry Rand unveiled the UNIVAC LARC, the first two purpose-built supercomputers, designed to be more powerful than the fastest commercial machines available at the time.
Although supercomputers were initially produced in limited quantities for the government, the technology developed would make its way into the industrial and commercial mainstreams. For example, two US companies, Control Data Corporation (CDC) and Cray Research, led the commercial supercomputer industry from the mid-1960s to the late 1970s. The CDC 6600, designed by Seymour Cray, is considered the first successful commercial supercomputer. IBM would later become a commercial industry leader from the 1990s through today.
HPC solutions help conquer the world's biggest challenges by combatting cancer and identifying next-gen materials.
To meet today's challenges and prepare for the future, you need AI solutions integrated with your infrastructure and data strategy.
Deliver the future of hybrid cloud with Power10, designed to improve energy efficiency, capacity and performance.
Eliminate I/O bottlenecks and share memory across GPUs and CPUs, yielding faster insights and more accurate models.
Optimize your IT infrastructure on-premises and in the cloud with the flexibility and control that comes with open-source development.
See how enterprises use supercomputing in their industries, like biologically remodeling liquid water (fluid dynamics) or bringing solar electricity and heat to remote locations.
Learn what open source means, how it compares to closed source software and how it has evolved.
Learn what artificial intelligence is, about its types, the difference between deep learning and machine learning, and how AI is applied.