What is supercomputing?

Supercomputing is a form of high-performance computing in which a powerful computer, a supercomputer, performs large or complex calculations, reducing overall time to solution.

What is supercomputing technology?

Supercomputing technology comprises supercomputers, the fastest computers in the world. Supercomputers are made up of interconnects, I/O systems, memory and processor cores.

Unlike traditional computers, supercomputers use more than one central processing unit (CPU). These CPUs are grouped into compute nodes, comprising a processor or a group of processors—symmetric multiprocessing (SMP)—and a memory block. At scale, a supercomputer can contain tens of thousands of nodes. With interconnect communication capabilities, these nodes can collaborate on solving a specific problem. Nodes also use interconnects to communicate with I/O systems, like data storage and networking.
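The pattern described above — nodes doing local work and communicating over an interconnect — can be sketched as a toy analogy. This is illustrative only and not real HPC code: real supercomputers run processes on physical nodes linked by high-speed interconnects, while here threads and a shared queue merely stand in for nodes and the interconnect.

```python
import threading
import queue

# Toy analogy (illustrative only): each "node" computes on its own slice
# of the data, then reports its partial result over a shared queue that
# stands in for the interconnect. A coordinating "head node" aggregates.

def node(node_id, data_slice, interconnect):
    partial = sum(data_slice)              # local computation on this node
    interconnect.put((node_id, partial))   # communicate over the "interconnect"

def solve_across_nodes(data, n_nodes=4):
    interconnect = queue.Queue()
    chunk = len(data) // n_nodes
    workers = [
        threading.Thread(target=node,
                         args=(i, data[i * chunk:(i + 1) * chunk], interconnect))
        for i in range(n_nodes)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # The head node combines the partial results into the final answer.
    return sum(interconnect.get()[1] for _ in range(n_nodes))

print(solve_across_nodes(list(range(100))))  # 4950
```

The function name `solve_across_nodes` and the four-node split are arbitrary choices for the sketch; the point is the divide-compute-communicate-aggregate structure.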

Note that because of modern supercomputers' power consumption, the data centers that house them require cooling systems and other suitable facilities.


Supercomputing and AI

Because supercomputers are often used to run artificial intelligence programs, supercomputing has become closely associated with AI. This is because AI programs demand the high-performance computing that supercomputers offer. In other words, supercomputers can handle the workloads typically required by AI applications.

For example, IBM built Summit and Sierra supercomputers with big data and AI workloads in mind. They're helping model supernovas, pioneer new materials, and explore cancer, genetics and the environment, using technologies available to all businesses.


How fast is supercomputing?

Supercomputing performance is measured in floating-point operations per second (FLOPS). A petaflop equals one quadrillion (10^15) FLOPS, so a 1-petaflop computer system can perform a thousand trillion floating-point operations every second. For perspective, a supercomputer can deliver roughly a million times the processing power of the fastest laptop.
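The unit relationships above are just powers of ten, which a few lines of arithmetic make concrete. The 400-gigaflop laptop figure below is an assumed, illustrative number, not a measured benchmark.

```python
# Orders of magnitude for computing speed, in floating-point
# operations per second (FLOPS). Pure arithmetic, no libraries needed.

MEGAFLOP = 10**6    # one million FLOPS
PETAFLOP = 10**15   # one quadrillion (a thousand trillion) FLOPS
EXAFLOP = 10**18    # one quintillion FLOPS

# One exaflop equals 1,000 petaflops.
assert EXAFLOP == 1_000 * PETAFLOP

# Rough illustration (assumed figure): a fast laptop at a few hundred
# gigaflops versus a 442-petaflop machine such as Fugaku.
laptop_flops = 400 * 10**9          # hypothetical 400-gigaflop laptop
fugaku_flops = 442 * PETAFLOP
print(f"{fugaku_flops // laptop_flops:,}x faster")  # on the order of a million
```

With these assumed numbers the ratio comes out to about 1.1 million, consistent with the "one million times" comparison in the text.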

What's the fastest supercomputer?

According to the TOP500 list (link resides outside of ibm.com), the world's fastest supercomputer is Japan's Fugaku, at a speed of 442 petaflops as of June 2021. IBM's Summit and Sierra supercomputers take the second and third spots, clocking in at 148.8 and 94.6 petaflops, respectively. Summit is located at Oak Ridge National Laboratory, a US Department of Energy facility in Tennessee. Sierra is located at the Lawrence Livermore National Laboratory in California.

Intel is expected to launch Aurora, a supercomputer with exascale computing capability, later in 2021. Exascale computing is supercomputing at or above 10^18 floating-point operations per second. This speed is equal to one quintillion flops, or 1,000 petaflops.

To put today's speeds into perspective: when the Cray-1 was installed at Los Alamos National Laboratory in 1976, it managed a speed of around 160 megaflops, where one megaflop is one million (10^6) FLOPS.
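Using the figures quoted above, the gap between the Cray-1 and a modern machine can be computed directly; this is a back-of-the-envelope comparison of peak speeds, not a rigorous benchmark.

```python
# Comparing the Cray-1 (~160 megaflops, 1976) with Fugaku
# (442 petaflops, 2021), using the figures cited in the text.

cray1_flops = 160 * 10**6    # ~160 megaflops
fugaku_flops = 442 * 10**15  # 442 petaflops

ratio = fugaku_flops / cray1_flops
print(f"Fugaku is roughly {ratio:.2e} times faster than the Cray-1")
```

The ratio works out to roughly 2.8 billion, a sense of how far hardware has come in 45 years.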


Supercomputing versus…

The term supercomputing is sometimes used interchangeably with other types of computing, which can cause confusion. To clarify the similarities and differences between computing types, here are some common comparisons.

Supercomputing vs. HPC

While supercomputing typically refers to running large, complex calculations on a supercomputer, high-performance computing (HPC) is the use of multiple supercomputers to process such calculations. The two terms are often used interchangeably.

Supercomputing vs. parallel computing

Supercomputers are sometimes called parallel computers because supercomputing relies on parallel processing, in which multiple CPUs work on a single calculation at the same time. However, HPC scenarios also use parallelism without necessarily running on a supercomputer.
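Parallel processing as just described — multiple CPUs cooperating on a single calculation — can be sketched with Python's standard multiprocessing module. This is a minimal single-machine analogy, not supercomputer code; the function names are invented for the example.

```python
from multiprocessing import Pool

# Minimal sketch of parallel processing: several CPU cores cooperate on
# one calculation (a sum of squares) by each handling a chunk of the input.

def partial_sum_of_squares(chunk):
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(values, workers=4):
    size = max(1, len(values) // workers)
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    with Pool(processes=workers) as pool:
        # Each worker process computes its chunk; the results are combined.
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

On a real supercomputer the same divide-and-combine idea spans thousands of nodes rather than a handful of local processes, typically via message-passing libraries such as MPI.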

Another distinction is that supercomputers can use other processor systems, such as vector processors, scalar processors or multithreaded processors.

Supercomputing vs. quantum computing

Quantum computing is a computing model that harnesses the laws of quantum mechanics to process data, performing computations based on probabilities. It aims to solve complex problems that the world's most powerful supercomputers can't solve, and never will.


History of supercomputing

When did supercomputing start?

Supercomputing has evolved over many decades since the Colossus machine was put into operation at Bletchley Park in the 1940s. Colossus, the first functional electronic digital computer, was designed by Tommy Flowers, a General Post Office (GPO) research telephone engineer.

When was the first supercomputer invented?

The term supercomputer came into use in the early 1960s, when IBM rolled out the IBM 7030 Stretch, and Sperry Rand unveiled the UNIVAC LARC, the first two intentional supercomputers designed to be more powerful than the fastest commercial machines available at the time. Events that influenced the progress of supercomputing began in the late 1950s when the US government began regularly funding the development of cutting-edge, high-performance computer technology for military applications.

Although supercomputers were initially produced in limited quantities for the government, the technology developed would make its way into the industrial and commercial mainstreams. For example, two US companies, Control Data Corporation (CDC) and Cray Research, led the commercial supercomputer industry from the mid-1960s to the late 1970s. The CDC 6600, designed by Seymour Cray, is considered the first successful commercial supercomputer. IBM would later become a commercial industry leader from the 1990s through today.


Related solutions

HPC solutions

HPC solutions help conquer the world's biggest challenges, from combating cancer to identifying next-generation materials.

AI IT infrastructure solutions

To meet today's challenges and prepare for the future, you need AI solutions integrated with your infrastructure and data strategy.

Supercomputing with Power10 chips

Deliver the future of hybrid cloud with the Power10 processor chip, designed to improve energy efficiency, capacity and performance.

Accelerated computing servers

Eliminate I/O bottlenecks and share memory across GPUs and CPUs, yielding faster insights and more accurate models.

Linux servers and operating systems

Optimize your IT infrastructure on-premises and in the cloud with the flexibility and control that comes with open-source development.