What is high-performance computing (HPC)?
HPC processes massive amounts of data and solves today’s most complex computing problems in real time or near-real time.
What is HPC?

HPC is technology that uses clusters of powerful processors, working in parallel, to process massive multi-dimensional datasets (big data) and solve complex problems at extremely high speeds. HPC systems typically perform at speeds more than one million times faster than the fastest commodity desktop, laptop or server systems.

For decades the HPC system paradigm was the supercomputer, a purpose-built computer that contains millions of processors or processor cores. Supercomputers are still with us; at this writing, the fastest supercomputer is the US-based Frontier (link resides outside ibm.com), with a processing speed of 1.102 exaflops (one exaflop is a quintillion floating point operations per second, or flops). But today, more and more organizations are running HPC solutions on clusters of high-speed computer servers, hosted on premises or in the cloud.

HPC workloads uncover important new insights that advance human knowledge and create significant competitive advantage. For example, HPC is used to sequence DNA, automate stock trading, and run artificial intelligence (AI) algorithms and simulations—like those enabling self-driving automobiles—that analyze terabytes of data streaming from IoT sensors, radar and GPS systems in real time to make split-second decisions.

How does HPC work?

A standard computing system solves problems primarily using serial computing—it divides the workload into a sequence of tasks, and then executes the tasks one after the other on the same processor.

In contrast, HPC leverages:

  • Massively parallel computing. Parallel computing runs multiple tasks simultaneously on multiple computer servers or processors. Massively parallel computing is parallel computing using tens of thousands to millions of processors or processor cores. (A simplified, single-machine sketch of the serial-versus-parallel contrast appears after this list.)

  • Computer clusters (also called HPC clusters). An HPC cluster consists of multiple high-speed computer servers networked together, with a centralized scheduler that manages the parallel computing workload. The computers, called nodes, use either high-performance multi-core CPUs or, more likely today, GPUs (graphics processing units), which are well suited for rigorous mathematical calculations, machine learning models and graphics-intensive tasks. A single HPC cluster can include 100,000 or more nodes.

  • High-performance components. All the other computing resources in an HPC cluster—networking, memory, storage and file systems—are high-speed, high-throughput and low-latency components that can keep pace with the nodes and optimize the computing power and performance of the cluster.
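
To make the serial-versus-parallel contrast above concrete, the sketch below runs the same set of CPU-bound tasks first one after another and then across a pool of worker processes. It is a simplified, single-machine illustration using Python's standard multiprocessing module and an invented simulate() task, not a picture of how production HPC clusters are programmed; real clusters coordinate work across many nodes, typically with MPI and a workload scheduler.

```python
# Minimal single-machine sketch: the same tasks run serially, then in parallel.
import time
from multiprocessing import Pool

def simulate(task_id: int) -> float:
    """Hypothetical CPU-bound task standing in for one unit of an HPC workload."""
    total = 0.0
    for i in range(1, 2_000_000):
        total += (task_id % 7 + 1) / i
    return total

if __name__ == "__main__":
    tasks = list(range(16))

    # Serial computing: tasks execute one after the other on a single processor.
    start = time.perf_counter()
    serial_results = [simulate(t) for t in tasks]
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    # Parallel computing: the same tasks run simultaneously across worker
    # processes; an HPC cluster applies the same idea at far larger scale.
    start = time.perf_counter()
    with Pool(processes=8) as pool:
        parallel_results = pool.map(simulate, tasks)
    print(f"parallel: {time.perf_counter() - start:.2f}s")
```

The speedup comes entirely from dividing independent tasks across processors; massively parallel HPC applies the same divide-and-conquer idea across thousands to millions of cores.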
HPC and cloud computing

As recently as a decade ago, the high cost of HPC—which involved owning or leasing a supercomputer or building and hosting an HPC cluster in an on-premises data center—put HPC out of reach for most organizations.

Today HPC in the cloud—sometimes called HPC as a service, or HPCaaS—offers a significantly faster, more scalable and more affordable way for companies to take advantage of HPC. HPCaaS typically includes access to HPC clusters and infrastructure hosted in a cloud service provider’s data center, plus ecosystem capabilities (such as AI and data analytics) and HPC expertise.

Today HPC in the cloud is driven by three converging trends:

  • Surging demand. Organizations across all industries are becoming increasingly dependent on the real-time insights and competitive advantage that result from solving the complex problems only HPC apps can solve. For example, credit card fraud detection—something virtually all of us rely on and most of us have experienced at one time or another—relies increasingly on HPC to identify fraud faster and reduce annoying false positives, even as fraud activity expands and fraudsters’ tactics change constantly.

  • Prevalence of lower-latency, higher-throughput RDMA networking. RDMA—remote direct memory access—enables one networked computer to access another networked computer’s memory without involving either computer’s operating system or interrupting either computer’s processing. This helps minimize latency and maximize throughput. Emerging high-performance RDMA fabrics—including InfiniBand, Virtual Interface Architecture and RDMA over Converged Ethernet (RoCE)—are making cloud-based HPC possible.

  • Widespread public-cloud and private-cloud HPCaaS availability. Today every leading public cloud service provider offers HPC services. And while some organizations continue to run highly regulated or sensitive HPC workloads on premises, many are adopting or migrating to private-cloud HPC solutions offered by hardware and solution vendors.
HPC use cases

HPC applications have become synonymous with AI apps in general, and with machine learning and deep learning apps in particular; today most HPC systems are created with these workloads in mind. These HPC applications are driving continuous innovation in:

Healthcare, genomics and life sciences. The first attempt to sequence a human genome took 13 years; today, HPC systems can do the job in less than a day. Other HPC applications in healthcare and life sciences include drug discovery and design, rapid cancer diagnosis, and molecular modeling.

Financial services. In addition to automated trading and fraud detection (noted above), HPC powers applications in Monte Carlo simulation and other risk analysis methods.
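
To illustrate why Monte Carlo risk analysis maps so naturally onto HPC, the sketch below estimates a 99% value-at-risk figure for a toy portfolio by simulating a large number of independent return paths. The portfolio weights, return assumptions and function names are invented for illustration; the relevant point is that each simulated path is independent of the others, so the workload splits cleanly across many cores or cluster nodes.

```python
# Minimal single-machine Monte Carlo risk sketch; all parameters are hypothetical.
import numpy as np

def simulate_portfolio_losses(n_paths: int, seed: int) -> np.ndarray:
    """Simulate end-of-horizon losses for a toy two-asset portfolio."""
    rng = np.random.default_rng(seed)
    # Assumed daily return distribution: per-asset mean and volatility.
    mean = np.array([0.0004, 0.0002])
    vol = np.array([0.01, 0.02])
    weights = np.array([0.6, 0.4])
    horizon_days = 10
    # Each path draws its own daily returns; independence across paths is what
    # lets an HPC cluster split the simulation across many nodes.
    daily_returns = rng.normal(mean, vol, size=(n_paths, horizon_days, 2))
    portfolio_returns = (daily_returns @ weights).sum(axis=1)
    return -portfolio_returns  # a loss is a negative return

if __name__ == "__main__":
    losses = simulate_portfolio_losses(n_paths=1_000_000, seed=42)
    var_99 = np.quantile(losses, 0.99)
    print(f"Estimated 99% value-at-risk over 10 days: {var_99:.4%}")
```

On an HPC cluster, the same computation would typically be partitioned so that each node simulates its own batch of paths with a distinct random seed, with only the loss samples (or their quantiles) gathered at the end.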

Government and defense. Two growing HPC use cases in this area are weather forecasting and climate modeling, both of which involve processing vast amounts of historical meteorological data and millions of daily changes in climate-related data points. Other government and defense applications include energy research and intelligence work.

Energy. In some cases overlapping with government and defense, energy-related HPC applications include seismic data processing, reservoir simulation and modeling, geospatial analytics, wind simulation and terrain mapping.

Related solutions
High performance computing on IBM Cloud

Whether your workload requires a hybrid cloud environment or one contained in the cloud, IBM Cloud has the high-performance computing solution to meet your needs. 

Discover HPC on IBM Cloud
AI infrastructure

To meet today's challenges and prepare for the future, you need IBM AI solutions that integrate with your infrastructure and data strategy.

Discover AI infrastructure solutions
HPC workload management

The IBM Spectrum® LSF Suites portfolio redefines cluster virtualization and workload management by providing an integrated solution for mission-critical HPC environments. 

Explore Spectrum LSF Suites
Quantum computing systems

IBM is currently the only company offering the full quantum technology stack with the most advanced hardware, integrated systems and cloud services.

Explore quantum computing systems
Resources
History of supercomputing at IBM

IBM has a rich history of supercomputing and is widely regarded as a pioneer in the field, with highlights such as the Apollo program, Deep Blue, Watson and more.

Latest in HPC

Get the latest in HPC insights, news and technical blog posts.

Roadmap for building an open quantum software ecosystem

Quantum computing is on the verge of sparking a paradigm shift. Software reliant on this nascent technology, rooted in nature's physical laws, could soon revolutionize computing forever.