What is high-performance computing (HPC)?
What is HPC?

High-performance computing (HPC) is technology that uses clusters of powerful processors that work in parallel to process massive multi-dimensional data sets, also known as big data, and solve complex problems at extremely high speeds. HPC solves some of today’s most complex computing problems in real time.

HPC systems typically run more than one million times faster than the fastest commodity desktop, laptop or server systems.

For decades, supercomputers, purpose-built computers containing millions of processors or processor cores, were key to high-performance computing. Supercomputers are still with us; at this writing, the fastest is the US-based Frontier (link resides outside ibm.com), with a processing speed of 1.102 exaflops (one exaflop is one quintillion floating point operations per second, or flops). But today, more organizations run HPC services on clusters of high-speed computer servers, hosted on premises or in the cloud.

HPC workloads uncover important new insights that advance human knowledge and create significant competitive advantages. For example, HPC sequences DNA, automates stock trading, and runs artificial intelligence (AI) algorithms and simulations—such as those enabling self-driving automobiles—that analyze terabytes of data streaming from IoT sensors, radar and GPS systems in real time to make split-second decisions.


How does HPC work?

A standard computing system solves problems primarily by using serial computing—it divides the workload into a sequence of tasks and then runs the tasks one after the other on the same processor.
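To make that concrete, here is a minimal Python sketch of serial execution; the simulate function is a made-up stand-in for any compute-heavy task:

    import math
    import time

    def simulate(task_id: int) -> float:
        # Stand-in for one unit of compute-heavy work (hypothetical workload).
        return sum(math.sqrt(i) for i in range(2_000_000))

    start = time.perf_counter()
    results = [simulate(t) for t in range(8)]  # tasks run one after the other
    print(f"serial: {time.perf_counter() - start:.2f} s")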

In contrast, HPC uses:

Massively parallel computing

Parallel computing runs multiple tasks simultaneously on multiple computer servers or processors. Massively parallel computing is parallel computing that uses tens of thousands to millions of processors or processor cores.
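As an illustration, the serial sketch above can be parallelized across the cores of a single machine with Python's standard multiprocessing module; an HPC system applies the same idea across thousands of machines. This is a minimal sketch, not a production HPC code:

    import math
    import time
    from multiprocessing import Pool

    def simulate(task_id: int) -> float:
        # Same stand-in workload as the serial sketch above.
        return sum(math.sqrt(i) for i in range(2_000_000))

    if __name__ == "__main__":
        start = time.perf_counter()
        with Pool() as pool:  # one worker process per CPU core by default
            results = pool.map(simulate, range(8))  # tasks run simultaneously
        print(f"parallel: {time.perf_counter() - start:.2f} s")

On an eight-core machine, the parallel version finishes in roughly one-eighth the time of the serial loop.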

Computer clusters (also called HPC clusters)

An HPC cluster consists of multiple high-speed computer servers networked together, with a centralized scheduler that manages the parallel computing workload. The computers, called nodes, use either high-performance multi-core CPUs or—more likely today—GPUs, which are well suited for rigorous mathematical calculations, machine learning models and graphics-intensive tasks. A single HPC cluster can include 100,000 or more nodes.
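Cluster workloads are commonly written against the Message Passing Interface (MPI): the scheduler launches one copy of a program per node (or per core), and each copy processes its own slice of the data. The following sketch assumes an MPI installation and the mpi4py package, and assumes for simplicity that the process count divides the data size evenly:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's ID within the job
    size = comm.Get_size()  # total number of processes in the job

    # Each process computes a partial sum over its own slice of the range.
    n = 10_000_000
    chunk = n // size
    local = np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)
    local_sum = np.sqrt(local).sum()

    # Combine the partial results on process 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum of sqrt over {n} values: {total:.2f}")

Launched with a command such as mpirun -n 4 python sum_sqrt.py (the script name is arbitrary), the processes are placed on available nodes, typically under the scheduler's control, and MPI handles the communication between them.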

High-performance components

All the other computing resources in an HPC cluster, such as networking, memory, storage and file systems, are high-speed, high-throughput and low-latency components that can keep pace with the nodes and optimize the computing power and performance of the cluster.

HPC and cloud computing

As recently as a decade ago, the high cost of HPC—which involved owning or leasing a supercomputer or building and hosting an HPC cluster in an on-premises data center—put HPC out of reach for most organizations.

Today HPC in the cloud, sometimes called HPC as a service (HPCaaS), offers a significantly faster, more scalable and more affordable way for companies to take advantage of HPC. HPCaaS typically includes access to HPC clusters and infrastructure hosted in a cloud service provider’s data center, plus ecosystem capabilities (such as AI and data analytics) and HPC expertise.

Today, three converging trends drive HPC in the cloud:

Surging demand

Organizations across all industries are becoming increasingly dependent on the real-time insights and competitive advantage that result from solving the complex problems only HPC apps can solve. For example, credit card fraud detection, something all of us rely on and most of us have experienced at one time or another, depends increasingly on HPC to identify fraud faster and reduce annoying false positives, even as fraud activity expands and fraudsters’ tactics change constantly.

Prevalence of lower-latency, higher-throughput RDMA networking

Remote direct memory access (RDMA) enables one networked computer to access another networked computer’s memory without involving either computer’s operating system or interrupting either computer’s processing. This helps minimize latency and maximize throughput. Emerging high-performance RDMA fabrics, including InfiniBand, Virtual Interface Architecture and RDMA over Converged Ethernet (RoCE), are making cloud-based HPC practical.
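Application code rarely issues RDMA operations directly; instead, HPC communication libraries such as MPI can use RDMA-capable fabrics under the hood to move buffers straight between node memories. As a hedged illustration (again assuming mpi4py, and exactly two processes), this point-to-point transfer is the kind of operation an MPI library can map onto RDMA hardware:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.empty(1_000_000, dtype=np.float64)  # pre-allocated buffer
    if rank == 0:
        buf[:] = 42.0
        comm.Send(buf, dest=1, tag=0)    # ship the raw buffer to process 1
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)  # receive directly into local memory
        print(f"rank 1 received, first value = {buf[0]}")

Run with a launcher such as mpirun -n 2 python transfer.py (the script name is arbitrary). Because Send and Recv operate on pre-allocated, contiguous buffers, an RDMA-capable interconnect can transfer the data without extra copies through either operating system.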

Widespread public-cloud and private-cloud HPCaaS availability

Today every leading public cloud service provider offers HPC services. And while some organizations continue to run highly regulated or sensitive HPC workloads on premises, many are adopting or migrating to private-cloud HPC services offered by hardware and solution vendors.

HPC use cases

HPC applications have become closely identified with AI applications in general, and with machine learning and deep learning applications in particular; today most HPC systems are created with these workloads in mind. These HPC applications are driving continuous innovation in:

Healthcare, genomics and life sciences

The first attempt to sequence a human genome took 13 years; today, HPC systems can do the job in less than a day. Other HPC applications in healthcare and life sciences include drug discovery and design, rapid cancer diagnosis and molecular modeling.

Financial services

In addition to automated trading and fraud detection, HPC powers applications in Monte Carlo simulation and other risk analysis methods.
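As a sketch of the Monte Carlo technique, the following estimates one-day value at risk (VaR) for a single position using made-up portfolio figures and an assumed normal distribution of daily returns; a production risk system runs vastly larger scenario counts, which is where HPC comes in:

    import numpy as np

    rng = np.random.default_rng(seed=42)

    portfolio_value = 1_000_000.0  # USD (hypothetical)
    mu, sigma = 0.0005, 0.02       # assumed daily return mean and volatility

    # Simulate one million one-day return scenarios.
    n_scenarios = 1_000_000
    returns = rng.normal(mu, sigma, n_scenarios)
    losses = -portfolio_value * returns

    # The 99% VaR is the loss exceeded in only 1% of scenarios.
    var_99 = np.percentile(losses, 99)
    print(f"1-day 99% VaR: ${var_99:,.0f}")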

Government and defense

Two growing HPC use cases in this area are weather forecasting and climate modeling, both of which involve processing vast amounts of historical meteorological data and millions of daily changes in climate-related data points. Other government and defense applications include energy research and intelligence work.

Energy

In use cases that sometimes overlap with government and defense, energy-related HPC applications include seismic data processing, reservoir simulation and modeling, geospatial analytics, wind simulation and terrain mapping.

Related solutions
High-performance computing on IBM Cloud®

Whether your workload requires a hybrid cloud environment or one contained in the cloud, IBM Cloud has the high-performance computing tools to meet your needs. 

Discover HPC on IBM Cloud
AI infrastructure

To meet today's challenges and prepare for the future, you need IBM® AI software that integrates with your infrastructure and data strategy.

Discover AI infrastructure solutions
HPC workload management

The IBM Spectrum® LSF Suites portfolio redefines cluster virtualization and workload management by providing an integrated system for mission-critical HPC environments. 

Explore IBM Spectrum LSF Suites
Quantum computing systems

IBM is currently the only company offering the full quantum technology stack with the most advanced hardware, integrated systems and cloud services.

Explore quantum computing systems
Resources
What is supercomputing?

Supercomputing is a form of high-performance computing that uses a powerful computer, a supercomputer, to reduce overall time to solution.

IBM’s roadmap for building an open quantum software ecosystem

Quantum computing is on the verge of sparking a paradigm shift. Software that relies on this nascent technology, rooted in nature's physical laws, could soon revolutionize computing forever.

Take the next step

IBM offers a complete portfolio of integrated high-performance computing (HPC) solutions for hybrid cloud, which gives you the flexibility to manage compute-intensive workloads on premises or in the cloud. 

Explore HPC solutions