Icons of Progress
 

Breaking the Petaflop Barrier


Most people associate the word “flop” with something a rabbit’s ear does or what they themselves do on the sofa after a hard day at work. But in computing language, FLOPS is an acronym for FLoating point OPerations per Second—a critical measure of computing power and speed.

A gigaflop is one billion floating-point operations per second, a teraflop is one trillion, and a petaflop is one quadrillion. FLOPS particularly matter when you are talking about high-performance computing.
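To make those prefixes concrete, here is a short sketch in Python. The workload size of one quintillion operations is a hypothetical round number chosen purely for illustration, not a figure from this article.

```python
# Illustrative only: what the giga/tera/peta prefixes mean in practice.
# The workload size below is a hypothetical round number.

RATES = {
    "gigaflop (10^9 ops/s)": 1e9,
    "teraflop (10^12 ops/s)": 1e12,
    "petaflop (10^15 ops/s)": 1e15,
}

operations = 1e18  # a hypothetical job of one quintillion floating-point operations

for name, rate in RATES.items():
    seconds = operations / rate
    print(f"At one {name}: {seconds:,.0f} seconds")

# At one gigaflop the job takes about 32 years, at one teraflop about 12 days,
# and at one petaflop roughly 17 minutes.
```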

Ever since IBM started creating machines for business in the early days of the twentieth century, the goal has always been to help clients increase efficiency and speed to improve their bottom line. From punched cards, to tapes, to electric typewriters—it was all about enhancing productivity.

But when modern-day computers came on the scene, followed by the Internet, something new emerged: data. And since that time, the amount of data and the number of data sources have grown exponentially every day, month and year, with no sign of slowing down. This is the era of “Big Data.”

Fortunately, with the emergence of high-performance computing, we now have systems that are capable of handling staggering amounts of data in processing times that seemed unimaginable only a few years ago.

Long viewed as the next crucial milestone in high-performance computing, achieving the petaflop—one quadrillion, or a thousand trillion calculations per second—had been the goal of leading scientific, technical and military organizations in the United States, Japan, China and the European Union. In an increasingly data-driven world, each of these entities saw supercomputing prowess as a symbol of national economic competitiveness.

The very first to achieve this milestone was IBM’s Roadrunner supercomputer in 2008—so named for the state bird of New Mexico, where the client, Los Alamos National Laboratory, is located. Developed at a cost of US$133 million, the Roadrunner machine is used principally to solve classified military problems, helping ensure that the nation’s stockpile of nuclear weapons continues to work correctly as it ages. It can also simulate the behavior of the weapons during the first fraction of a second of an explosion.

The Roadrunner machine was the very first “hybrid” supercomputer, meaning it combines two different processor architectures: the IBM PowerXCell™ 8i chip, an enhanced version of the Cell Broadband Engine™ chip originally developed for video game platforms, and x86 processors from another supplier. The 12,960 PowerXCell processors are used as accelerators, or turbochargers, for portions of calculations.
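A rough way to see why accelerating only portions of the calculations still pays off is Amdahl’s law. The sketch below is a generic illustration of that principle rather than a Roadrunner measurement; the offloaded fraction and the accelerator speedup are hypothetical figures.

```python
# A minimal Amdahl's-law sketch of a hybrid design: part of each job stays on
# the conventional host processors while the rest is offloaded to accelerators.
# Both inputs are hypothetical, chosen only to illustrate the idea.

accelerated_fraction = 0.90  # share of the work the accelerators can take on (assumed)
accelerator_speedup = 20.0   # how much faster that share runs on the accelerators (assumed)

overall_speedup = 1.0 / ((1.0 - accelerated_fraction)
                         + accelerated_fraction / accelerator_speedup)
print(f"Overall speedup: {overall_speedup:.1f}x")  # ~6.9x for these inputs
```

Even a modest unaccelerated remainder caps the overall gain, which is why a hybrid design depends on carefully partitioning the workload between the two processor types.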

Development of Roadrunner began in 2002, and the machine went online in 2006. Given its unique design and complexity, it was constructed in three phases. Assembled and tested at the IBM facility in Poughkeepsie, NY, prior to shipping to the client, the Roadrunner supercomputer’s numerous IBM BladeCenter® racks and cabinets occupied the floor space of a moderately sized warehouse. Delivering the system to Los Alamos eventually required 21 tractor-trailer trucks.

In contrast with most traditional supercomputer designs, however, the Roadrunner machine’s hybrid format sips power, drawing 2.35 megawatts while delivering 437 million calculations per watt. That was half the power required to operate its closest competitor at the time it reached the petaflop milestone.
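That efficiency figure can be checked with one line of arithmetic. The sustained rate used below, roughly 1.026 petaflops, is Roadrunner’s widely reported Linpack result rather than a number given in this article.

```python
# Back-of-the-envelope check of the calculations-per-watt figure.
# The sustained rate is an assumption (Roadrunner's widely reported Linpack
# result); the article itself cites only the petaflop milestone and 2.35 MW.

sustained_flops = 1.026e15  # ~1.026 petaflops sustained
power_watts = 2.35e6        # 2.35 megawatts

flops_per_watt = sustained_flops / power_watts
print(f"{flops_per_watt / 1e6:.0f} million calculations per watt")  # prints 437
```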

Other supercomputers built by IBM and others have since hit the petaflop mark, but the Roadrunner supercomputer was the first.

Now the race is on to exascale computing, a thousand-fold increase over the petaflop, to be achieved using new technologies such as light pulses and carbon nanotubes to move beyond today’s chips and interconnects.

Recent announcements from IBM Research around CMOS Integrated Silicon Nanophotonics may help IBM take the lead in that race. This new exascale technology integrates electrical and optical devices on the same piece of silicon, enabling computer chips to communicate using pulses of light—instead of electrical signals—resulting in smaller, faster and more power-efficient chips than is possible with conventional technologies.

Integrating optical devices and functions directly onto a silicon chip, enabling ten times the current processing power, will rewrite the rules and limitations concerning processing power and speed. A new generation of high-performance computing is being born, with IBM once again breaking new ground at the intersection of science and business.

 

The Team

Selected team members who contributed to this Icon of Progress:

  • Don Grice – Chief Engineer, Scalable Parallel Supercomputers
  • Andrew Schram – Project Executive
  • Adam Emerich – Hardware Integration Lead
  • John Gunnels – Hybrid Linpack Benchmark Development
  • Pat McCarthy – Performance Team Lead
  • Mike Kistler – Hybrid Linpack Benchmark Team Lead
  • Dan Brokenshire – Programming Standards and Language Extensions
  • Chris Engel – System Integration Team
  • Brad Benton – Roadrunner Performance Optimization
  • Bill Brandmeyer – System Integration
  • Camillie Mann – System Integration
  • Peter Keller – System Integration
  • Jeff Fritzjunker – Hybrid Server Design
  • Phil Grady – Hybrid Server Design
  • Bob Lytle – Manufacturing
  • Dale Nelson – Procurement
  • Prashant Manikal – Project Management
  • Cait Crawford – Leadership