Quantum Computing

Quantum Takes Flight: Moving from Laboratory Demonstrations to Building Systems

Last year we declared that in order to achieve quantum advantage within the next decade, we would need to at least double the Quantum Volume of our quantum computing systems every year. What better way to start 2020 than by announcing that we have added our fourth data point to our progress roadmap: a new 28-qubit backend, Raleigh, and a system demonstrating a Quantum Volume of 32.

Quantum Volume progress

Quantum Volume (QV) is a hardware-agnostic metric that we defined to measure the performance of a real quantum computer. Each system we develop brings us along a path where complex problems will be more efficiently addressed by quantum computing; the need for system benchmarks is therefore crucial, and simply counting qubits is not enough. As we have discussed in the past, Quantum Volume takes into account the number of qubits, connectivity, and gate and measurement errors. Improvements to the underlying physical hardware, such as increases in coherence times and reductions in device crosstalk, together with gains in software circuit-compiler efficiency, can translate into measurable progress in Quantum Volume, as long as all improvements happen at a similar pace.
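As a rough sketch of the arithmetic behind the metric: QV is reported as a power of two, 2^n, where n is the size of the largest "square" model circuit (n qubits, depth n) the system runs successfully; in the published QV protocol, "successfully" means the heavy-output probability exceeds 2/3. The mapping of years to sizes below is illustrative, our reading of the four data points and the yearly-doubling goal, not an official table.

```python
# Illustrative Quantum Volume bookkeeping (not the full benchmark):
# QV = 2**n, where n is the size of the largest "square" model circuit
# (n qubits, depth n) the system runs with heavy-output probability > 2/3.

def quantum_volume(n_square: int) -> int:
    """QV for the largest passing n-qubit, depth-n model circuit."""
    return 2 ** n_square

# Doubling QV every year corresponds to growing the passing square size
# by one qubit/layer per year (the year-to-size mapping is illustrative).
for year, n in [(2017, 2), (2018, 3), (2019, 4), (2020, 5)]:
    print(year, quantum_volume(n))  # 2020 -> 32
```

Note how unforgiving the exponential is: adding one more qubit's worth of passing depth each year is what doubles QV, and every error source must improve together to keep the square circuit passing.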

Our achievement of QV 32 is significant not just because it is another point on the curve, but because it confirms that quantum systems have matured into a new phase, one in which developmental improvements will drive ever-better experimental quantum computing platforms that enable serious research and bridge toward Quantum Advantage. The past year marked a number of remarkable achievements: we, as a community, solidly emerged into a phase where quantum computing as a commercial business is no longer far-fetched.

Although there is still a long way to go, in 2019 we saw:

  1. Multiple traditional cloud providers working towards quantum computing services
  2. Multiple 50-qubit systems that push the limits of what can be simulated
  3. Multiple physical backend systems, including trapped-ions and superconducting qubits
  4. Published quantum research from leading-edge Fortune 500 companies previously considered ‘non-quantum’

Alongside this progress, it is also time for us to demonstrate a commensurate maturation out of a purely exploratory quantum research phase, and measure our progress within a roadmap culture for real systems. In the spirit of technological readiness, we must start thinking about quantum research and quantum systems development separately, but in sync with one another. Through a well-defined roadmap, we can observe and track generational progress in usable systems, and escape the myopia of trying to measure progress through isolated qubit experiments or lab demonstrations for glossy journals. While we were excited to see the improvements of our previous quantum systems along the roadmap, the path of our latest system reflects a new level of maturity.

Generational cycles of learning

Since we deployed our first system with five qubits in 2016, we have progressed to a family of 16-qubit systems, 20-qubit systems, and (most recently) the first 53-qubit system. Within these families of systems, roughly demarcated by the number of qubits (internally we code-name the individual systems by city names, and the development threads as different birds), we have chosen a few to drive generations of learning cycles (Canary, Albatross, Penguin, and Hummingbird).

We can look at the specific case of our 20-qubit systems (internally referred to as Penguin), shown in this figure:

[Figure: CNOT error distributions across the deployed 20-qubit Penguin systems]

Shown in the plots are the distributions of CNOT errors across all of the 20-qubit systems deployed to date. We can point to four distinct revisions that we have integrated into these systems, from varying the underlying physical device elements to altering the connectivity and coupling configuration of the qubits. Overall, the results are striking and visually beautiful, taking what was a wide distribution of errors down to a narrow set, all centered around ~1-2% for the Boeblingen system.

Looking back at the original 5-qubit systems (called Canary), we are also able to see significant learning driven into the devices.
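To see why narrowing the error distribution matters so much, consider a back-of-the-envelope model (the numbers here are hypothetical, not measured data): if each two-qubit gate fails independently with probability p, a circuit with N such gates succeeds with probability roughly (1 - p)^N.

```python
# Back-of-the-envelope model (hypothetical numbers, not measured data):
# if each two-qubit gate fails independently with probability p, a circuit
# with N such gates succeeds with probability roughly (1 - p)**N.

def circuit_success(p_gate: float, n_gates: int) -> float:
    """Crude success estimate under independent per-gate errors."""
    return (1.0 - p_gate) ** n_gates

# Moving from a wide distribution (tail near 5%) to a narrow ~1-2% band
# makes a dramatic difference even for a modest 50-gate circuit.
for p in (0.05, 0.02, 0.01):
    print(f"p = {p:.0%}: success ~ {circuit_success(p, 50):.2f}")
```

The exponential dependence on N is also why the worst gate in a distribution, not the average, often limits which circuits a device can run.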

[Figure: IBM Q Canary systems]

As we have just released the first generation of our 53-qubit system (from our Hummingbird family), the error distributions are rather broad, and we anticipate that successive revisions will improve its performance relatively quickly. Much like the days of CMOS, where new technology nodes often bring in a large set of new research features and capabilities, each new bird family exhibits a developmental-roadmap mentality with overlapping cycles of learning. Furthermore, it is critical that all these generations of learning compound with cutting-edge, agile research.

[Figure: IBM Q Rochester]

Research feeds Development

Turning the crank of the roadmap with an eye towards details like reliability and reproducibility is one thing. However, the more critical part of the hardware effort is the research that feeds in at every juncture. An agile framework is essential for an effort of this magnitude to be successful.

There is a need for learning (and thus failing) fast! On the device front, for example, we have observed tremendous progress on both gate errors and their spread; on coherence improvements; and on crosstalk reduction. All these aspects need to work together optimally to get the highest possible performance and reliability from the device. The gateway to development leans heavily on fundamental research.

Evolution of the lattice connectivity and design, for example, has had a strong impact on our gate errors and exposure to crosstalk. From a control hardware and infrastructure perspective, better cryogenic components, control electronics, and quantum-limited amplifiers all require further research as well.

All of these advances, of course, need to happen dynamically within a research environment, and be vetted and validated before entering the roadmap for deployment. Typically, as systems grow, previous solutions may break down and require new refinements. Thus, progress makes its mark at different levels of development, in the lab and in the cloud.

Introducing Raleigh

To hit our latest Quantum Volume milestone, we combined elements of learning developed along the generational development threads with new ideas from research. Last year we demonstrated advances in single-qubit coherence, pushing quality factors greater than 10 million on isolated devices. Through iteration and testing, we started to implement similar techniques with our most advanced integration structures in the larger deployment devices.
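For a sense of scale, here is a minimal sketch assuming the conventional definition of a qubit's quality factor, Q = omega * T1 = 2 * pi * f * T1 (coherence measured in radian periods). The frequency and T1 values below are hypothetical, chosen only to show what the >10 million regime implies.

```python
import math

# Sketch of the single-qubit quality factor, assuming the conventional
# definition Q = omega * T1 = 2 * pi * f * T1. The numbers below are
# hypothetical, chosen to illustrate the >10 million regime.

def quality_factor(freq_hz: float, t1_s: float) -> float:
    """Quality factor from qubit frequency (Hz) and relaxation time (s)."""
    return 2.0 * math.pi * freq_hz * t1_s

q = quality_factor(5.0e9, 320e-6)   # e.g. a 5 GHz qubit with T1 = 320 us
print(f"Q ~ {q:.2e}")               # crosses the 1e7 mark
```

Expressing coherence as a quality factor rather than a raw T1 makes devices at different operating frequencies directly comparable, which is what lets the roadmap plot Penguin, Falcon, and development systems on one axis.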

Raleigh, our newest 28-qubit backend in the Falcon family, follows the hexagonal-lattice structure of the 53-qubit Rochester. Along with some of the upgrades we have been building into the later-generation Penguin devices, it carries us across the QV 32 threshold for the first time. It is one more step on the curve, and an optimistic affirmation that we have a roadmap toward success for all to follow.

Furthermore, we are excited that there is room for this to continue. Comparing at the level of individual qubits using quality factor, we can see how Raleigh stacks up against the Penguin backends, as well as our coherence-development systems. With Raleigh we have improved upon the coherence of some early devices, but we also have promising new directions and processes under test that we have just begun to explore on a new development-system device.

[Figure: IBM Q quality factor graph]

The decade ahead

If we look back in time, we can demarcate the evolution of quantum computing by decade:

  • 1990s: fundamental theoretical concepts showed the potential of quantum computing
  • 2000s: experiments with qubits and multi-qubit gates demonstrated that quantum computing could be possible
  • 2010s, the decade we just completed: evolution from gates to architectures and cloud access, revealing a path to real demand for quantum computing systems

So where does that put us with the 2020s? The next ten years will be the decade of quantum systems, and of the emergence of a real hardware ecosystem that will provide the foundation for improving coherence, gates, stability, cryogenic components, integration, and packaging. Only with a systems-development mindset will we as a community see quantum advantage in the 2020s.



Director of Quantum Hardware System Development, IBM Quantum

Jay Gambetta

IBM Fellow and Vice President, IBM Quantum
