Quantum Computing

Quantum Takes Flight: Moving from Laboratory Demonstrations to Building Systems


Last year we declared that, in order to achieve quantum advantage within the next decade, we would need to at least double the Quantum Volume of our quantum computing systems every year. What better way to start 2020 than by announcing our fourth data point: a new 28-qubit backend, Raleigh, added to our progress roadmap, with a demonstrated Quantum Volume of 32.
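
To make the doubling cadence concrete, here is our own extrapolation of the trajectory this growth law implies (an illustration, not a roadmap commitment):

```python
# Extrapolating the "double Quantum Volume every year" cadence from the
# data points in this post (QV 32 in 2020). Purely illustrative arithmetic.
qv, year = 32, 2020
for _ in range(10):
    year += 1
    qv *= 2
    print(year, qv)
# By 2030 this cadence implies QV 32768 -- the scale of system performance
# that the "quantum advantage within the decade" goal is betting on.
```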

Quantum Volume progress

Quantum Volume (QV) is a hardware-agnostic metric that we defined to measure the performance of a real quantum computer. Each system we develop brings us along a path toward complex problems being addressed more efficiently by quantum computing; system benchmarks are therefore crucial, and simply counting qubits is not enough. As we have discussed in the past, Quantum Volume takes into account the number of qubits, connectivity, and gate and measurement errors. Material improvements to the underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and gains in circuit-compiler efficiency, translate into measurable progress in Quantum Volume, as long as all improvements happen at a similar pace.
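
As a rough illustration of how the metric is evaluated, here is a minimal sketch of the Quantum Volume protocol using Qiskit's built-in model circuits. The helper function below is our own illustration, and a rigorous QV claim additionally requires statistical confidence intervals over the trials, which we omit for brevity:

```python
# Minimal sketch of the Quantum Volume protocol (assumes qiskit and qiskit-aer).
import numpy as np
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector
from qiskit_aer import AerSimulator

def heavy_output_probability(num_qubits, trials=50, shots=1000, backend=None):
    """Estimate the mean heavy-output probability of QV model circuits.

    A width-n test passes (QV >= 2**n) when this mean exceeds 2/3
    with high statistical confidence.
    """
    backend = backend or AerSimulator()
    probs = []
    for seed in range(trials):
        # Model circuit: n layers of random SU(4) gates on permuted qubit pairs.
        circ = QuantumVolume(num_qubits, depth=num_qubits, seed=seed)
        # The ideal output distribution defines the "heavy" bitstrings:
        # those whose probability exceeds the median probability.
        ideal = Statevector(circ).probabilities_dict()
        median = np.median(list(ideal.values()))
        heavy = {bits for bits, p in ideal.items() if p > median}
        # Execute the measured circuit (ideal simulation here; on hardware
        # this is the noisy run whose heavy-output fraction degrades).
        meas = circ.copy()
        meas.measure_all()
        counts = backend.run(transpile(meas, backend), shots=shots).result().get_counts()
        probs.append(sum(c for bits, c in counts.items() if bits in heavy) / shots)
    return float(np.mean(probs))

print(heavy_output_probability(4))  # ideally ~0.85; must stay above 2/3 on hardware
```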

Our achievement of QV 32 is significant not just because it is another point on the curve, but because it confirms that quantum systems have matured into a new phase, one in which developmental improvements will drive ever-better experimental quantum computing platforms that enable serious research and bridge toward Quantum Advantage. The past year marked a number of remarkable achievements through which we, as a community, solidly emerged into a phase where quantum computing as a commercial business is no longer far-fetched.

Although there is still a long way to go, in 2019 we saw:

  1. Multiple traditional cloud providers working towards quantum computing services
  2. Multiple 50-qubit-scale systems that push the limits of what can be simulated classically
  3. Multiple physical backend technologies, including trapped ions and superconducting qubits
  4. Published quantum research from leading Fortune 500 companies that were previously 'non-quantum'

Alongside this progress, it is also time for us to demonstrate a commensurate maturation beyond a purely exploratory quantum research phase, and to measure our progress within a roadmap culture for real systems. In the spirit of technological readiness, we must start thinking about quantum research and quantum systems development separately, but in sync with one another. Through a well-defined roadmap, we can observe and track generational progress in usable systems, and escape the myopia of measuring progress through isolated qubit experiments or lab demonstrations destined for glossy journals. While we were excited to see the improvements of our previous quantum systems along the roadmap, the path of our latest system reflects a new level of maturity.

Generational cycles of learning

Since we deployed our first system with five qubits in 2016, we have progressed to a family of 16-qubit systems, 20-qubit systems, and (most recently) the first 53-qubit system. Within these families of systems, roughly demarcated by the number of qubits (internally we code-name the individual systems by city names, and the development threads as different birds), we have chosen a few to drive generations of learning cycles (Canary, Albatross, Penguin, and Hummingbird).

Consider the specific case of our 20-qubit systems (internally referred to as Penguin), shown in the figure below:

[Figure: Distributions of CNOT errors across the deployed 20-qubit (Penguin) systems, by revision]

The plots show the distributions of CNOT errors across all of the 20-qubit systems deployed to date. We can point to four distinct revisions, integrating changes that range from varying the underlying physical device elements to altering the connectivity and coupling configuration of the qubits. Overall, the results are striking and visually beautiful, taking what was a wide distribution of errors down to a narrow set centered around ~1-2% for the Boeblingen system.
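
As a hedged illustration of how such distributions are gathered, the two-qubit gate errors a device reports in its calibration data can be collected with Qiskit; the service and backend names below are placeholders of our own, not from this post:

```python
# Illustrative sketch: collect the two-qubit gate-error distribution that
# plots like the one above are built from. Assumes qiskit-ibm-runtime and
# a saved IBM Quantum account; the backend name is a placeholder.
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService()
backend = service.backend("ibm_sherbrooke")  # placeholder device name
props = backend.properties()

# Two-qubit entangling gates are reported per directed qubit pair; the gate
# name varies by device generation ("cx" on older devices, "ecr" on newer).
two_qubit_errors = sorted(
    props.gate_error(g.gate, g.qubits)
    for g in props.gates
    if g.gate in ("cx", "ecr")
)
print(f"median two-qubit error: {two_qubit_errors[len(two_qubit_errors) // 2]:.3%}")
```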

Looking back at the original 5-qubit systems (called Canary), we can also see significant learning driven into the devices.

[Figure: IBM Q Canary (5-qubit) systems]

As we have just released the first generation of our 53-qubit system (from our Hummingbird family), the error distributions are still rather broad, and we anticipate successive revisions to improve its performance relatively quickly. Much like the days of CMOS, where new technology nodes often brought in a large set of new research features and capabilities, each new bird family exhibits a developmental-roadmap mentality with overlapping cycles of learning. Furthermore, it is critical that all these generations of learning compound with cutting-edge, agile research.

[Figure: IBM Q Rochester, the first 53-qubit system]

Research feeds Development

Turning the crank of the roadmap with an eye towards details like reliability and reproducibility is one thing. However, the more critical part of the hardware effort is the research that feeds in at every juncture. An agile framework is essential for an effort of this magnitude to be successful.

There is a need for learning (and thus failing) fast! On the device front, for example, we have observed tremendous progress on gate errors and their spread, on coherence improvements, and on crosstalk reduction. All of these aspects need to work together optimally to get the highest possible performance and reliability from the device. The gateway to development leans heavily on fundamental research.

Evolution of the lattice connectivity and design, for example, has had a strong impact on our gate errors and our exposure to crosstalk. From a control hardware and infrastructure perspective, better cryogenic components, control electronics, and quantum-limited amplifiers all require further research as well.
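
To make the connectivity point concrete, here is a small sketch of our own (not from this post) showing how the coupling map alone changes the two-qubit-gate overhead the transpiler must pay in SWAPs:

```python
# Hedged illustration: the same circuit routed onto two different qubit
# connectivities. Sparser lattices force extra SWAPs, inflating the
# two-qubit gate count and hence the accumulated error.
from qiskit import transpile
from qiskit.circuit.library import QFT
from qiskit.transpiler import CouplingMap

circ = QFT(6).decompose()  # all-to-all two-qubit interactions before routing

for name, cmap in [("line", CouplingMap.from_line(6)),
                   ("ring", CouplingMap.from_ring(6))]:
    routed = transpile(circ, coupling_map=cmap,
                       basis_gates=["cx", "rz", "sx", "x"],
                       optimization_level=1, seed_transpiler=7)
    print(f"{name}: {routed.count_ops().get('cx', 0)} CX gates after routing")
```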

All of these advances, of course, need to happen dynamically within a research environment, and be vetted and validated before entering the roadmap for deployment. Typically, as systems grow, previous solutions break down and require new refinements. Thus, progress makes its mark at different levels of development, in the lab and in the cloud.

Introducing Raleigh

To hit our latest Quantum Volume milestone, we combined elements of learning developed along the generational development threads with new ideas from research. Last year we demonstrated advances in single-qubit coherence, pushing the quality factor above 10 million on isolated devices. Through iteration and testing, we began to implement similar techniques with our most advanced integration structures in the larger deployment devices.
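
For context, a qubit's quality factor is commonly defined as Q = 2π·f01·T1, where f01 is the qubit transition frequency and T1 the relaxation time. The back-of-the-envelope arithmetic below, using an illustrative transmon frequency we are assuming rather than quoting, shows that the 10-million regime corresponds to T1 times of a few hundred microseconds:

```python
# Rough quality-factor arithmetic, assuming the common definition
# Q = 2 * pi * f_01 * T1 (the frequency value below is illustrative).
import math

f_01 = 5.0e9   # Hz: typical transmon transition frequency (assumed)
T1 = 320e-6    # s: relaxation time in the regime discussed above

Q = 2 * math.pi * f_01 * T1
print(f"Q = {Q:.2e}")  # ~1.0e7, i.e. the ">10 million" regime
```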

Raleigh, our newest 28-qubit backend in the Falcon family, follows the hexagonal lattice structure of the 53-qubit Rochester. Along with some of the upgrades we have been building into the later-generation Penguin devices, it carries us across the QV 32 threshold for the first time. It is one more step on the curve, and an optimistic affirmation that we have a roadmap toward success for all to follow.

Furthermore, we are excited that there is headroom for this progress to continue. Comparing at the single-qubit level using quality factor, we can see how Raleigh stacks up against the Penguin backends, as well as our coherence development systems. Raleigh improves upon the coherence of some earlier devices, and we also have promising new directions and processes under test that we have just begun to explore on a new development system.

[Figure: Quality factor comparison of Raleigh, Penguin backends, and coherence development systems]

The decade ahead

If we look back in time, we can demarcate the evolution of quantum computing by decade:

  • 1990s: fundamental theoretical concepts showed the potential of quantum computing
  • 2000s: experiments with qubits and multi-qubit gates demonstrated that quantum computing could be possible
  • 2010s: evolution from gates to architectures and cloud access revealed a path to real demand for quantum computing systems

So where does that put us in the 2020s? The next ten years will be the decade of quantum systems, and of the emergence of a real hardware ecosystem that provides the foundation for improving coherence, gates, stability, cryogenic components, integration, and packaging. Only with a systems-development mindset will we, as a community, see quantum advantage in the 2020s.

 


Jerry Chow, Director of Quantum Hardware System Development, IBM Quantum

Jay Gambetta, IBM Fellow and Vice President, IBM Quantum
