Cramming More Power Into a Quantum Device
March 4, 2019  Written by: Jay Gambetta and Sarah Sheldon
Categorized: Quantum Computing | Thomas J Watson Research Center
Earlier this year at CES, we unveiled an industry milestone: the IBM Q System One, the world’s first integrated universal approximate quantum computing system designed for commercial use. It takes this potentially industry-changing technology out of the traditional laboratory setting and into the cloud data center. It is both the most technically advanced and the highest-performing quantum system we at IBM Research have ever built.
As we progress in the era of quantum computing, system performance will be key in achieving “Quantum Advantage” — when we can definitively demonstrate, in certain use cases, a significant performance advantage over today’s classical computers. By “significant,” we mean that a quantum computation is either hundreds or thousands of times faster than a classical computation, or needs a smaller fraction of the memory required by a classical computer, or makes something possible that simply isn’t possible now with a classical computer.
We have benchmarked IBM Q System One in detail and are now pleased to report some performance numbers in the context of our IBM Q Network systems “Tokyo” and “Poughkeepsie” and the publicly available IBM Q Experience system “Tenerife.”
The performance of a particular quantum computer can be characterized on two levels: metrics associated with the underlying qubits in the chip (what we call the “quantum device”) and overall full-system performance.
This table compares fundamental metrics of the quantum devices in four recent IBM Q systems:
| Metric | Statistic | Tenerife (IBM Q Experience) | Tokyo (IBM Q Network) | Poughkeepsie (IBM Q Network) | IBM Q System One (in preparation for the IBM Q Network) |
|---|---|---|---|---|---|
| Relaxation time T1 (µs) | mean | 51.1 | 84.3 | 73.2 | 73.9 |
| | best | 57.7 | 148.5 | 123.3 | 132.9 |
| | worst | 42.3 | 42.2 | 39.4 | 38.2 |
| Dephasing time T2 (µs) | mean | 25.9 | 49.6 | 66.2 | 69.1 |
| | best | 40.2 | 78.4 | 123.6 | 100.8 |
| | worst | 10.6 | 24.3 | 10.8 | 39.2 |
| Two-qubit (CNOT) error rate (×10⁻²) | mean | 4.02 | 2.84 | 2.25 | 1.69 |
| | best (lowest) | 2.24 | 1.47 | 1.11 | 0.97 |
| | worst | 5.76 | 7.12 | 6.61 | 2.85 |
| Single-qubit error rate (×10⁻³) | mean | 1.65 | 1.99 | 1.07 | 0.41 |
| | best (lowest) | 0.69 | 0.64 | 0.52 | 0.19 |
| | worst | 3.44 | 6.09 | 2.77 | 0.82 |
IBM Q System One’s performance is reflected in some of the best (that is, lowest) error rates we have ever measured. The average two-qubit gate error is less than two percent, and the best gate’s error rate is below one percent.
Our devices are close to being fundamentally limited by coherence times, which for IBM Q System One average 73 microseconds.
The mean two-qubit error rate is within a factor of two (×1.68) of the coherence limit, the theoretical lower bound set by the qubit T1 and T2 (74 μs and 69 μs on average for IBM Q System One). This indicates that the errors induced by our controls are quite small, and that we are achieving close to the best possible qubit fidelities on this device.
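How close a gate sits to the coherence limit can be estimated with a simple decay model. The sketch below is a rough back-of-the-envelope in Python, assuming a hypothetical 300 ns two-qubit gate duration (the post does not quote gate times) and a first-order model in which each qubit accrues an error of t_gate·(1/T1 + 1/T2)/2 during the gate:

```python
# Rough estimate of the coherence-limited two-qubit gate error.
# Assumptions (not from the post): a 300 ns gate duration, and a simple
# first-order decay model where each participating qubit contributes an
# error of t_gate * (1/T1 + 1/T2) / 2 over the course of the gate.

T1 = 74e-6       # mean relaxation time in seconds (IBM Q System One)
T2 = 69e-6       # mean dephasing time in seconds (IBM Q System One)
t_gate = 300e-9  # hypothetical two-qubit gate duration in seconds

per_qubit_error = t_gate * (1.0 / T1 + 1.0 / T2) / 2.0
coherence_limit = 2 * per_qubit_error  # both qubits decohere during a CNOT

print(f"estimated coherence-limited CNOT error: {coherence_limit:.2%}")
```

With these assumed numbers the estimate lands near 0.8 percent, the same order as the roughly one percent limit implied by the reported ×1.68 factor; the exact value depends on the true gate duration and on the decay model used.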
To move beyond these device-level measurements, we developed Quantum Volume, a full-system performance metric that accounts for gate and measurement errors, device crosstalk and connectivity, and circuit compiler efficiency.
To achieve Quantum Advantage in the 2020s, we need to at least double Quantum Volume every year.
The Quantum Volume of our five-qubit device Tenerife, first made available through the IBM Q Experience quantum cloud service in 2017, is 4. Current IBM Q 20-qubit premium devices have a Quantum Volume of 8. Our latest results indicate that IBM Q System One’s performance is just over the threshold for a Quantum Volume of 16. The IBM Q team has thus doubled Quantum Volume annually since 2017.
This establishes a roadmap for quantum systems that double in power year over year, as measured by Quantum Volume.
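Under that roadmap, projected Quantum Volume is a simple geometric sequence. A minimal sketch, taking the post’s reported values (a Quantum Volume of 4 in 2017, doubling every year) as the baseline:

```python
def projected_quantum_volume(year, base_year=2017, base_qv=4):
    """Project Quantum Volume assuming it doubles every year from a
    baseline of 4 in 2017, as reported in this post."""
    if year < base_year:
        raise ValueError("projection starts at the 2017 baseline")
    return base_qv * 2 ** (year - base_year)

# Matches the systems described above: Tenerife (2017) = 4,
# the 20-qubit premium devices (2018) = 8, IBM Q System One (2019) = 16.
for y in (2017, 2018, 2019):
    print(y, projected_quantum_volume(y))
```

Extrapolating the same doubling through the 2020s gives a sense of the scale required for Quantum Advantage: by 2025 this projection already exceeds 1,000.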
Interestingly, you can compare the graph above with the one in Gordon Moore’s “Cramming more components onto integrated circuits,” Electronics, Volume 38, Number 8, April 19, 1965 (below):
To achieve error rates of 0.01 percent, we will need to improve our coherence times to roughly 15 milliseconds: a long path, with many exciting challenges to achieving this in a quantum system. Alongside this system roadmap, we are studying the fundamental physics of these devices, and have measured individual superconducting transmon qubit T1 relaxation times as long as 0.5 milliseconds (500 microseconds, a quality factor of 15 million), suggesting we have not yet hit a fundamental materials ceiling.
0.5 millisecond qubit relaxation time, experimental device
While Quantum Volume is useful as a single number characterizing overall device performance, additional metrics, such as measuring how entangled the qubits on a device can be made, let us extract more information about system performance.
A simple metric of multi-qubit entanglement is state tomography (the process by which an ensemble of identically prepared, unknown quantum states is completely characterized) of n-qubit Greenberger-Horne-Zeilinger (GHZ) states, such as the 4-qubit state (|0000⟩ + |1111⟩)/√2.
We prepare the GHZ state and, by projecting the individual qubits onto different measurement bases, reconstruct the state we actually created. The metric is then the fidelity of the experimentally implemented state with respect to the target state.
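In circuit form, an n-qubit GHZ state is prepared with a single Hadamard gate followed by a chain of CNOTs. The sketch below simulates that 4-qubit circuit directly on a statevector with NumPy; it is a from-scratch illustration of the ideal (noise-free) preparation, not the hardware tomography workflow described here:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate; qubit 0 is the most significant bit."""
    full = np.array([[1.0]])
    for k in range(n):
        full = np.kron(full, gate if k == qubit else I2)
    return full @ state

def apply_cnot(state, control, target, n):
    """Flip the target bit of every basis state whose control bit is 1."""
    new = np.zeros_like(state)
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i ^ (1 << (n - 1 - target))] += state[i]
        else:
            new[i] += state[i]
    return new

n = 4
state = np.zeros(2 ** n)
state[0] = 1.0                      # start in |0000>
state = apply_1q(state, H, 0, n)    # Hadamard on qubit 0
for q in range(n - 1):
    state = apply_cnot(state, q, q + 1, n)  # CNOT chain down the register

# Ideal target: (|0000> + |1111>) / sqrt(2)
target = np.zeros(2 ** n)
target[0] = target[-1] = 1 / np.sqrt(2)
fidelity = abs(target @ state) ** 2
print(f"GHZ fidelity (ideal simulation): {fidelity:.3f}")
```

In this noiseless simulation the fidelity is exactly 1; the experimentally measured fidelities below are lower precisely because of the gate and measurement errors the post discusses.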
State tomography is sensitive to measurement errors, so without techniques to remove their effect, our reconstructed 4-qubit GHZ state has a fidelity of 0.66 and can be depicted visually as a density matrix like this:
4-qubit GHZ state tomography, fidelity = 0.66
Fortunately, we can mitigate these errors by taking additional calibration measurements, using them to characterize and invert the measurement errors, and applying the resulting correction to the tomography data. The same data with measurement error mitigation has a fidelity of 0.98. Note that this value is quoted without error bars, which would capture both statistical noise and systematic errors in state preparation and measurement.
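The idea behind this mitigation can be illustrated with a small linear-algebra sketch. The readout error rates below are hypothetical (not the device's calibration data): prepare each computational basis state, record a confusion matrix M of measured-versus-prepared outcome probabilities, then invert M to correct any measured distribution.

```python
import numpy as np

# Hypothetical single-qubit readout error rates (illustrative only):
p10 = 0.05  # P(read 1 | prepared 0)
p01 = 0.10  # P(read 0 | prepared 1)

# Single-qubit confusion matrix: column i is the outcome distribution
# when basis state i is prepared. In practice this comes from
# dedicated calibration runs on the device.
M1 = np.array([[1 - p10, p01],
               [p10, 1 - p01]])

# For two qubits with independent readout errors, the confusion matrix
# is the tensor product of the single-qubit matrices.
M = np.kron(M1, M1)

# Ideal 2-qubit distribution: half |00>, half |11> (a GHZ-like signature).
p_true = np.array([0.5, 0.0, 0.0, 0.5])

# What the noisy readout would actually report.
p_measured = M @ p_true

# Mitigation: solve M @ p = p_measured for the underlying distribution.
p_mitigated = np.linalg.solve(M, p_measured)

print("measured :", np.round(p_measured, 4))
print("mitigated:", np.round(p_mitigated, 4))
```

Real calibration matrices are estimated from finite samples, so a direct inversion can amplify statistical noise or produce slightly negative probabilities; mitigation tools such as the one in Qiskit Ignis typically also offer constrained least-squares variants to keep the corrected distribution physical.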
Qiskit Ignis is a framework for understanding and mitigating noise in quantum circuits and devices, and is part of Qiskit, IBM’s open source quantum development kit. Measurement error mitigation is included in Qiskit Ignis.
4-qubit GHZ state tomography with measurement error mitigation, fidelity = 0.98
We also have preliminary measurements on IBM Q System One showing genuine entanglement across as many as 18 qubits.
These preliminary results, along with our improvements in Quantum Volume and measurement error mitigation techniques and our work on new fast, high-fidelity qubit measurements with conditional operations, will be presented at the 2019 American Physical Society March Meeting in Boston.
As you can see, the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing is an exciting time on all fronts: hardware, software, physics, and benchmarking. There is still much to investigate and apply to continue improving Quantum Volume on real systems. We plan to make quantum computing systems with this level of performance available in the second half of 2019, upon opening our new quantum computation center in Poughkeepsie, NY.
In 1965, Gordon Moore said, “The future of integrated electronics is the future of electronics itself.” We now believe the future of quantum computing to be the future of computing itself.
Jay Gambetta, IBM Fellow and Vice President, IBM Quantum
Sarah Sheldon, Research Staff Member, Experimental Quantum Computing, IBM Quantum