Errors occur naturally in any computer. In a quantum computer, the quantum state should evolve as prescribed by the quantum circuit being executed. However, the actual quantum state and quantum bits may evolve differently, causing errors in the calculation, due to various unavoidable disturbances in the outside environment or in the hardware itself, disturbances which we call noise. But quantum bit errors are more complex than classical bit errors. Not only can the qubit's zero or one value change, but qubits also come with a phase, kind of like a direction that they point. We need to find a way to handle both of these kinds of errors at each level of the system: by improving our control of the computational hardware itself, and by building redundancy into the hardware so that even if one or a few qubits fail, we can still retrieve an accurate result for our calculations.

There are several different ways we handle these errors, but the terminology can get confusing, and even within the field there's disagreement about exactly what each of these terms means. We can break error handling into three core pieces, each with its own research and development considerations: error suppression, error mitigation, and error correction. Note that the differences are subtle and not fully standardized, especially between suppression and mitigation.

## Error suppression

Error suppression is the most basic level of error handling. It refers to techniques that use knowledge of the undesirable effects to anticipate and avoid their potential impact. Most often, when we talk about error suppression, we're talking about handling errors at the level closest to the hardware. These techniques often go unnoticed by the user, and typically consist of altering or adding control signals to ensure that the processor returns the desired result.

Error suppression techniques date back decades, and were developed for some of the first controllable quantum systems, such as the nuclear magnetic resonance (NMR) devices at the heart of magnetic resonance imaging (MRI). Quantum computers have adopted some of these techniques, such as the spin echo — a sequence of pulses that re-focuses a qubit and helps it maintain its quantum state longer. Spin echo is part of a class of techniques known as dynamical decoupling, which sends pulse sequences to idle qubits to cancel out unwanted interactions, essentially undoing any potential effects from nearby qubits that are being used in the calculation.
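The refocusing idea behind the spin echo can be seen in a few lines of linear algebra. The sketch below (illustrative, not how a real device is controlled) models an idle qubit drifting under an unknown detuning: without an echo, the qubit's phase drifts; with a pi pulse inserted halfway through, the phase accumulated in the second half cancels the first half.

```python
import numpy as np

# Pauli X (a pi pulse) and a Z-axis rotation (free precession under a detuning).
X = np.array([[0, 1], [1, 0]], dtype=complex)

def rz(phi):
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+> state

def fidelity(state):
    """Overlap with the intended |+> state."""
    return abs(np.vdot(plus, state)) ** 2

delta = 1.7  # total unknown phase the qubit would accumulate while idle

# Without an echo: the qubit dephases by the full amount.
no_echo = rz(delta) @ plus

# Spin echo: a pi pulse halfway through flips the qubit, so the phase
# accumulated in the second half cancels the phase from the first half.
echo = rz(delta / 2) @ X @ rz(delta / 2) @ plus

print(round(fidelity(no_echo), 4))  # < 1: the state has drifted
print(round(fidelity(echo), 4))     # 1.0: the phase is refocused
```

The echoed fidelity is 1 regardless of the value of `delta`, which is exactly why the technique works even though the detuning is unknown.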

Derivative Removal by Adiabatic Gate (DRAG) adds a component to the standard pulse shape to reduce the chance of qubits leaking into states higher than the 0 and 1 states we use for calculations. There are numerous other techniques, developed over decades, that we're researching and implementing in our own hardware where it makes sense. Qiskit Pulse allows users to generate custom pulses in order to explore error suppression on their own. We encourage you to try it out.
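The "added component" in DRAG is a quadrature term proportional to the derivative of the base envelope. Here is a minimal numpy sketch of that construction; the Gaussian base shape is standard, but the specific parameters (`sigma`, `beta`) are illustrative choices, not calibrated values.

```python
import numpy as np

def drag_envelope(t, sigma, t0, beta):
    """Gaussian in-phase envelope plus a derivative-shaped quadrature."""
    gauss = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
    d_gauss = -(t - t0) / sigma ** 2 * gauss  # analytic derivative
    # The real part drives the intended transition; the imaginary (quadrature)
    # part suppresses leakage to higher states.
    return gauss + 1j * beta * d_gauss

t = np.linspace(0, 100, 201)  # time samples (arbitrary units)
pulse = drag_envelope(t, sigma=15.0, t0=50.0, beta=0.5)

# The in-phase part peaks at the pulse center; the quadrature crosses zero there.
center = len(t) // 2
print(round(pulse.real[center], 3))  # 1.0
print(round(pulse.imag[center], 3))  # 0.0
```

In practice the weight `beta` of the derivative term is tuned to the hardware's anharmonicity; here it is just a placeholder value.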

## Error mitigation

Error mitigation, meanwhile, uses the outputs of ensembles of circuits to reduce or eliminate the effect of noise when estimating expectation values. We see error mitigation as the key to realizing useful quantum computers in the near term.


Our team is exploring and developing a portfolio of error mitigation techniques. Probabilistic error cancellation, for example, samples from a collection of circuits that, on average, mimics a noise-inverting channel to cancel out the noise. The process is a bit like how noise-cancelling headphones work, except that it cancels noise on average rather than shot by shot. Zero-noise extrapolation (ZNE) reduces the noise affecting the outcome of a noisy quantum circuit by running the circuit at several amplified noise strengths and extrapolating the measurement outcomes back to the zero-noise limit. Other methods, like M3 and Twirled Readout Error eXtinction (TREX), focus specifically on reducing the noise of quantum measurement.
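The extrapolation step in ZNE is itself simple. A minimal sketch, with made-up expectation values standing in for real circuit runs at deliberately amplified noise levels:

```python
import numpy as np

# Measured expectation values of some observable at amplified noise levels.
# These numbers are illustrative; in practice they come from running the
# circuit with the noise deliberately scaled up (e.g. by gate folding).
scale_factors = np.array([1.0, 2.0, 3.0])
noisy_values = np.array([0.81, 0.66, 0.53])

# Fit a polynomial in the noise scale and evaluate it at zero noise.
coeffs = np.polyfit(scale_factors, noisy_values, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(round(zero_noise_estimate, 3))  # ≈ 0.947, above every measured value
```

Higher-degree or exponential fits are also used; the choice of extrapolation model is part of the method's accuracy/overhead trade-off.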

It's important to note that each of these methods comes with its own overhead, as well as its own level of accuracy. For example, the most powerful of these techniques come with an exponential overhead: the time they take to run increases exponentially with the size of the problem, defined by the number of qubits and the circuit depth. With a portfolio of methods, users can choose the technique that makes the most sense for their problem based on their accuracy demands and how much overhead they're willing to accept.

We're excited about error mitigation because the most powerful of these techniques allow us to calculate noise-free (unbiased) expectation values. Expectation values can encode a variety of properties or problems. For instance, they can encode magnetization or correlation functions of spin systems, energies of molecular configurations, or cost functions. But they typically don't represent probabilities of finding a particular quantum state such as |00110⟩. We're encouraging the research community and members of the IBM Quantum Network to develop new algorithms and applications that make use of operator estimation as a core primitive.

Despite the overhead, we hope that error mitigation can still be of practical interest for circuits with qubit counts in the hundreds and comparable depth. Beyond several hundred qubits, we envision hybrid techniques that combine quantum error correction and error mitigation.

## Error correction

Error correction is how we hope to achieve our ultimate goal: fault-tolerant quantum computation, where we build up redundancies so that even if a few qubits experience errors, the system will still return accurate answers for whatever we try running on the processor. Error correction is a standard technique in classical computing where information is encoded with redundancy so that checks can be made on whether an error has occurred.

Quantum error correction is the same idea, with the caveat that we have to account for the new types of error mentioned above. We also have to measure the system carefully to avoid collapsing our state. In quantum error correction, we encode single qubit values — called logical qubits — across multiple physical qubits, and implement gates that can treat a fabric of physical qubits as essentially error-free logical qubits. We perform a specific set of operations and measurements, together referred to as an error correction code, to detect and correct errors. Thanks to the threshold theorem, we know there's a maximum physical error rate, called the threshold, that our hardware must get below before error correction becomes effective.
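The simplest example of these ideas is the three-qubit bit-flip repetition code. The numpy sketch below (an illustration, not how real hardware does it) encodes one logical qubit across three physical qubits, injects a bit-flip error, and shows that the two parity checks locate the error without revealing — and so without collapsing — the encoded amplitudes.

```python
import numpy as np

# Single-qubit state alpha|0> + beta|1>, encoded as alpha|000> + beta|111>.
alpha, beta = 0.6, 0.8
encoded = np.zeros(8, dtype=complex)
encoded[0b000] = alpha
encoded[0b111] = beta

def apply_x(state, qubit):
    """Flip `qubit` (0 = leftmost) in a 3-qubit statevector."""
    out = np.zeros_like(state)
    for i in range(8):
        out[i ^ (1 << (2 - qubit))] = state[i]
    return out

def stabilizer(state, pair):
    """Expectation value of Z⊗Z on a pair of qubits (a parity check)."""
    signs = [(-1) ** (((i >> (2 - pair[0])) ^ (i >> (2 - pair[1]))) & 1)
             for i in range(8)]
    return round(sum(s * abs(a) ** 2 for s, a in zip(signs, state)), 6)

# Introduce a bit-flip error on the middle qubit.
corrupted = apply_x(encoded, 1)

# The syndrome (the pair of parity checks) uniquely locates the error,
# while saying nothing about alpha or beta.
syndrome = (stabilizer(corrupted, (0, 1)), stabilizer(corrupted, (1, 2)))
lookup = {(1.0, 1.0): None, (-1.0, 1.0): 0, (-1.0, -1.0): 1, (1.0, -1.0): 2}
flipped = lookup[syndrome]

# Correct by flipping the identified qubit back.
recovered = apply_x(corrupted, flipped) if flipped is not None else corrupted
print(np.allclose(recovered, encoded))  # True
```

Real codes like the surface code generalize this scheme, using many more parity checks to catch both bit-flip and phase errors.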


But error correction is more than just an engineering challenge; it's a physics and mathematics problem. The current leading code, the surface code, requires many physical qubits, O(d²), for each logical qubit, where d is a feature of the code called its distance, which relates to the number of errors that can be corrected. For a QEC code to correct enough errors to achieve fault tolerance, the distance d must be chosen high enough for the code's error correction capabilities to match the error rate of the quantum device. Since current quantum devices are rather noisy, with error rates near 1e-3, the number of qubits required to employ quantum error correction with the surface code is currently unrealistic: too many physical qubits are needed for each logical qubit. To move forward, we need both to reduce the physical error rate of devices, to say 1e-4, and to discover new codes that require far fewer physical qubits.
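To make that O(d²) scaling concrete, here is a back-of-the-envelope sketch using one common parameterization of the rotated surface code — d² data qubits plus d² − 1 measurement qubits per logical qubit, correcting up to ⌊(d − 1)/2⌋ arbitrary single-qubit errors. The exact counts depend on the code variant, so treat these as illustrative.

```python
def surface_code_cost(d):
    """Physical qubits and correctable errors for one logical qubit,
    assuming the common rotated-surface-code layout (2*d**2 - 1 qubits)."""
    physical_qubits = 2 * d ** 2 - 1
    correctable_errors = (d - 1) // 2
    return physical_qubits, correctable_errors

for d in (3, 11, 25):
    qubits, errors = surface_code_cost(d)
    print(f"d={d}: {qubits} physical qubits, corrects {errors} errors")
```

Even modest distances already demand hundreds of physical qubits per logical qubit, which is the overhead problem the text describes.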

Theorists around the world are still devising different error correction strategies and qubit layouts to determine which hold the most promise for the future, and they have made some major breakthroughs. Just last year, theorists discovered a way to beat that quadratic O(d²) overhead, uncovering a code that scales linearly with the robustness: doubling the robustness just means doubling the number of qubits. This could lead to error correction with significantly less overhead, though it would require some redesigns of today's quantum hardware. IBM Quantum recently held a summer school to spur further research in this area. Meanwhile, hardware engineers are working alongside these theorists to make sure the hardware keeps up with the theorists' best ideas.

Error suppression, error mitigation, and error correction may sound similar, but each requires its own set of expertise and considerations, and each is crucial to ensuring that quantum computers bring about real value. While we know what kinds of problems we'd like quantum computers to solve, actually solving those problems requires handling errors and noise in a practical and scalable way, and it's not yet clear how to get there. Any improvement in error suppression, mitigation, or correction brings practical quantum computing closer to reality. These areas work in unison to provide a continuous path toward lowering the overhead of quantum computing. IBM Quantum is pushing forward in all of these areas in order to bring about the era of quantum-centric supercomputing.