September 2, 2020 | Written by: Guanyu Zhu and Andrew Cross
Categorized: Publications | Quantum Computing
Although we are currently in an era of quantum computers with tens of noisy qubits, it is likely that a decisive, practical quantum advantage can only be achieved with a scalable, fault-tolerant, error-corrected quantum computer. Therefore, the development of quantum error correction is one of the central themes of the next five to ten years. Our article “Topological and subsystem codes on low-degree graphs with flag qubits,” published in Physical Review X, takes a bottom-up approach to quantum error-correcting codes that are adapted to a heavy-hexagon lattice – a topology that all our new premium quantum processors use, including IBM Quantum Falcon (d=3) and Hummingbird (d=5).
A bottom-up approach
Many in the quantum error correction community pursue a top-down computer science approach, i.e., designing the best codes from an abstract perspective to achieve the smallest logical error rate with minimal resources. Along this path, the surface code is the most famous candidate for near-term demonstrations (as well as mid- to long-term applications) on a two-dimensional quantum computer chip. The surface code naturally requires a two-dimensional square lattice of qubits, where each qubit is coupled to four neighbors.
We started with the surface code architecture on our superconducting devices and demonstrated an error detection protocol as a building block of the surface code around 2015. While the experimental team at IBM made steady progress with cross-resonance (CR) gates, achieving gate fidelities near 99%, an experimental obstacle appeared along the path of scaling up the surface code architecture. The specific way to operate the CR gates requires the control qubit frequency to be detuned from all its neighboring target qubits, such that the CNOT gates between any pair of control and target can be individually addressed.
The significant experimental challenges posed by this and other frequency constraints were a sign that the traditional top-down approach would need to be revised.
Due to the fixed, narrow windows of allowed frequencies for superconducting qubits, and the inevitable random fluctuations during fabrication, a larger number of required frequencies lowers the success rate of fabricating chips. CR gates are best matched to a layout where qubits are located on the vertices of a low-degree graph, such as the so-called “heavy-hexagon” lattice pursued by our team. For example, IBM’s current device topology implements a heavy-hexagon lattice as shown in Fig. 1(a), where qubits are located on the nodes and edges of each hexagon. Each qubit has either two or three neighbors, meaning the graph has only degree-2 and degree-3 vertices. As a consequence, only three different frequency assignments are necessary, shown as three different colors in Fig. 1(b), whereas a square lattice naturally requires at least five different frequencies for addressability. The heavy-hexagon lattice also greatly reduces crosstalk errors since, in principle, only qubits on the edges of the lattice need to be driven by CR drive tones.
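The degree-reduction idea can be illustrated with a small sketch. The code below builds a hypothetical two-hexagon patch, "heavifies" it by placing a qubit on every edge, and checks two properties mentioned above: every vertex has degree 2 or 3, and a three-level frequency pattern (one level for corner qubits, two alternating levels for edge qubits) never assigns neighboring qubits the same frequency. This checks only the adjacency-distinctness aspect, not IBM's actual CR detuning rules; the node numbering and pattern are illustrative assumptions, not the device layout.

```python
# Illustrative sketch (not IBM's actual frequency-assignment rules):
# build a small "heavy-hexagon" patch by splitting every edge of a
# two-hexagon lattice with a midpoint qubit, then check a three-level
# frequency pattern against the neighbor-distinctness constraint.
from itertools import count

# Two hexagons sharing the edge (2, 3); nodes 0-9 are hexagon corners.
hex_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0),
             (2, 6), (6, 7), (7, 8), (8, 9), (9, 3)]

new_node = count(10)
heavy_edges = []
edge_qubits = []
for u, v in hex_edges:
    m = next(new_node)              # midpoint qubit placed on edge (u, v)
    edge_qubits.append(m)
    heavy_edges += [(u, m), (m, v)]

adj = {}
for u, v in heavy_edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# On the heavy-hexagon patch every vertex has degree 2 or 3.
degrees = {len(nbrs) for nbrs in adj.values()}
print(degrees)  # {2, 3}

# Hypothetical 3-frequency pattern: corner qubits get level 0,
# midpoint qubits alternate between levels 1 and 2.
freq = {q: 0 for q in range(10)}
freq.update({m: 1 + (i % 2) for i, m in enumerate(edge_qubits)})

# No qubit shares a frequency level with any of its neighbors.
ok = all(freq[q] != freq[n] for q in adj for n in adj[q])
print(ok)  # True
```

Only three frequency levels suffice here because every edge of the heavy lattice joins a corner qubit to a midpoint qubit, so the corners can share one level while the midpoints use the other two.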
This led us to ask the following questions: Which quantum error-correcting codes are hardware-optimal, in the sense that they are adapted to the heavy-hexagon lattice? To what degree can quantum error-correcting codes be hardware-aware?
Guided by this bottom-up principle, we developed two new classes of codes: subsystem codes called heavy-hexagon codes implemented on a heavy-hexagon lattice, and heavy-square surface codes implemented on a heavy-square lattice.
The IBM team is currently implementing these codes on the new quantum devices.
Fig. 1: (a) IBM Quantum’s 65-qubit topology design uses a heavy-hexagon lattice. (b) Illustration of the frequency assignments for implementing the cross-resonance gates on the heavy-hexagon lattice.
Constraints lead to art
One might think that hardware constraints would limit the creativity of code design, but quite the opposite has happened. Similar to the surface code, the new codes also require a 4-body syndrome measurement, as shown in Fig. 2, where four qubits on the legs can be measured by coupling them to a central auxiliary qubit.
To measure such a 4-body syndrome, we split the central vertex into two vertices, insert an auxiliary qubit at each, and thereby reduce the graph to degree 3. By adding one more vertex (qubit) in the middle, we can measure the 4-body syndrome on the heavy-hexagon lattice using only degree-3 and degree-2 vertices. Meanwhile, the two extra qubits now function as so-called flag qubits, which can be used to significantly suppress error propagation in the measurement circuit.
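The role of a flag qubit can be sketched with a classical simulation of X-error propagation. The snippet below is a simplified, textbook-style flag circuit for a weight-4 X-type check with a single flag qubit (the paper's heavy-hexagon circuits use the two inserted qubits as flags and differ in detail). A CNOT copies an X error from its control to its target, so a single X fault on the ancilla mid-circuit spreads to two data qubits; the flag qubit, coupled to the ancilla before and after the middle data CNOTs, is flipped by exactly those dangerous faults. The qubit names and gate schedule are illustrative assumptions.

```python
# Sketch: classical tracking of X-error propagation through a flagged
# syndrome-extraction circuit (one flag qubit; the heavy-hexagon
# circuits in the paper use two). CNOT(c, t) copies an X error c -> t.

def run(schedule, fault_after=None):
    """Propagate X errors through a list of CNOTs (control, target).
    fault_after: index of the gate after which an X fault hits the ancilla."""
    x = {}                                    # qubit -> X-error bit
    for i, (c, t) in enumerate(schedule):
        x[t] = x.get(t, 0) ^ x.get(c, 0)      # CNOT propagates X: c -> t
        if fault_after == i:
            x["anc"] = x.get("anc", 0) ^ 1    # inject an X fault on the ancilla
    return x

# Weight-4 X-type check: the ancilla (control) couples to d1..d4, with
# two flag couplings wrapped around the middle pair of data CNOTs.
sched = [("anc", "d1"), ("anc", "flag"), ("anc", "d2"),
         ("anc", "d3"), ("anc", "flag"), ("anc", "d4")]

fault = run(sched, fault_after=2)             # X fault right after the d2 CNOT
data_err = [q for q in ("d1", "d2", "d3", "d4") if fault.get(q)]
print(data_err, fault.get("flag"))            # ['d3', 'd4'] 1
```

The single fault produces a weight-2 data error, which would be dangerous for a distance-3 code, but the raised flag tells the decoder to treat the syndrome accordingly; with no fault injected, the flag stays at 0.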
Development of a corresponding decoder that can use this flag information is also part of the fault-tolerant quantum computing scheme. As a consequence, we have found competitive error-correcting codes and circuits, despite a constrained hardware layout. Perhaps even more intriguing is that the heavy-hexagon code belongs to a family of subsystem codes that is a hybrid of the surface code and the Bacon-Shor code, both of which are famous and widely studied examples in the quantum error correction community.
Although it might seem contradictory, constraints can lead to artistic creativity and freedom. Examples of amazing art emerging from constrained media are too numerous to name here, so we offer just two.
Bach’s fugues exhibit counterpoint, constraining the interactions of several melodies in such a way that they become beautiful polyphony when played together. Michael Keith’s 1995 poem “Near a Raven” is a retelling of Edgar Allan Poe’s “The Raven” in more than 700 words, wherein word lengths are constrained to be the digits of pi.
IBM scientists are similarly working together within the constraints of physics to create quantum computing devices, leading to qubit arrangements such as the heavy-hexagon lattice and subsequent enrichment of the family of existing quantum codes.
Fig. 2: A degree reduction procedure to measure a 4-body syndrome operator on the heavy-hexagon lattice. The degree is reduced from 4 to 3 and finally alternates between 2 and 3.
Co-design of quantum hardware and error-correcting codes
This hardware-aware, bottom-up approach provides a bridge between more abstract theory and practical quantum engineering. We have seen one example of how physical constraints can influence implementations of quantum error correction. This is an example of co-design, specifically of quantum hardware, error-correcting codes, and fault-tolerant operations. In essence, we willfully break abstraction layers to create more practical and better optimized microarchitectures for quantum computers.
Another example of co-design is when abstract error-correction theory suggests requirements for optimal hardware. The tension between ideal requirements and physical constraints couples the abstract and the practical. The concept of co-design in quantum engineering is likely to grow in importance as we move closer as a community to experimentally demonstrating fault-tolerant quantum error correction.
C. Chamberland, G. Zhu, T. J. Yoder, J. B. Hertzberg, and A. W. Cross, Topological and Subsystem Codes on Low-Degree Graphs with Flag Qubits, Phys. Rev. X 10, 011022 (2020).
A. D. Córcoles, E. Magesan, S. J. Srinivasan, A. W. Cross, M. Steffen, J. M. Gambetta, and J. M. Chow, Demonstration of a quantum error detection code using a square lattice of four superconducting qubits, Nat. Commun. 6, 6979 (2015).