
Researchers Put Machine Learning on Path to Quantum Advantage


There are high hopes that quantum computing’s tremendous processing power will someday unleash exponential advances in artificial intelligence. AI systems thrive when the machine learning algorithms used to train them are given massive amounts of data to ingest, classify and analyze. The more precisely that data can be classified according to specific characteristics, or features, the better the AI will perform. Quantum computers are expected to play an important role in machine learning, including the crucial task of accessing more computationally complex feature spaces – the fine-grained aspects of data that could lead to new insights.

In a new Nature research paper entitled “Supervised learning with quantum-enhanced feature spaces,” my team at IBM Research, in collaboration with the MIT-IBM Watson AI Lab, describes developing and testing a quantum algorithm with the potential to enable machine learning on quantum computers in the near future. We’ve shown that as quantum computers become more powerful in the years to come, and their Quantum Volume increases, they will be able to perform feature mapping, a key component of machine learning, on highly complex data structures at a scale far beyond the reach of even the most powerful classical computers.

Our methods were also able to classify data with the use of short-depth circuits, which opens a path to dealing with decoherence. Just as significantly, our feature-mapping worked as predicted: no classification errors with our engineered data, even as the IBM Q systems’ processors experienced decoherence.

Bigger, Better Picture

Feature mapping is a way of disassembling data to get access to finer-grained aspects of that data. Both classical and quantum machine learning algorithms can break down a picture, for example, into pixels and place them in a grid based on each pixel’s color value. From there the algorithms map individual data points non-linearly to a high-dimensional space, breaking the data down according to its most essential features. In the much larger quantum state space, we can separate aspects and features of that data better than we could in a feature map created by a classical machine-learning algorithm.
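The classical version of this idea can be sketched in a few lines of plain Python. The polynomial map, the data points and the decision rule below are illustrative choices of ours, not taken from the paper: a nonlinear feature map lifts 2-D points into 3-D, where XOR-like data that no straight line in the plane can separate becomes separable by a simple linear rule.

```python
import math

def feature_map(x1, x2):
    """Polynomial feature map: lift a 2-D point into 3-D.
    In the higher-dimensional space, the XOR-like classes
    below become linearly separable."""
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

# XOR-like data: the class is the sign of the product x1 * x2,
# which no straight line in the original plane can separate.
points = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
labels = [+1, +1, -1, -1]

# After the map, the middle coordinate sqrt(2) * x1 * x2 alone
# separates the classes with the linear rule sign(z2).
for (x1, x2), label in zip(points, labels):
    _, z2, _ = feature_map(x1, x2)
    predicted = +1 if z2 > 0 else -1
    assert predicted == label
print("all points classified correctly")
```

A quantum feature map plays the same role, but the target space is the exponentially large quantum state space rather than a hand-chosen polynomial lift.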

The goal is to use quantum computers to create new classifiers that generate more sophisticated data maps. In doing that, researchers will be able to develop more effective AI that can, for example, identify patterns in data that are invisible to classical computers.

We’ve developed a blueprint with new quantum data classification algorithms and feature maps. That’s important for AI because, the larger and more diverse a data set is, the more difficult it is to separate that data out into meaningful classes for training a machine learning algorithm. Poor classification during the machine learning process could have undesirable consequences – for example, impairing a medical device’s ability to identify cancer cells based on mammography data.

The Noise Problem

We found that even in the presence of noise, we could consistently classify our engineered data with perfect accuracy during our tests. Today’s quantum computers struggle to keep their qubits in a quantum state for more than a few hundred microseconds even in a highly controlled laboratory environment. That’s significant because qubits need to remain in that state for as long as possible in order to perform calculations.

Our algorithms demonstrating how entanglement can improve AI classification accuracy will be available as part of IBM’s Qiskit Aqua, an open-source library of quantum algorithms that developers, researchers and industry experts can use to access quantum computers via classical applications or common programming languages such as Python.
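One way such a classifier works is by estimating a kernel – the overlap between quantum feature-map states – and handing that kernel to a conventional classifier. The toy simulation below illustrates the idea in plain Python; the single-qubit angle encoding, the example data and the nearest-mean decision rule are all our own simplified stand-ins, not the entangled multi-qubit circuits used in the paper:

```python
import cmath
import math

def phi(x):
    """Toy one-qubit 'feature map': encode a real number x as the
    statevector cos(x)|0> + e^{ix} sin(x)|1>. The paper's actual
    feature maps use entangled multi-qubit circuits."""
    return [math.cos(x), cmath.exp(1j * x) * math.sin(x)]

def kernel(x, y):
    """Kernel entry K(x, y) = |<phi(x)|phi(y)>|^2 -- the state
    overlap a quantum computer would estimate by sampling."""
    inner = sum(a.conjugate() * b for a, b in zip(phi(x), phi(y)))
    return abs(inner) ** 2

# Tiny training set: two classes clustered around 0.2 and 1.4.
train = [(0.1, +1), (0.3, +1), (1.3, -1), (1.5, -1)]

def classify(x):
    """Kernel nearest-mean rule: assign x to the class whose
    training points it overlaps with most, on average."""
    def score(s):
        return sum(kernel(x, t) for t, lab in train if lab == s) / 2
    return +1 if score(+1) >= score(-1) else -1

assert classify(0.2) == +1
assert classify(1.4) == -1
print("kernel classifier OK")
```

The interesting question, which the paper addresses, is whether a quantum feature map can produce kernels that are hard to compute classically while still being useful for classification.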

We are still far off from achieving Quantum Advantage for machine learning – the point at which quantum computers surpass classical computers in their ability to run machine learning algorithms. Our research doesn’t yet demonstrate Quantum Advantage because we limited the scope of the problem based on our current hardware capabilities, using only two qubits of quantum computing capacity, which can be simulated on a classical computer. Yet the feature mapping methods we’re advancing could soon be able to classify far more complex datasets than anything a classical computer could handle. What we’ve shown is a promising path forward.

Supervised learning with quantum-enhanced feature spaces

Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M. Chow, Jay M. Gambetta

doi: 10.1038/s41586-019-0980-2

IBM Research

Jay Gambetta

IBM Fellow and Vice President, Quantum Computing
