
Goldman Sachs & IBM researchers estimate quantum advantage for derivative pricing


The financial services industry is full of potential applications for quantum computing, including optimization, simulation and machine learning. But it’s not that easy to determine which applications are most likely to benefit from quantum advantage, and exactly how powerful quantum computers must be to run those applications significantly better than classical systems can.

That’s what we are trying to address. In a new preprint now on arXiv, “A Threshold for Quantum Advantage in Derivative Pricing”, our quantum research teams at IBM and Goldman Sachs provide the first detailed estimate of the quantum computing resources needed to achieve quantum advantage for derivative pricing – one of the most ubiquitous calculations in finance.

We describe the challenges in previous quantum approaches to this problem, and introduce a new method for overcoming those obstacles. The new approach – called the re-parameterization method – combines pre-trained quantum algorithms with approaches from fault-tolerant quantum computing to dramatically cut the estimated resource requirements for pricing financial derivatives using quantum computers.

Our resource estimates give a target performance threshold for quantum computers able to demonstrate advantage in derivative pricing. The benchmark use cases we examined need 7.5k logical qubits and a T-depth of 46 million (T-depth counts the sequential layers of T-gates in a circuit, and is a standard proxy for the runtime of an error-corrected computation). We also estimate that quantum advantage in this scenario would need T-gates to run at 10 MHz or faster, assuming a target of 1 second for pricing certain types of derivatives.

Those resource requirements are out of reach of today’s systems, but we aim to provide a roadmap to further improve algorithms, circuit optimization, error correction and planned hardware architectures.

The challenges of calculating quantum advantage

Let’s unpack those numbers.

For starters, logical qubits will be built out of many physical qubits, with a layer of error-correcting code that buys these noisy, error-prone qubits enough coherence time to do meaningful work. A circuit consists of the logical qubits and the operations applied to them. Today's qubits can perform only a few operations, or gates, before they reach their coherence limit. The number of sequential operations in a circuit defines its depth.
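To get a feel for the size of the error-correction overhead, here is a minimal sketch. It assumes a surface-code-style scaling of roughly 2d² physical qubits per logical qubit at code distance d; both the formula and the distance used are rough illustrative assumptions, not the paper's estimates.

```python
# Rough illustration of error-correction overhead. Assumption: a
# surface-code-style encoding uses on the order of 2*d**2 physical
# qubits (data plus ancilla) per logical qubit at code distance d.
def physical_qubits(n_logical: int, distance: int) -> int:
    """Back-of-the-envelope physical-qubit count for n_logical
    logical qubits at the given code distance."""
    return n_logical * 2 * distance ** 2

# 7.5k logical qubits at an assumed distance of 27 already implies
# millions of physical qubits.
total = physical_qubits(7500, 27)
```

Even under these loose assumptions, the gap between thousands of logical qubits and the physical-qubit counts of today's devices is several orders of magnitude.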

Often, when researchers talk about the power of quantum computers, they speak in theoretical terms about computational complexity. This refers to how the compute resources needed for a class of problems scale as the problems in the class get bigger. Complexity classes leave out detailed estimates for specific instances of a problem, which can make projections about useful applications vague. Fuzzy timelines are understandable, given the nascent state of the technology, but it is now time to get more specific. Our work provides an upper bound on the number of operations needed for advantage on specific benchmark instances.

We focused on derivative pricing, but our work could also apply to other kinds of risk calculations. Derivatives are a good place to start because enormous volumes of derivatives are traded each year globally. A derivative contract is a financial asset whose estimated value is based on how the price of some underlying asset(s) – such as futures, options, stocks, currencies and commodities – changes over time. The ability to more accurately price or assess the risk inherent in each of those contracts – even if the advantage is relatively small – could have a large impact on the financial services industry.

Derivative prices are often estimated using Monte Carlo simulations on classical computers, which randomly simulate how asset prices change over time. You have to run a large number of these simulations for the results to converge to a reasonable answer. Theoretically, the same calculation could be performed on a quantum computer to reach an answer much more quickly. It's been unclear, however, how much faster quantum computers will be, and how robust a quantum computer needs to be to outperform a classical computer on this particular application.
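As a concrete illustration of the classical baseline, here is a minimal Monte Carlo pricer for one of the simplest derivatives, a European call option under standard Black-Scholes dynamics. The parameter values are arbitrary illustrative choices, and real derivative books involve far more complex payoffs and models.

```python
import math
import random

def mc_european_call(s0, strike, rate, vol, maturity, n_paths, seed=0):
    """Classical Monte Carlo price of a European call option under
    geometric Brownian motion (Black-Scholes dynamics)."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                      # one random draw per path
        s_t = s0 * math.exp(drift + diffusion * z)   # simulated terminal price
        total += max(s_t - strike, 0.0)              # call payoff
    # discount the average payoff back to today
    return math.exp(-rate * maturity) * total / n_paths

# Illustrative parameters: at-the-money call, 2% rate, 20% vol, 1 year.
price = mc_european_call(s0=100, strike=100, rate=0.02, vol=0.2,
                         maturity=1.0, n_paths=200_000)
```

Each path here costs one random draw and a handful of arithmetic operations; the expensive part is that you need very many paths before the average settles down, which is exactly where the quantum speedup discussed below applies.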

When calculating derivative prices using classical computers, if we want to improve an estimate’s precision by an order of magnitude, we would need to increase the number of samples in a Monte Carlo simulation by a factor of 100, which greatly slows down the process. On a quantum computer, if we want to improve by the same amount, we would increase samples by a factor of 10. This is what is known as a quadratic speedup.
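The sample-count arithmetic behind that quadratic speedup can be written down directly. The sketch below assumes the textbook scalings: Monte Carlo error shrinks as 1/√N with the number of samples, while quantum amplitude estimation's error shrinks as 1/N with the number of oracle calls; the per-sample standard deviation is a placeholder.

```python
import math

# Assumed error scalings: classical Monte Carlo error ~ sigma / sqrt(N),
# quantum amplitude estimation error ~ sigma / N.
sigma = 1.0  # per-sample standard deviation (illustrative placeholder)

def classical_samples(target_error):
    """Samples needed classically: invert error = sigma / sqrt(N)."""
    return math.ceil((sigma / target_error) ** 2)

def quantum_samples(target_error):
    """Oracle calls needed with amplitude estimation: invert error = sigma / N."""
    return math.ceil(sigma / target_error)

# Tightening the target precision by 10x multiplies the classical sample
# count by 100, but the quantum call count only by 10.
for eps in (1e-2, 1e-3):
    print(f"error {eps}: classical {classical_samples(eps)}, "
          f"quantum {quantum_samples(eps)}")
```

This factor-of-100 versus factor-of-10 gap is the entire source of the quantum advantage being sized up in the paper.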

That seems like a bargain, until you factor in that processing on a quantum computer is highly expensive (computationally speaking). For example, the clock rate of many planned quantum computers looks to be significantly slower than today’s classical processors. Much of this is because of the large overheads that look to be introduced by our current quantum error correcting codes. Simply using a quantum computer doesn’t guarantee you’ll outperform a classical computer. Part of our research is aimed at better understanding the conversion rate between the two different types of computation.
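A quick way to see why the slow quantum clock matters is to find the crossover problem size at which a quadratic speedup overcomes a much slower per-operation rate. The clock rates below are loose illustrative assumptions, not the paper's estimates, which are derived far more carefully.

```python
import math

# Assumed rates (illustrative only): a classical machine evaluates 1e9
# Monte Carlo samples per second; an error-corrected quantum machine
# manages 1e4 oracle calls per second, but needs only ~sqrt(N) of them.
f_classical = 1e9   # classical samples per second (assumption)
f_quantum = 1e4     # quantum oracle calls per second (assumption)

def classical_time(n):
    return n / f_classical            # N samples at the fast clock

def quantum_time(n):
    return math.sqrt(n) / f_quantum   # ~sqrt(N) calls at the slow clock

# Find the smallest power-of-ten problem size where quantum wins.
n = 1
while quantum_time(n) >= classical_time(n):
    n *= 10
```

Under these made-up rates the quantum approach only pulls ahead for very large sample counts, which is why pinning down realistic clock rates and overheads, as the paper does, is essential before claiming advantage.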

A roadmap towards quantum advantage

Our main goal was to show in as much concrete, quantifiable detail as possible what is needed for quantum advantage in derivative pricing to be both possible and meaningful, and to highlight where the challenges remain in achieving that advantage. This sort of analysis is important because it identifies the specific bottlenecks we know of today, making it more likely that additional research will determine how to clear those bottlenecks.

In transparently calculating our estimates, we enable our ongoing research – as well as other research teams – to examine every subroutine in this algorithm and in this estimate, and to determine how much each particular step matters for the overall runtime. We can say, for example, to an algorithm or error-correction researcher: these are the things you should try to improve that will have the most impact on reducing the resources needed for quantum advantage in derivative pricing.

This is the kind of research that’s most valuable to the industries that will eventually adopt quantum computing – go as deep and technical as you can while also connecting the work back to the business use cases that provide value to your clients.


Shouvanik Chakrabarti, Rajiv Krishnakumar, Guglielmo Mazzola, Nikitas Stamatopoulos, Stefan Woerner, William J. Zeng, “A Threshold for Quantum Advantage in Derivative Pricing”, arXiv:2012.03819 [quant-ph]



Quantum Applications Lead, IBM Quantum

William Zeng

Head of Quantum Research, Goldman Sachs
