
Goldman Sachs & IBM researchers estimate quantum advantage for derivative pricing


The financial services industry is full of potential applications for quantum computing, including optimization, simulation and machine learning. But it’s not that easy to determine which applications are most likely to benefit from quantum advantage, and exactly how powerful quantum computers must be to run those applications significantly better than classical systems can.

That’s what we are trying to address. In a new preprint now on arXiv, “A Threshold for Quantum Advantage in Derivative Pricing”, our quantum research teams at IBM and Goldman Sachs provide the first detailed estimate of the quantum computing resources needed to achieve quantum advantage for derivative pricing – one of the most ubiquitous calculations in finance.

We describe the challenges in previous quantum approaches to this problem, and introduce a new method for overcoming those obstacles. The new approach – called the re-parameterization method – combines pre-trained quantum algorithms with approaches from fault-tolerant quantum computing to dramatically cut the estimated resource requirements for pricing financial derivatives using quantum computers.

Our resource estimates give a target performance threshold for quantum computers able to demonstrate advantage in derivative pricing. The benchmark use cases we examined need 7.5k logical qubits and a T-depth of 46 million (the T-depth is the number of sequential layers of T gates, one of the most expensive operations to perform fault-tolerantly). We also estimate that quantum advantage in this scenario would require T-gates to execute at 10 MHz or faster, assuming a target of 1 second for pricing certain types of derivatives.

Those resource requirements are out of reach of today’s systems, but we aim to provide a roadmap to further improve algorithms, circuit optimization, error correction and planned hardware architectures.

The challenges of calculating quantum advantage

Let’s unpack those numbers.

For starters, logical qubits will be built out of many physical qubits with a layer of error-correcting code(s) that buys these noisy, error-prone qubits enough coherence time to do meaningful work. A circuit consists of the logical qubits and the operations applied to them. Today’s physical qubits can execute only a limited number of operations, or gates, before they reach their coherence limit. The number of sequential layers of gates in a circuit defines the circuit’s depth.
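To make depth and gate counts concrete, here is a minimal Qiskit sketch; the toy two-qubit circuit is our own illustration, not a pricing circuit from the paper:

```python
from qiskit import QuantumCircuit

# A toy 2-qubit circuit, purely illustrative.
qc = QuantumCircuit(2)
qc.h(0)      # Hadamard on qubit 0
qc.t(0)      # T gate: the expensive operation in fault-tolerant schemes
qc.cx(0, 1)  # CNOT entangling the two qubits
qc.t(1)

print("depth:", qc.depth())            # number of sequential gate layers
print("gate counts:", qc.count_ops())  # per-gate counts, including 't'
```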

Often, when researchers talk about the power of quantum computers, they speak in theoretical terms about computational complexity. This refers to how the compute resources needed for a class of problems scale as the problems in the class get bigger. Complexity classes leave out detailed estimates for specific instances of a problem, which can make projections about useful applications vague. Fuzzy timelines are understandable, given the nascent nature of the technology, but it is now time to get more specific. Our work provides an upper bound on the number of operations needed for advantage on specific benchmark instances.

We focused on derivative pricing, but our work could also apply to other kinds of risk calculations. Derivatives are a good place to start because contracts with enormous total value are traded globally each year. A derivative contract is a financial asset whose estimated value is based on how the price of some underlying asset(s) – such as futures, options, stocks, currencies and commodities – changes over time. The ability to more accurately price or assess the risk inherent in each of those contracts – even if the advantage is relatively small – could have a large impact on the financial services industry.

Derivative prices are often calculated using Monte Carlo simulations on classical computers, which randomly simulate how asset prices change over time. You have to run a large number of these simulations to converge on a reasonable answer. Theoretically, the same calculation could be performed on a quantum computer to reach an answer much more quickly. It’s been unclear, however, how much faster quantum computers will be, and how robust a quantum computer needs to be to outperform a classical computer for this particular application.
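To make the classical baseline concrete, here is a minimal Monte Carlo sketch for pricing a European call option under geometric Brownian motion; the parameter values are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper)
s0, strike, rate, vol, maturity = 100.0, 105.0, 0.01, 0.2, 1.0
n_samples = 100_000

rng = np.random.default_rng(0)
# Simulate terminal asset prices under geometric Brownian motion
z = rng.standard_normal(n_samples)
s_t = s0 * np.exp((rate - 0.5 * vol**2) * maturity
                  + vol * np.sqrt(maturity) * z)

# Discounted expected payoff of a European call
payoff = np.maximum(s_t - strike, 0.0)
price = np.exp(-rate * maturity) * payoff.mean()
stderr = np.exp(-rate * maturity) * payoff.std(ddof=1) / np.sqrt(n_samples)

print(f"price ~ {price:.3f} +/- {stderr:.3f}")  # error shrinks as 1/sqrt(N)
```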

When calculating derivative prices on classical computers, improving an estimate’s precision by an order of magnitude requires increasing the number of Monte Carlo samples by a factor of 100, which greatly slows down the process. On a quantum computer, using amplitude estimation, the same improvement requires only a factor of 10 more samples. This is what is known as a quadratic speedup.
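A rough way to see the quadratic speedup in numbers (constant factors ignored; this sketches the scaling, not the algorithms themselves):

```python
# Classical Monte Carlo error scales as 1/sqrt(N), so N ~ 1/eps^2.
# Quantum amplitude estimation error scales as 1/N, so N ~ 1/eps.
for eps in (1e-2, 1e-3, 1e-4):
    classical = int(1 / eps**2)
    quantum = int(1 / eps)
    print(f"precision {eps:g}: ~{classical:,} classical "
          f"vs ~{quantum:,} quantum samples")
```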

That seems like a bargain, until you factor in that each operation on a quantum computer is far more expensive, computationally speaking. For example, the effective clock rate of many planned quantum computers looks to be significantly slower than that of today’s classical processors, largely because of the overheads introduced by current quantum error-correcting codes. Simply using a quantum computer doesn’t guarantee you’ll outperform a classical computer. Part of our research is aimed at better understanding the conversion rate between the two types of computation.
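One way to picture that conversion rate is a toy crossover model (our own illustrative assumption, not the paper's resource model): if a classical machine draws one sample every t_c seconds and a quantum machine one "sample" every t_q seconds, the quadratic speedup only pays off at high enough precision:

```python
# Toy crossover model -- illustrative assumptions, not the paper's estimates.
# Classical runtime ~ t_c / eps^2; quantum runtime ~ t_q / eps.
# Quantum wins when t_q / eps < t_c / eps^2, i.e. when eps < t_c / t_q.
t_c = 1e-9  # hypothetical classical time per sample: 1 nanosecond
t_q = 1e-4  # hypothetical quantum time per sample: 0.1 millisecond

crossover = t_c / t_q
print(f"quantum advantage only below precision eps ~ {crossover:g}")
```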

A roadmap towards quantum advantage

Our main goal was to show, in as much concrete, quantifiable detail as possible, what is needed for quantum advantage in derivative pricing to be both possible and meaningful, and to highlight where the challenges remain. This sort of analysis is important because it identifies the specific bottlenecks we know of today, making it more likely that further research will determine how to clear them.

By calculating our estimates transparently, we enable our own team – as well as other research groups – to examine every subroutine in this algorithm and in this estimate, and to determine how much each particular step matters for the overall runtime. We can say to an algorithm or error-correction researcher, for example: these are the things you should be trying to improve that will have the most impact on reducing the resources necessary for quantum advantage in derivative pricing.

This is the kind of research that’s most valuable to the industries that will eventually adopt quantum computing: go as deep and technical as you can, while connecting the work back to the business use cases that provide value to your clients.

 

Shouvanik Chakrabarti, Rajiv Krishnakumar, Guglielmo Mazzola, Nikitas Stamatopoulos, Stefan Woerner, William J. Zeng, “A Threshold for Quantum Advantage in Derivative Pricing”, arXiv:2012.03819 [quant-ph]


 

Stefan Woerner
Quantum Applications Lead, IBM Quantum

William Zeng
Head of Quantum Research, Goldman Sachs
