IBM RegTech Innovations

Sense and sensitivity: How IBM is driving innovation in XVA calculations to help banks outperform the competition


Since the 2008 crash, the focus on the family of valuation adjustments known as XVA has intensified, and the demand for timely, accurate calculation is continually increasing. There’s a clear trajectory from daily to intra-day calculations, and in some cases even to real-time XVA calculations to help traders and risk management teams make better operational decisions.

Paying the price of progress

However, this all comes at a cost. Calculating large numbers of XVA sensitivities across the full range of a bank’s trading book is enormously computationally intensive. To comply with the new Basel regulations, risk teams will need to be able to perform around 300 sensitivity calculations to assess capital requirements. And a bank’s XVA desk may need to carry out many thousands of XVA sensitivity calculations in order to efficiently hedge risk.

To perform these calculations within the required timeframes, and to meet future XVA requirements, banks need to make significant investments in hardware. With standard calculation methods, a large farm of powerful servers is typically required to crunch the numbers.

For example, many metrics are currently calculated using a traditional “bump and run” approach, where each bump to a risk factor requires re-running a full batch simulation to recalculate XVA measures across the whole book.
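To see why this scales so badly, here is a minimal sketch of the bump-and-run pattern. The toy pricer and the curve data are hypothetical stand-ins; in practice each call to the pricer would be a full Monte Carlo batch over the whole book, so the cost grows linearly with the number of sensitivities requested.

```python
import numpy as np

def portfolio_value(rates, notionals):
    """Toy pricer: discounts each notional at its curve bucket's rate.
    (Hypothetical stand-in for a full XVA batch simulation.)"""
    return float(np.sum(notionals * np.exp(-rates)))

def bump_and_rerun_deltas(rates, notionals, bump=1e-4):
    """One full revaluation per bumped risk factor: N sensitivities
    cost N + 1 complete runs of the pricer."""
    base = portfolio_value(rates, notionals)
    deltas = []
    for i in range(len(rates)):
        bumped = rates.copy()
        bumped[i] += bump  # bump a single curve bucket
        deltas.append((portfolio_value(bumped, notionals) - base) / bump)
    return base, deltas

rates = np.array([0.01, 0.015, 0.02])
notionals = np.array([100.0, 200.0, 300.0])
base, deltas = bump_and_rerun_deltas(rates, notionals)
```

With roughly 300 regulatory sensitivities, this pattern means roughly 300 full batch runs, which is exactly the hardware burden described above.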

Even worse, if you factor in the need to calculate Dynamic Initial Margin, there is a huge additional increase in computational requirement, far beyond the scope of problems that can be addressed using traditional approaches.

For this reason, banks have been investing in even more powerful server farms, and harnessing the latest GPU technology to boost performance by brute-forcing the problem. However, this is an expensive solution.

But what if the assumption that XVA sensitivity calculations require expensive specialized hardware were simply wrong? What if there were a way to make the calculations more efficient instead?

Game-changing innovation

At IBM, we’re focusing on solving that exact problem. We’ve developed a solution that combines advanced methodologies with innovative optimization techniques to achieve the required performance acceleration without additional hardware.

With our solution, all pricing and aggregation functions are written in a proprietary language that is optimized and compiled on the fly to take advantage of whatever hardware is present. It’s designed for extreme cache locality: market and scenario data are pre-processed in memory so that the CPU can access them more efficiently.

Adjoint algorithmic differentiation is built into the language and the compiler, enabling a massive reduction in the amount of data that needs to be written to disk. This boosts processing speed significantly. The solution also provides integrated curve-fitting methodologies for non-vanilla instruments, such as Least Squares Monte Carlo (LSMC) and Thin Plate Splines. These run independently from the simulation engine to maximize flexibility and performance.
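The key property of adjoint (reverse-mode) algorithmic differentiation is that one backward sweep yields sensitivities to all inputs at once, instead of one bump-and-rerun per input. The following is a minimal, self-contained sketch of the idea (a toy tape-based implementation, not IBM's proprietary compiler), applied to a toy discounted payoff:

```python
import math

class Var:
    """Minimal reverse-mode AD node: the forward pass records each
    operation's local gradients; the backward pass propagates adjoints.
    (Handles expression trees; shared subexpressions need a topological
    ordering, omitted here for brevity.)"""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent_node, local_gradient)
        self.adjoint = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def exp(x):
    v = math.exp(x.value)
    return Var(v, ((x, v),))  # d/dx exp(x) = exp(x)

def backward(output):
    """One sweep gives d(output)/d(input) for every input simultaneously."""
    output.adjoint = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local_grad in node.parents:
            parent.adjoint += local_grad * node.adjoint
            stack.append(parent)

# Toy payoff V = N * exp(-r * t); dV/dr falls out of a single backward pass.
r, N, t = Var(0.02), Var(100.0), Var(1.0)
V = N * exp(Var(-1.0) * r * t)
backward(V)
# r.adjoint now holds dV/dr = -N * t * exp(-r * t)
```

The same backward sweep simultaneously fills in `N.adjoint` and `t.adjoint`, which is why AAD changes the cost profile of computing thousands of sensitivities.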

Above all, the solution takes a smart approach to simulation—it ensures that only affected trades are simulated and aggregated, reducing the overall scope of computation and saving considerable amounts of processing time.
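A simple way to picture this smart-simulation idea: index each trade by the risk factors its price depends on, so that a bump to one factor touches only the relevant slice of the book. The trade records and factor names below are hypothetical illustrations, not IBM's actual data model:

```python
# Hypothetical book: each trade lists the risk factors it depends on.
trades = {
    "swap_usd_1": {"factors": {"USD.curve"}},
    "swap_eur_1": {"factors": {"EUR.curve"}},
    "fxopt_1":    {"factors": {"USD.curve", "EURUSD.spot"}},
}

def affected_trades(trades, bumped_factor):
    """Return only the trade IDs whose pricing depends on the bumped factor;
    everything else can reuse its baseline simulation results."""
    return sorted(tid for tid, t in trades.items()
                  if bumped_factor in t["factors"])

to_resim = affected_trades(trades, "USD.curve")
```

Only `swap_usd_1` and `fxopt_1` would be re-simulated for a USD curve bump; `swap_eur_1` keeps its baseline values, which is where the large savings in processing time come from.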

Putting theory into practice

IBM demonstrated a prototype of this XVA sensitivities solution at the IBM Watson Financial Services Summit for Risk and Compliance in London in November 2017.

The IBM team ran the demo on a single MacBook, using ice packs to keep it cool; the code utilizes the processor so efficiently that it quickly overwhelms the machine’s built-in cooling system.

This demo gives a good indication of the impressive performance improvement that can be achieved using the new technology. We used a sample set of anonymized data with 45,000 trades between 1,000 different legal entities, running 5,000 scenarios and 100 time-steps, and producing 630 sensitivity calculations, plus a main baseline exposure calculation.

A captivated audience witnessed the outcome: our ice-cool MacBook Pro completed all the calculations in just 9.5 minutes. The rapid runtime can be attributed to two main factors: a performance improvement of approximately 200x from the use of innovative simulation and aggregation technology (known as SLIMs v2); and a further 50x improvement due to the “smart simulation” approach, where only trades that are sensitive to a particular risk factor need to be re-simulated.

Innovation drives business transformation

Since that demo, we’ve been working on getting the XVA sensitivities solution ready for primetime, and we’re on the verge of releasing a production solution. The current version’s speed, smartness, advanced methodology, modular design and ability to compile and run on commodity hardware are potential game-changers—not just from a technical perspective, but in real business terms.

For example, the results we’re seeing suggest that a bank could expect to complete the calculation of thousands of XVA sensitivities within a relatively small batch window of one to two hours. That would make multiple intra-day calculations a very real option.

As a result, hedge traders on an XVA desk would be able to react much more rapidly as the market moves, giving them the ability to re-hedge their positions before their competitors even have time to assess the situation.

Similarly, by providing quick and easy profit and loss attribution analysis, the solution could give users greater visibility of the pricing factors that drive P&L changes, and help them make smarter decisions. And the solution would also enable banks to run many more stress tests, helping to determine the effect of a crisis in different geographical locations, for example.

By focusing on optimizing XVA sensitivity calculations, IBM researchers aren’t just unlocking a way to reduce hardware requirements and operational costs—we’re laying a foundation for banks to gain true competitive advantage.

Global Head, IBM Algo One Offering Manager, Financial Risk in Watson Financial Services

Mathew Dear

Offering Manager, Industry Platforms Services
