IBM RegTech Innovations

Sense and sensitivity: How IBM is driving innovation in XVA calculations to help banks outperform the competition



Since the 2008 crash, the focus on the family of valuation adjustments known as XVA has intensified, and the demand for timely, accurate calculation is continually increasing. There’s a clear trajectory from daily to intra-day calculations, and in some cases even to real-time XVA calculations to help traders and risk management teams make better operational decisions.

Paying the price of progress

However, this all comes at a cost. Calculating large numbers of XVA sensitivities across the full range of a bank’s trading book is enormously computationally intensive. To comply with the new Basel regulations, risk teams will need to be able to perform around 300 sensitivity calculations to assess capital requirements. And a bank’s XVA desk may need to carry out many thousands of XVA sensitivity calculations in order to efficiently hedge risk.

To perform these calculations within the required timeframes, and to meet future XVA requirements, banks need to make significant investments in hardware. With standard calculation methods, a large farm of powerful servers is typically required to crunch the numbers.

For example, many metrics are currently calculated using a traditional “bump and run” approach: bumping a single risk factor means re-running a full batch simulation to recalculate XVA measures across the whole book.
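Concretely, the brute-force pattern looks like the sketch below, where `price_book` is a hypothetical stand-in for a complete Monte Carlo batch over the whole book. Each of the N sensitivities then costs one additional full run:

```python
import numpy as np

def price_book(risk_factors: np.ndarray) -> float:
    """Stand-in for a full-book XVA batch simulation (hypothetical toy)."""
    return float(np.sum(np.sin(risk_factors)))  # toy pricing function

def bump_and_revalue(risk_factors: np.ndarray, bump: float = 1e-4) -> np.ndarray:
    """Finite-difference sensitivities: one extra full run per risk factor."""
    base = price_book(risk_factors)           # the baseline run
    sensitivities = np.empty_like(risk_factors)
    for i in range(risk_factors.size):        # one complete re-run per bump
        bumped = risk_factors.copy()
        bumped[i] += bump
        sensitivities[i] = (price_book(bumped) - base) / bump
    return sensitivities

# 300 risk factors -> 301 complete batch simulations per valuation
print(bump_and_revalue(np.linspace(0.01, 0.05, 300))[:3])
```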

Even worse, factoring in the need to calculate Dynamic Initial Margin (DIM) adds a further, far larger jump in computational requirements: a naive approach nests an entire initial-margin simulation inside every node of the exposure simulation, far beyond the scope of problems that can be addressed using traditional approaches.
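A rough back-of-envelope, using illustrative figures rather than numbers from this post, shows why that nesting is so punishing:

```python
# Why naive Dynamic Initial Margin explodes the workload: an
# initial-margin simulation is nested inside every node of the outer
# exposure simulation. All figures below are assumptions for scale.
outer_scenarios, time_steps = 5_000, 100
inner_paths_per_node = 1_000          # per-node IM estimate (assumed)

exposure_nodes = outer_scenarios * time_steps          # 500,000 nodes
nested_paths = exposure_nodes * inner_paths_per_node   # 500,000,000 paths
print(f"{nested_paths:,} inner paths on top of {outer_scenarios:,} outer scenarios")
```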

For this reason, banks have been investing in ever more powerful server farms and harnessing the latest GPU technology to brute-force the problem. However, this is an expensive solution.

But what if the assumption that XVA sensitivity calculations require expensive specialized hardware were simply wrong? What if there were a way to make the calculations more efficient instead?

Game-changing innovation

At IBM, we’re focusing on solving that exact problem. We’ve developed a solution that combines advanced methodologies with innovative optimization techniques to achieve the required acceleration without additional hardware.

With our solution, all pricing and aggregation functions are written in a proprietary language that is optimized and compiled on the fly to take advantage of whatever hardware is present. It’s designed for extreme cache locality: market and scenario data are pre-processed in memory so that the CPU can access them more efficiently.
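The language and compiler themselves are proprietary, but the cache-locality principle can be illustrated with a simple structure-of-arrays layout. The sketch below shows the general technique, not IBM’s implementation:

```python
import numpy as np

n_scenarios, n_steps = 5_000, 100

# Array-of-structs style: one Python object per scenario node.
# Pointer-chasing through these defeats the CPU cache.
aos = [{"rate": 0.02, "fx": 1.10} for _ in range(n_scenarios)]

# Structure-of-arrays style: one contiguous block per market factor,
# so inner loops stream sequentially through memory.
rates = np.full((n_scenarios, n_steps), 0.02)
fx = np.full((n_scenarios, n_steps), 1.10)

# Aggregation over the contiguous layout vectorizes cleanly:
exposure_profile = np.maximum(rates * fx - 0.02, 0.0).mean(axis=0)
print(exposure_profile.shape)  # (100,): one value per time-step
```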

Adjoint algorithmic differentiation (AAD) is built into the language and the compiler, enabling a massive reduction in the amount of data that needs to be written to disk and boosting processing speed significantly. The solution also provides integrated curve-fitting methodologies for non-vanilla instruments, such as Least Squares Monte Carlo (LSMC) and thin plate splines; these run independently of the simulation engine to maximize flexibility and performance.
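To see why AAD matters, consider the toy reverse-mode tape below: one forward pass plus one backward sweep recovers the sensitivity to every input at once, instead of one full re-run per bumped input. This is a minimal sketch of the general technique; IBM’s compiler-integrated version is proprietary.

```python
# Minimal reverse-mode (adjoint) AD sketch with an explicit tape.
tape = []

class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.adjoint = value, parents, 0.0
        tape.append(self)  # creation order is a valid topological order

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(output):
    output.adjoint = 1.0
    for node in reversed(tape):  # single backward sweep, newest first
        for parent, local_grad in node.parents:
            parent.adjoint += local_grad * node.adjoint

spot, rate = Var(100.0), Var(0.02)
pv = spot * rate + spot            # toy stand-in for a pricing function
backward(pv)
print(spot.adjoint, rate.adjoint)  # d(pv)/d(spot) = 1.02, d(pv)/d(rate) = 100.0
```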

Above all, the solution takes a smart approach to simulation: only the trades affected by a given bump are re-simulated and re-aggregated, reducing the overall scope of computation and saving considerable processing time.
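One way to realize such a scheme is a dependency index from risk factors to trades, as in the sketch below. The class and factor names are illustrative, not taken from IBM’s product:

```python
from collections import defaultdict

class Trade:
    def __init__(self, trade_id, risk_factors):
        self.trade_id = trade_id
        self.risk_factors = set(risk_factors)

def build_dependency_index(trades):
    """Map each risk factor to the trades whose price depends on it."""
    index = defaultdict(set)
    for trade in trades:
        for factor in trade.risk_factors:
            index[factor].add(trade.trade_id)
    return index

book = [
    Trade("irs-001", {"USD-LIBOR"}),
    Trade("fxf-007", {"EURUSD", "EUR-OIS"}),
    Trade("irs-002", {"USD-LIBOR", "USD-OIS"}),
]
index = build_dependency_index(book)

# Bumping EURUSD triggers re-simulation of one trade, not the whole book:
print(index["EURUSD"])  # {'fxf-007'}
```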

Putting theory into practice

IBM demonstrated a prototype of this XVA sensitivities solution at the IBM Watson Financial Services Summit for Risk and Compliance in London in November 2017.

The IBM team ran the demo on a single MacBook Pro, using ice packs to keep it cool; the code utilizes the processor so efficiently that it quickly overwhelms the machine’s built-in cooling system.

This demo gives a good indication of the impressive performance improvement that can be achieved using the new technology. We used a sample set of anonymized data with 45,000 trades between 1,000 different legal entities, running 5,000 scenarios and 100 time-steps, and producing 630 sensitivity calculations, plus a main baseline exposure calculation.

A captivated audience witnessed the outcome: our ice-cool MacBook Pro completed all the calculations in just 9.5 minutes. The rapid runtime can be attributed to two main factors: a performance improvement of approximately 200x from the use of innovative simulation and aggregation technology (known as SLIMs v2), and a further 50x improvement from the “smart simulation” approach, under which only trades that are sensitive to a particular risk factor need to be re-simulated.
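A quick back-of-envelope puts that workload in perspective, using the figures quoted above. Treating “pricing evaluations” as the unit of work is a simplification, and the assumption that the two factors compound multiplicatively is ours, not a claim from the demo:

```python
# Scale of the demo workload under a naive bump-and-revalue baseline.
trades, scenarios, steps = 45_000, 5_000, 100
sensitivities = 630

evals_per_full_run = trades * scenarios * steps   # 22.5 billion
naive_runs = sensitivities + 1                    # one extra run per bump
naive_evals = naive_runs * evals_per_full_run     # ~1.4e13 evaluations

combined_speedup = 200 * 50                       # SLIMs v2 x smart simulation
print(f"naive: {naive_evals:.2e} evaluations; combined speedup ~{combined_speedup:,}x")
```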

Innovation drives business transformation

Since that demo, we’ve been working on getting the XVA sensitivities solution ready for prime time, and we’re on the verge of releasing a production version. Its speed, smart simulation, advanced methodology, modular design and ability to compile and run on commodity hardware are potential game-changers, not just from a technical perspective but in real business terms.

For example, the results we’re seeing suggest that a bank could expect to complete the calculation of thousands of XVA sensitivities within a relatively small batch window of one to two hours. That would make multiple intra-day calculations a very real option.

As a result, hedge traders on an XVA desk would be able to react much more rapidly as the market moves, giving them the ability to re-hedge their positions before their competitors even have time to assess the situation.

Similarly, by providing quick and easy profit-and-loss attribution analysis, the solution could give users greater visibility into the pricing factors that drive P&L changes, helping them make smarter decisions. It would also enable banks to run many more stress tests, for example to gauge the effect of a crisis in different geographical regions.
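As a first-order illustration of risk-based P&L attribution: with a sensitivity vector in hand, the day’s P&L can be explained factor by factor as sensitivity times market move, a standard Taylor-expansion technique. The figures and factor names below are invented, and this post does not publish IBM’s exact attribution methodology:

```python
# Explained P&L = sum over factors of sensitivity x market move.
sensitivities = {"USD-rates": 12_500.0, "EURUSD": -8_000.0, "CDX-spread": 3_200.0}
market_moves  = {"USD-rates": 0.0004,   "EURUSD": 0.0120,   "CDX-spread": -0.0015}

attribution = {f: sensitivities[f] * market_moves[f] for f in sensitivities}
for factor, pnl in attribution.items():
    print(f"{factor:>11}: {pnl:+10.2f}")
print(f"{'explained':>11}: {sum(attribution.values()):+10.2f}")
```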

By focusing on optimizing XVA sensitivity calculations, we at IBM aren’t just unlocking a way to reduce hardware requirements and operational costs; we’re laying a foundation for banks to gain true competitive advantage.

Global Head, IBM Algo One Offering Manager, Financial Risk in Watson Financial Services

Mathew Dear

Offering Manager, Industry Platforms Services
