
Thinking in cloud microservices – Reshaping how firms architect risk and investment management


One of the privileges of the innovation team is that we get to evangelize concepts that are poised to reshape how our business functions. For the last year or two, we’ve been encouraging and enabling our organization to “think in microservices”. While fundamentally an architectural paradigm, technology designed as microservices has the potential to change both how clients use the tools at their disposal and how they do their jobs. Designing a microservices architecture is not without its challenges, as there is no standard playbook to refer to. Implemented appropriately, though, microservices give firms technology infrastructure that is simpler to manage, uses resources more efficiently, and therefore provides additional value at a lower cost.

What is a microservice?

Microservices are a variation of service-oriented architecture in which an application functions as a group of loosely coupled services. Microservices tend to do one thing and do it well, in contrast with large monolithic applications that encompass entire workflows. This is often reinforced by having a small, “two pizza” team behind each service. The result is more modular software: workflows can be constructed by stacking multiple microservices together like Lego bricks. Rather than designing a service to fit a particular workflow, microservices tend to be architected to perform a subset of a workflow so that they can be reused across several disparate workflows that each require a similar function somewhere along the way.
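
To make “do one thing and do it well” concrete, here is a minimal sketch of what a single-purpose microservice might look like. The valuation endpoint, the payload fields and the choice of Flask are purely illustrative assumptions, not a description of any particular product.

    # A hypothetical single-purpose microservice (illustrative only): it exposes
    # one job (valuing a set of positions under a market shock) and nothing else,
    # so other workflows can stack it with different services like a Lego brick.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/value", methods=["POST"])
    def value_positions():
        payload = request.get_json()
        positions = payload["positions"]      # e.g. [{"price": 101.5, "quantity": 200}, ...]
        shock = payload.get("shock", 0.0)     # e.g. -0.05 for a 5% market drop
        value = sum(p["price"] * (1 + shock) * p["quantity"] for p in positions)
        return jsonify({"portfolio_value": value})

    if __name__ == "__main__":
        app.run(port=8080)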

It’s difficult to find oneself in a conversation about microservices that doesn’t involve the cloud. With the availability of seemingly limitless computational resources on the cloud, a microservices architecture is the natural software complement to leverage that supply. Microservices enable portability: services with a smaller footprint can more easily be run on cloud container platforms like Kubernetes, relieving the software team of having to manage hardware directly. If a workflow is compartmentalized into microservices, then perhaps only a subset of the workflow needs to be running at any given time. In a cloud-focused world where products are increasingly offered as pay-as-you-go, this is an opportunity to minimize costs down to the precise resources required to fulfill a task.
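
As a rough, back-of-the-envelope sketch of that cost argument (every rate and usage figure below is invented for illustration), compare an always-on monolith sized for peak load with services billed only for the hours they actually run:

    # Hypothetical comparison: an always-on monolith vs. pay-as-you-go microservices.
    # All prices and usage hours are assumptions for illustration only.
    RATE_PER_CORE_HOUR = 0.05                     # assumed cloud price, USD

    monolith_cores, monolith_hours = 16, 24 * 30  # sized for peak, running all month
    microservices = {                             # (cores, hours actually used per month)
        "scenario_generation": (8, 40),
        "valuation":           (32, 10),
        "reporting":           (4, 20),
    }

    monolith_cost = monolith_cores * monolith_hours * RATE_PER_CORE_HOUR
    micro_cost = sum(cores * hours * RATE_PER_CORE_HOUR
                     for cores, hours in microservices.values())
    print(f"always-on monolith:     ${monolith_cost:.2f} per month")  # $576.00
    print(f"pay-as-you-go services: ${micro_cost:.2f} per month")     # $36.00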

Our experience with microservices

We’ve been quite vocal about our own transition to a risk management microservices architecture. We’ve distilled portfolio optimization and economic scenario generation into user-friendly APIs, so that each can be leveraged as a stand-alone service. Each is designed to use the precise amount of computational resources – cores, memory and storage – needed to perform a job. Simulating analytics on financial instruments lends itself to parallelization, so a lightweight microservice was created that uses a large number of smaller instances to process large jobs instead of managing a few larger instances. To enhance the user experience of each of these services, a separate microservice was built to manage financial data, removing what is often a heavy burden for users.
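
The fan-out pattern behind that simulation service can be sketched in a few lines. The toy pricing function and the local process pool below are stand-ins for the real workers and models, chosen only to illustrate splitting one big job into many small, identical tasks:

    # Illustrative fan-out: split one large simulation job into many small tasks
    # rather than running it on a few large machines. A local process pool stands
    # in for the lightweight worker instances; the pricing model is a toy.
    from concurrent.futures import ProcessPoolExecutor
    import random

    def average_simulated_value(instrument_id: int, n_scenarios: int) -> float:
        """Toy stand-in for a pricing model: mean simulated value of one instrument."""
        rng = random.Random(instrument_id)
        return sum(100 * (1 + rng.gauss(0, 0.02)) for _ in range(n_scenarios)) / n_scenarios

    if __name__ == "__main__":
        instruments = list(range(100))                    # the "large job"
        with ProcessPoolExecutor(max_workers=8) as pool:  # many small workers
            values = list(pool.map(average_simulated_value,
                                   instruments, [1_000] * len(instruments)))
        print(f"valued {len(values)} instruments across the pool")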

Each of the services can be used in a stand-alone fashion or coupled with other services to achieve a variety of risk management objectives. Want to know what would happen to your portfolio if there were a big market move? Combine economic scenario generation with financial modeling. Don’t have the data? Layer in the data service. Want to know what to do if the results fall beyond your risk limits? Add the portfolio optimization service to suggest an optimal hedge that manages the risk of the projected outcome.
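
Stitched together, such a workflow might look something like the sketch below. The URLs, routes, payloads and field names are all hypothetical; they exist only to show how stand-alone services compose into one risk workflow.

    # Hypothetical composition of stand-alone services into one workflow.
    # Every endpoint and field below is invented for illustration.
    import requests

    BASE = "https://risk-platform.example.com"

    # 1. Pull positions from the data service (optional if you already have data).
    portfolio = requests.get(f"{BASE}/data/portfolios/123").json()

    # 2. Generate economic scenarios for a large market move.
    scenarios = requests.post(f"{BASE}/scenarios/generate",
                              json={"shock": "equity_down_20pct", "count": 10_000}).json()

    # 3. Value the portfolio under those scenarios with the financial modeling service.
    results = requests.post(f"{BASE}/modeling/value",
                            json={"portfolio": portfolio, "scenarios": scenarios}).json()

    # 4. If the projected loss breaches a limit, ask the optimization service for a hedge.
    if results["var_99"] > portfolio["risk_limit"]:
        hedge = requests.post(f"{BASE}/optimization/hedge",
                              json={"portfolio": portfolio, "results": results}).json()
        print("suggested hedge trades:", hedge["trades"])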

A challenge often arises when developing new features for such a platform, and this is where the need to “think in microservices” presents itself. There is no standard for how big and multi-faceted a service can get before it’s no longer considered a microservice. Each additional capability prompts the team debate of “do we create a new service?” versus “do we add a feature or an endpoint to one of our existing services?” Having iterated through this a few times, we’ve found our team gravitating toward the following rules of thumb:

  • Does it have value as a stand-alone product?

If it simply enhances the usability of one existing service, then perhaps it’s best deployed as a feature. If many services could benefit from the same capability, it might have value as a stand-alone microservice.

  • Does the feature set have to be maintained on a different schedule than other features of the same service?

This tends to crop up when discussing integration points. If you’re building hooks into third-party data services like portfolio management systems, those hooks must be updated along with their source systems. The same is true of third-party communication platforms like messaging and chat tools. Keeping these integrations in their own services properly isolates the analytics logic, which might need to span several distinct integration points, from the dissemination of results, which is often tied to an external API.

  • Does the feature set have distinct variable costs?

Different tasks often use different types of resources. Simulations and analytics are compute intensive and therefore primarily consume compute resources, persistence consumes storage, and reporting consumes memory. Data, as the natural resource, might be charged by the item (see the pricing sketch after this list).

Combining two of the above into a single service can make pricing opaque to end users, leaving room for hidden premiums or creating deadweight loss when clients are looking to purchase a solution whose pricing spans more than one cost model.
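
One way to preserve that transparency, sketched below with invented rates, is to bill each service strictly against its primary cost driver, so a client’s invoice decomposes cleanly by service:

    # Hypothetical pricing aligned to each service's primary cost driver.
    # All rates and usage figures are invented for illustration.
    PRICE = {
        "compute_core_hour": 0.05,    # simulation / analytics services
        "storage_gb_month":  0.02,    # persistence service
        "memory_gb_hour":    0.01,    # reporting service
        "data_item":         0.001,   # data service, charged per item delivered
    }

    def invoice(usage: dict) -> float:
        """Sum usage against the single cost driver each service exposes."""
        return sum(PRICE[driver] * amount for driver, amount in usage.items())

    monthly_usage = {"compute_core_hour": 400, "storage_gb_month": 250,
                     "memory_gb_hour": 120, "data_item": 50_000}
    print(f"transparent monthly bill: ${invoice(monthly_usage):.2f}")   # $76.20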

Thinking in Microservices

Aligning and exposing services by their primary cost driver starts to accrue some non-technical benefits. It allows for use cases where software can be coupled with cloud hardware provisioned by the hour, minute or second. These use cases may make for more attractive offerings than large, fixed subscriptions or licenses. The stickiness of incumbent solutions, buffered by the implicit risks and costs of “ripping and replacing” whole systems, gets eroded by solutions that can more dynamically fill gaps in a workflow when it would otherwise be impractical to replace a system for a few missing features.

As financial services firms start to think in microservices and embrace this more efficient pricing and deployment mechanism, the industry could trend toward a more efficient allocation of technology resources. Total cost of ownership, once a three-way conversation with software vendors, hardware vendors, and data providers, might be reduced through the transparency of each microservice offering.

If the total cost of implementation is known and made more granular through microservices, it allows for larger-scale analysis of industry processes. If simulations use only as much cloud hardware as they require, it’s easier to determine the exact cost of running a given simulation. You could take it a step further and understand the energy draw required to power the hardware behind that one simulation. The need for these simulations often arises from attempting to answer a given question or to get a more accurate bead on what the future might bring. If you extend the same logic to all firms in the financial services industry, you might be able to answer really interesting questions like, “What’s the environmental impact of increasing the Monte Carlo burden within financial regulation from 10,000 to 100,000 scenarios?”
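
To show the shape of that calculation (and nothing more), here is a deliberately crude model in which every figure is an assumption. With per-run usage metered by the microservices themselves, the same arithmetic could be done with real numbers:

    # Crude, hypothetical estimate of what a 10x increase in Monte Carlo scenarios
    # could mean in compute cost and energy, assuming linear scaling.
    # Every figure below is an assumption for illustration, not a measurement.
    CORE_HOURS_PER_10K_SCENARIOS = 2.0        # assumed, per portfolio per run
    RATE_PER_CORE_HOUR = 0.05                 # assumed cloud price, USD
    WATTS_PER_CORE = 10.0                     # assumed average draw per active core
    PORTFOLIOS, RUNS_PER_YEAR = 5_000, 250    # assumed industry-wide workload

    def annual_cost_and_energy(scenarios: int) -> tuple[float, float]:
        core_hours = (CORE_HOURS_PER_10K_SCENARIOS * scenarios / 10_000
                      * PORTFOLIOS * RUNS_PER_YEAR)
        return core_hours * RATE_PER_CORE_HOUR, core_hours * WATTS_PER_CORE / 1_000

    for n in (10_000, 100_000):
        cost, kwh = annual_cost_and_energy(n)
        print(f"{n:>7,} scenarios: ${cost:,.0f} and {kwh:,.0f} kWh per year")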

Thinking in microservices provides a more efficient foundation for building technology infrastructure, streamlines cost models to encompass total cost of ownership, and offers more insight into what actually goes into answering fundamental risk management questions, both within a firm and across the financial services industry.

To read more about IBM’s point of view on applying emerging technologies like cloud, big data, advanced analytics and aggregation, and AI to help modernize your risk management capabilities and improve outcomes, download our white paper, “A new era of technology-enabled financial risk management”.

Offering Manager, Watson Financial Services at IBM
