$13 trillion is a huge sum of money, difficult to comprehend or compare. But according to models run by McKinsey, that is the total economic impact AI will add to the global economy over the next 10 years.
In financial services, AI is the future, one observer writes after evaluating a new study conducted by Deloitte. For instance, AI models can incorporate big data (more types of data from multiple sources) and use it to make predictions about who should receive a loan or insurance policy. The Deloitte study noted that 70 percent of the responding financial services firms were already using AI models in production.
Any firm using AI must revisit its approach to model risk management, because AI models evolve faster than the rules-based models that were previously standard. When an AI model performs inadequately, major operational losses can accumulate quickly.
Relevance of SR 11-7 and other regulations
The Federal Reserve Board and the Office of the Comptroller of the Currency issued joint supervisory guidance on model risk management, “Supervisory Guidance on Model Risk Management” (SR 11-7), in 2011. This guidance has become the key reference for financial institutions around the world that need to manage model risk. It initially targeted large institutions with more than $50 billion in assets.
Additional regulation has focused on smaller institutions. The Federal Deposit Insurance Corporation (FDIC) announced its adoption of SR 11-7 in 2017 (FIL-22-2017) and reduced the minimum threshold for compliance to banks with only $1 billion in assets.
Issues when using AI models
With guidance such as SR 11-7 in mind, financial organizations face unique challenges when using AI models across the model lifecycle. These include:
Bias: If AI models are trained on data containing unwanted biases, they can produce biased decisions. This can become a major issue if the bias is not detected and mitigated. Loan models reflecting bias have resulted in minority groups receiving higher mortgage rates, for instance, and the organizations that used those models have received substantial fines.
Explainability: Traditional statistical models are simpler to interpret and explain than today’s AI models. AI can be a black box, making it difficult for an organization to explain to internal users or customers why a particular loan application or underwriting decision was approved or denied. In regulated financial industries especially, AI model decisions need to be transparent.
Model drift: AI models can drift in accuracy within days of being put into production, because production data inevitably differs from training data. This heightens risk and can affect business KPIs, so drift must be detected and mitigated.
Increased documentation and governance: Organizations change and adjust AI models more often than rules-based models, because AI technologies such as AutoAI are advancing quickly. To assist with governance, it is critical to have automatic documentation of changes to models and data in place.
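One simple way to surface the kind of bias described above is to compare model outcomes across groups. The sketch below uses synthetic approval records and an illustrative demographic-parity check with a hypothetical 0.1 alert threshold; it is not OpenScale's method, just a minimal example of the idea:

```python
# Hypothetical loan approval records: (group, approved).
# A demographic-parity check compares approval rates between groups;
# a large gap is a signal that the model may be biased.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Fraction of applications from `group` that were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(records, "group_a")  # 3 of 4 approved
rate_b = approval_rate(records, "group_b")  # 1 of 4 approved
disparity = abs(rate_a - rate_b)

if disparity > 0.1:  # illustrative alert threshold
    print(f"Possible bias: approval rates differ by {disparity:.2f}")
```

In practice such a check would run over production transaction logs, and a flagged disparity would prompt deeper investigation rather than an automatic conclusion of bias.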
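Model drift, described above, is often monitored by comparing the distribution of incoming production data against the training data. A minimal sketch using the population stability index (PSI) follows; the data, the 0.2 alert threshold, and the approach itself are illustrative rules of thumb, not OpenScale's implementation:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production feature's distribution (`actual`) to its
    training distribution (`expected`). Higher PSI = more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty buckets at a tiny probability to avoid log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(600, 50, 10_000)    # e.g. credit scores at training time
production_scores = rng.normal(550, 60, 10_000)  # distribution has shifted in production

psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # a common rule-of-thumb alert level
    print(f"Drift alert: PSI = {psi:.3f}")
```

A monitoring job would compute this per feature on each batch of production transactions and raise an alert, much as the drift detection described later does for model accuracy.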
What does IBM Watson OpenScale do?
IBM Watson OpenScale was created to help organizations address challenges such as the ones described above. OpenScale helps validate and monitor AI models to enhance compliance and assist in mitigating business risk.
OpenScale can also explain AI outcomes in business-friendly language at runtime. It automatically detects the factors contributing to the model’s decision and shows these graphically. And it uses proprietary technology from IBM Research to provide a “contrastive explanation”: a description of the minimum changes that would flip a transaction’s outcome. For example, a loan application might be changed from “risk” to “no risk” if the applicant increased a savings account balance from $500 to $1,000. This type of explanation can be helpful not only to customers, but also to internal stakeholders as they evaluate their AI models. See more about making AI explainable in this short video.
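The idea behind a contrastive explanation can be illustrated with a toy example. The risk model and the search below are hypothetical (they are not OpenScale's algorithm): starting from a rejected application, search for the smallest change to one feature that flips the decision:

```python
# Toy risk model: flags an application as "risk" when savings are low
# relative to the loan amount. Purely illustrative.
def is_risky(savings: float, loan_amount: float) -> bool:
    return savings < 0.10 * loan_amount

def minimal_savings_to_flip(savings, loan_amount, step=100.0, max_steps=1000):
    """Find the smallest increase in savings (in `step` increments)
    that changes the outcome from 'risk' to 'no risk'."""
    if not is_risky(savings, loan_amount):
        return 0.0  # already "no risk"; no change needed
    for i in range(1, max_steps + 1):
        if not is_risky(savings + i * step, loan_amount):
            return i * step
    return None  # no flip found within the search budget

# A $10,000 loan with $500 in savings is flagged as risk; the contrastive
# explanation is the minimum additional savings that flips the decision.
delta = minimal_savings_to_flip(savings=500.0, loan_amount=10_000.0)
print(f"Increase savings by ${delta:,.0f} to flip the decision")
```

For this toy model the answer is an extra $500 (raising savings from $500 to $1,000), matching the example above; real contrastive-explanation techniques search across many features of a black-box model rather than one threshold.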
Watson OpenScale can also detect and correct AI model accuracy drift. When drift exceeds an acceptable threshold in production, OpenScale generates an alert and identifies the transactions that are causing it; these transactions can be used to retrain the model and improve its accuracy. Learn more in this short video on mitigating drift.
AI model risk management: Join the Watson OpenScale private beta program
Financial services organizations have additional challenges in AI model risk management, and the Watson OpenScale product team is developing new features to help in this area. Would you like to test these features and offer feedback? Register here for the early access program today.