Protect against model drift and bias to ensure your AI is accurate, explainable and governed on any cloud
The AI models in your organization need to be trusted by stakeholders, customers and regulators. To help ensure your models are fair and accurate and manage their explainability and potential risk, you should be able to meet these four challenges:
- Can you explain outcomes?
- Are you sure your models are fair and don’t discriminate?
- Do your models stay accurate over time?
- Can you generate automated documentation for your models, data and testing?
According to a Forrester Total Economic Impact study of four major enterprises, meeting these challenges can result in the following projected benefits:
- Increased total profits of $4.1 million to $15.6 million over three years, due to higher model productivity
- A 35% to 50% reduction in model monitoring effort, due to automated controls
- A 15% to 30% increase in model accuracy, due to automated monitoring
Explainable AI and model monitoring capabilities (provided by Watson OpenScale) on IBM Cloud Pak for Data help you operationalize AI and ensure your models are trusted and transparent, on any cloud. Let’s take a closer look at each of these four capabilities.
1. Explain AI outcomes
A person applies for a bank loan but the application is denied. The bank’s AI model, trained on loan histories from thousands of applicants, has predicted the loan would be a risk. The applicant wants to know why, and regulations such as the Fair Credit Reporting Act and GDPR require that the bank be able to explain.
The problem is that many AI models are opaque, and explaining an individual prediction has not been easy. IBM makes the explanation visual, showing graphically which factors most influenced the prediction, and describes it onscreen in business-friendly language. IBM-proprietary technology identifies the minimum changes an applicant could make to receive the opposite prediction, in this case “no risk.” That capability enables a bank representative to discuss with the applicant the specific changes that would help secure the loan. Watch the video below to see these capabilities in action:
When AI predictions can be examined easily, “you get more transparency,” comments a global analytics lead in the consulting services industry, as noted in the Forrester Total Economic Impact Study. “Explainable AI in Cloud Pak for Data helps you explain to the business lines the outcomes you’re getting and why. It saves time explaining these highly data-intensive outcomes, and it automates it in such a way that it’s easier to understand.”
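To make the idea concrete, here is a minimal sketch of a “minimum change” (contrastive) explanation, assuming a toy linear risk model with made-up weights. The actual IBM technology is proprietary and works against arbitrary deployed models, so every feature name and number below is purely illustrative:

```python
# Toy linear risk model: hypothetical weights over applicant features.
WEIGHTS = {"debt_ratio": 2.0, "missed_payments": 1.5, "income_k": -0.02}
THRESHOLD = 1.0  # scores above this are predicted "risk"

def risk_score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def predict(applicant):
    return "risk" if risk_score(applicant) > THRESHOLD else "no risk"

def minimum_change(applicant, feature, step, max_steps=100):
    """Search for the smallest change to one feature that flips the prediction."""
    original = predict(applicant)
    changed = dict(applicant)
    for i in range(1, max_steps + 1):
        changed[feature] = applicant[feature] + i * step
        if predict(changed) != original:
            return changed[feature]  # first value that flips the outcome
    return None

applicant = {"debt_ratio": 0.6, "missed_payments": 1, "income_k": 55}
print(predict(applicant))  # risk
print(minimum_change(applicant, "debt_ratio", -0.05))
```

A bank representative could then translate the returned value into concrete advice, such as the debt ratio the applicant would need to reach for a “no risk” prediction.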
2. Detect and mitigate AI model bias
An AI model can be only as fair as its training data, and training data can contain unintended bias that skews its results. A bank running automated tests on its models noticed that one model was approving loans for 80% of male applicants but only 70% of female applicants. In the background, Cloud Pak for Data checks for bias by changing a protected attribute such as “male” to “female,” keeping all other transaction information the same, and re-running the transaction through the model. If the prediction changes, bias is likely to be present.
The solution analyzes the training data for this model and reveals that it contained a smaller sample of loan histories for women than for men, leading to gender bias. It can also automatically create a debiased model that mitigates the detected bias. See the video below for more details on bias mitigation, and learn more about AI fairness in this eBook.
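The perturbation check described above, flipping the protected attribute while holding everything else fixed, can be sketched in a few lines. The model, fields and threshold here are hypothetical, purely to illustrate the mechanism:

```python
def loan_model(txn):
    # Hypothetical model that has learned an unintended gender penalty.
    score = txn["credit_score"] / 100.0
    if txn["gender"] == "female":
        score -= 1.0  # bias picked up from imbalanced training data
    return "approved" if score >= 6.0 else "denied"

def biased_for(txn, attribute, value_a, value_b):
    """Flip only the protected attribute; a changed outcome suggests bias."""
    flipped = dict(txn)
    flipped[attribute] = value_b if txn[attribute] == value_a else value_a
    return loan_model(txn) != loan_model(flipped)

txn = {"credit_score": 640, "gender": "female"}
print(loan_model(txn))                              # denied
print(biased_for(txn, "gender", "male", "female"))  # True
```

Because the two transactions differ only in the protected attribute, a different outcome is strong evidence that the attribute is influencing the prediction.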
3. Detect and mitigate drift in accuracy
The accuracy of an AI model can degrade within days of deployment because production data differs from the model’s training data. This can lead to incorrect predictions and significant risk exposure. When a model’s accuracy decreases (or drifts) below a pre-set threshold, Cloud Pak for Data generates an alert. It also identifies the transactions that caused the drift, enabling them to be relabeled and used to retrain the model, restoring its predictive accuracy at runtime.
“Our models are now more accurate, which means we can better forecast our required cash reserve requirements,” notes a data scientist in the financial services industry in the Forrester Total Economic Impact Study. “A 1% improvement in accuracy frees up millions of dollars for us to lend or invest.”
See the video above for more details on mitigating drift, and register to watch how to minimize AI model drift.
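The drift alerting described above can be sketched as a simple threshold check on recent labeled transactions, with the misclassified ones collected for relabeling and retraining. The threshold and records here are illustrative assumptions:

```python
ACCURACY_THRESHOLD = 0.8  # pre-set alert threshold (illustrative)

def check_drift(scored_transactions):
    """Compute accuracy on labeled feedback and flag transactions to relabel."""
    correct = sum(1 for t in scored_transactions
                  if t["predicted"] == t["actual"])
    accuracy = correct / len(scored_transactions)
    drifted = [t for t in scored_transactions
               if t["predicted"] != t["actual"]]
    return accuracy, accuracy < ACCURACY_THRESHOLD, drifted

batch = [
    {"id": 1, "predicted": "risk", "actual": "risk"},
    {"id": 2, "predicted": "no risk", "actual": "risk"},
    {"id": 3, "predicted": "risk", "actual": "no risk"},
    {"id": 4, "predicted": "no risk", "actual": "no risk"},
]
accuracy, alert, to_relabel = check_drift(batch)
print(accuracy, alert, [t["id"] for t in to_relabel])  # 0.5 True [2, 3]
```

The flagged transactions are exactly the ones worth relabeling and feeding back into retraining.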
4. Automate model testing and synchronize with systems of record
AI models need to be tested periodically throughout their lifecycle. To automate the testing required for model risk management, Cloud Pak for Data enables you to:
- Validate models in pre-production with tests such as detecting bias and drift
- Automatically execute tests and generate test reports
- Compare the performance of candidate and challenger models side by side
- Transfer successful pre-deployment test configurations for a model to the deployed version of the model and continue automated testing
- Synchronize model, data and test result information with systems of record such as IBM OpenPages Model Risk Governance, and generate an AI FactSheet. See Figure 1.
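As a rough illustration, an automated validation run like the one described can be thought of as a pipeline of tests whose pass/fail results roll up into a report for a system of record. The test names, metrics and thresholds below are hypothetical, not the product’s API:

```python
def fairness_test(model_id):
    # Hypothetical measurement: ratio of female to male approval rates.
    disparate_impact = 0.875  # e.g. 70% female approvals / 80% male approvals
    return {"test": "fairness", "metric": disparate_impact,
            "passed": disparate_impact >= 0.8}

def drift_test(model_id):
    # Hypothetical accuracy measured on recent labeled transactions.
    accuracy = 0.91
    return {"test": "drift", "metric": accuracy, "passed": accuracy >= 0.8}

def run_validation(model_id, tests):
    """Run every test and produce a report a system of record could ingest."""
    results = [t(model_id) for t in tests]
    return {"model": model_id, "results": results,
            "approved": all(r["passed"] for r in results)}

report = run_validation("loan-risk-v2", [fairness_test, drift_test])
print(report["approved"])  # True
```

The same test configuration can then be carried over to the deployed model so monitoring continues automatically after promotion.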
Get a quick guided tour of how to automate model validation tests and documentation. For an overview of AI governance, explore the interactive guide “AI governance: Ensure your AI is transparent and trustworthy”.
Start managing your AI models on any cloud
Organizations are deploying AI models across hybrid, multicloud environments. Wherever your models run, whether on IBM Cloud, AWS, Microsoft Azure or Google Cloud, you can use IBM Cloud Pak for Data to monitor them and help ensure their explainability, accuracy and fairness while managing model risk.