September 19, 2018 | Written by: Ruchir Puri
Categorized: AI for the Enterprise
At IBM, we’ve seen our customers use AI as a catalyst to reimagine their workflows – transforming how customer call centers operate, how people complete their taxes, and how legal professionals make data privacy compliance decisions. However, many organizations still struggle to deploy their AI into production environments across their existing applications.
For AI to thrive and for businesses to reap its benefits, executives need to trust their AI systems. They need capabilities to manage those systems and to detect and mitigate bias. It’s critical – oftentimes a legal imperative – that transparency is brought into AI decision-making. In the insurance industry, for example, claims adjusters may need to explain to a customer why their auto claim was rejected by an automated processing system.
It’s time to start breaking open the black box of AI to give organizations confidence in their ability to manage those systems and explain how decisions are being made.
This is why we’ve announced the release of IBM’s new Trust and Transparency capabilities for AI on the IBM Cloud. Built with technology from IBM Research, these capabilities provide visibility into how AI is making decisions and give recommendations on how to mitigate any potentially damaging bias. They feature a visually clear dashboard that line-of-business users can easily understand, easing the burden of accountability on data scientists and empowering business users.
Accelerating trusted AI across your business
Our goal is to empower businesses to infuse their AI with trust and transparency, thus building confidence to deploy AI systems in production.
Our new Trust and Transparency capabilities for AI on the IBM Cloud support models built in any IDE or with popular, open source ML frameworks like TensorFlow, Keras, and SparkML, among others. Once deployed, those models can be monitored and managed by our capabilities at runtime – while the AI decisions are being made.
So, let’s say you build and deploy a system of models supporting an API your business can call whenever you need a prediction. Our new capabilities hook into those models and instrument a layer that enables you to capture every input your models receive and every output your models produce.
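The instrumentation layer described above can be sketched roughly as a wrapper around a deployed model's prediction call. This is a minimal illustration only, assuming a hypothetical `TransactionLogger` class and JSON-lines log file; it is not IBM's actual implementation or API.

```python
import json
import time
import uuid

class TransactionLogger:
    """Hypothetical wrapper that records every input/output pair a model
    handles, so each transaction can later be audited and explained.
    A simplified sketch, not IBM's actual instrumentation."""

    def __init__(self, model, log_path="transactions.jsonl"):
        self.model = model
        self.log_path = log_path

    def predict(self, features):
        # Run the underlying model, then append the full transaction
        # (input, output, id, timestamp) to an audit log.
        prediction = self.model.predict(features)
        record = {
            "transaction_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "input": features,
            "output": prediction,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return prediction
```

An application would call `TransactionLogger.predict` wherever it previously called the model directly, leaving the model itself untouched.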
Our new capabilities provide a level of transparency, auditability, and explainability by logging every individual transaction throughout a model’s operational life. The lineage of these models is presented to business users at runtime in a way that’s easy to understand – something that’s unattainable with the tools available today.
Fairness is a key concern among enterprises deploying AI into apps that make decisions based on user information. The reputational damage and legal impact resulting from demonstrated bias against any group of users can be seriously detrimental to businesses. AI models are only as good as the data used to train them, and developing a representative, effective training data set is very challenging.
In addition, even if such biases are identified during training, the model may still exhibit bias at runtime. This can result from incongruities in optimization caused by the assignment of different weights to different features.
As they log each transaction, the Trust and Transparency capabilities feed inputs and outputs into a battery of state-of-the-art, bias-checking algorithms developed by IBM Research to track bias at runtime. If bias is detected, the capabilities provide bias mitigation recommendations in the form of corrective data, which can be used to incrementally retrain the model for redeployment.
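One common group-fairness check that such runtime monitoring could apply is the disparate-impact ratio: compare the rate of favorable outcomes between privileged and unprivileged groups in the logged transactions, against a configurable threshold. The function below is an illustrative sketch of that idea, not one of the specific IBM Research algorithms; the attribute names and the 0.8 default threshold (the common "four-fifths rule") are assumptions.

```python
def disparate_impact(transactions, protected_attr, privileged_value,
                     favorable_outcome, threshold=0.8):
    """Compute the disparate-impact ratio over logged transactions:
    (favorable rate for the unprivileged group) /
    (favorable rate for the privileged group).
    Returns the ratio and whether it falls below the threshold."""
    privileged = [t for t in transactions
                  if t["input"][protected_attr] == privileged_value]
    unprivileged = [t for t in transactions
                    if t["input"][protected_attr] != privileged_value]

    def favorable_rate(group):
        if not group:
            return 0.0
        return sum(1 for t in group
                   if t["output"] == favorable_outcome) / len(group)

    # Guard against division by zero when the privileged rate is 0.
    ratio = favorable_rate(unprivileged) / max(favorable_rate(privileged), 1e-9)
    return ratio, ratio < threshold
```

A monitoring loop could run this check periodically over the most recent window of transactions and raise an alert whenever the second return value is true.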
We address the problem of bias detection by automatically analyzing transactions against adjustable bias thresholds: the value of a protected attribute is varied while the values of all other attributes remain exactly the same. Traditional methods of measuring bias at build time require techniques that are computationally prohibitive at runtime for a complex AI system.
These capabilities incorporate novel techniques developed by IBM Research that automatically synthesize data in order to compute bias on a continuous basis. These techniques combine symbolic execution with local explainability to generate effective test cases, creating highly flexible, efficient, and comprehensive bias detection capabilities.
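The core of this individual-discrimination test can be illustrated simply: for each logged input, flip only the protected attribute while holding every other attribute fixed, and flag the input if the prediction changes. The sketch below is a naive enumeration stand-in for the symbolic-execution-based test generation described above; the function name and signature are hypothetical.

```python
def individual_bias_cases(model, transactions, protected_attr, values):
    """Flag logged inputs whose prediction changes when only the protected
    attribute is altered, all other attribute values held exactly the same.
    A simplified stand-in for IBM Research's automated test generation."""
    flagged = []
    for t in transactions:
        base = dict(t["input"])
        baseline = model.predict(base)
        for v in values:
            if v == base[protected_attr]:
                continue
            # Perturb only the protected attribute.
            variant = dict(base)
            variant[protected_attr] = v
            if model.predict(variant) != baseline:
                flagged.append((base, variant))
                break
    return flagged
```

Each flagged pair is itself corrective evidence: the variant inputs can feed the incremental retraining loop described earlier.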
Explainability enables business ownership
Our Trust and Transparency capabilities bridge the gap between data scientists, developers, and business users within an organization, providing them visibility into what’s happening in their AI systems.
Through an intuitive visual dashboard, businesses can access easy-to-understand explanations of transactions. They can simply type the transaction ID – which can be passed down from an application into the user interface – to get details about the features used to make a decision, the limits, the inputs passed, and most importantly, the confidence level of each factor contributing to the decision the AI helped make.
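To make the "confidence level of each factor" idea concrete, the sketch below shows what such an explanation record might look like for the simplest possible case, a linear scoring model, where each feature's contribution is its weight times its input value. This is purely illustrative: the function, the record layout, and the contribution-share notion of confidence are assumptions, not the method behind the actual dashboard.

```python
def explain_transaction(record, weights):
    """Hypothetical per-transaction explanation for a linear scoring model:
    each feature's contribution is weight * value, and its 'confidence' is
    its share of the total absolute contribution. Illustrative only."""
    contributions = {f: weights[f] * v
                     for f, v in record["input"].items() if f in weights}
    total = sum(abs(c) for c in contributions.values()) or 1.0
    return {
        "transaction_id": record["transaction_id"],
        # Factors sorted so the most influential feature comes first.
        "factors": sorted(
            ({"feature": f, "contribution": c, "confidence": abs(c) / total}
             for f, c in contributions.items()),
            key=lambda d: -d["confidence"],
        ),
    }
```

A dashboard could render the returned factors as a ranked bar chart, keyed by the transaction ID the application passed through.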
Thus, even though the business-process owner might have minimal understanding of how the model works, they can still gain insight into the decision-making process and can easily compare the model’s performance against a human decision. As a result, they can make decisions about AI model health and recognize when the system might need help from data scientists.
Through these capabilities, data scientists and developers can obtain insights into the real-time performance of their models, which can also be measured and understood by business users. This provides visibility into whether models are making biased decisions and into the effect of those decisions on the enterprise. These insights include the corrective feedback data scientists can incorporate into their models to address biased behavior. IBM’s Trust and Transparency capabilities are unique in helping users understand the decisions their AI models are making at runtime – something that was not possible before.
Get started with trusted AI
If AI is truly going to augment operational decision-making, it’s critical to make AI outcomes transparent and explainable – not only to data science teams, but also to the line-of-business user responsible for those decisions.
IBM’s new Trust and Transparency capabilities for AI are available on the IBM Cloud. Learn more here.