What is explainable AI?

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. 

Explainable AI is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.

As AI becomes more advanced, humans are challenged to comprehend and retrace how an algorithm arrived at a result. The whole calculation process is turned into what is commonly referred to as a “black box” that is impossible to interpret. These black box models are created directly from the data, and not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside them or how the algorithm arrived at a specific result.

There are many advantages to understanding how an AI-enabled system arrived at a specific output. Explainability can help developers ensure that the system is working as expected; it might be necessary to meet regulatory standards; or it might be important in allowing those affected by a decision to challenge or change that outcome.¹

Why explainable AI matters

It is crucial for an organization to have a full understanding of its AI decision-making processes, with model monitoring and accountability, rather than trusting them blindly. Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning and neural networks.

ML models are often thought of as black boxes that are impossible to interpret.² Neural networks used in deep learning are some of the hardest for a human to understand. Bias, often based on race, gender, age or location, has been a long-standing risk in training AI models. Further, AI model performance can drift or degrade because production data differs from training data. This makes it crucial for a business to continuously monitor and manage models to promote AI explainability while measuring the business impact of using such algorithms. Explainable AI also helps promote end user trust, model auditability and productive use of AI. It also mitigates compliance, legal, security and reputational risks of production AI.

Explainable AI is one of the key requirements for implementing responsible AI, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability.³ To help adopt AI responsibly, organizations need to embed ethical principles into AI applications and processes by building AI systems based on trust and transparency.

How explainable AI works

With explainable AI, as well as interpretable machine learning, organizations gain access to the AI technology’s underlying decision-making and are empowered to make adjustments. Explainable AI can improve the user experience of a product or service by helping the end user trust that the AI is making good decisions. When does an AI system give enough confidence in a decision that you can trust it, and how can the AI system correct errors that arise?⁴

As AI becomes more advanced, ML processes still need to be understood and controlled to ensure that AI model results are accurate. Let’s look at the difference between AI and XAI, the methods and techniques used to turn AI into XAI, and the difference between interpreting and explaining AI processes.

Comparing AI and XAI
What exactly is the difference between “regular” AI and explainable AI? XAI implements specific techniques and methods to ensure that each decision made during the ML process can be traced and explained. AI, on the other hand, often arrives at a result using an ML algorithm, but the architects of the AI systems do not fully understand how the algorithm reached that result. This makes it hard to check for accuracy and leads to loss of control, accountability and auditability.

Explainable AI techniques
XAI techniques rest on three main methods. Prediction accuracy and traceability address technology requirements, while decision understanding addresses human needs. As the U.S. Defense Advanced Research Projects Agency (DARPA) puts it, explainable AI, and especially explainable machine learning, “will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.”⁵

Prediction accuracy
Accuracy is a key component of how successful the use of AI is in everyday operation. Prediction accuracy can be determined by running simulations and comparing XAI output to the results in the training data set. The most popular technique used for this is Local Interpretable Model-Agnostic Explanations (LIME), which explains the prediction of a classifier by approximating it locally with a simple, interpretable model.
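
To make this concrete, here is a minimal sketch of LIME on a tabular classifier using the open-source lime and scikit-learn packages; the iris dataset and the random forest model are illustrative assumptions.

```python
# Minimal LIME sketch: explain one prediction of a "black box" classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train an ordinary classifier that we will treat as a black box.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs a single instance and fits a simple local surrogate model,
# so the resulting weights explain this one prediction, not the whole model.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, weight) pairs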

Traceability
Traceability is another key technique for accomplishing XAI. It is achieved, for example, by limiting the ways decisions can be made and setting up a narrower scope for ML rules and features. An example of a traceability XAI technique is DeepLIFT (Deep Learning Important FeaTures), which compares the activation of each neuron to its activation on a reference input, producing a traceable link between each activated neuron and exposing dependencies between them.
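
As an illustration, here is a minimal sketch of DeepLIFT attributions using the open-source Captum library for PyTorch; the tiny network and the all-zeros reference input are illustrative assumptions, not part of the technique itself.

```python
# Minimal DeepLIFT sketch with Captum: attribute a prediction to inputs
# by comparing activations against those on a reference ("baseline") input.
import torch
import torch.nn as nn
from captum.attr import DeepLift

# A small feed-forward network standing in for a real model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.rand(1, 4)      # the instance to explain
baseline = torch.zeros(1, 4)   # DeepLIFT's reference input

# Differences from the reference activations are propagated back through
# the network, yielding a contribution score per input feature.
attributions = DeepLift(model).attribute(inputs, baselines=baseline, target=0)
print(attributions)
```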

Decision understanding
This is the human factor. Many people distrust AI, yet to work with it efficiently, they need to learn to trust it. This is accomplished by educating the team working with the AI so that they understand how and why the AI makes decisions.

Explainability versus interpretability in AI

Interpretability is the degree to which an observer can understand the cause of a decision. It is the rate at which humans can correctly predict an AI model’s output, while explainability goes a step further and looks at how the AI arrived at the result.
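
One way to see the distinction: an intrinsically interpretable model, such as a small decision tree, can be read directly, whereas a black-box model needs a separate explanation step (such as LIME, above). Here is a minimal sketch with scikit-learn; the iris dataset is an illustrative assumption.

```python
# An intrinsically interpretable model: the fitted tree's if/else rules
# can be printed and read directly, with no separate explanation step.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    data.data, data.target
)
# Prints human-readable rules such as "petal width (cm) <= 0.80".
print(export_text(tree, feature_names=list(data.feature_names)))
```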

How does explainable AI relate to responsible AI?

Explainable AI and responsible AI have similar objectives, yet different approaches. Here are the main differences between explainable and responsible AI:

  • Explainable AI looks at AI results after the results are computed.
  • Responsible AI looks at AI during the planning stages to make the AI algorithm responsible before the results are computed.
  • Explainable and responsible AI can work together to make better AI.
Continuous model evaluation

With explainable AI, a business can troubleshoot and improve model performance while helping stakeholders understand how AI models behave. Investigating model behavior by tracking insights on deployment status, fairness, quality and drift is essential to scaling AI.

Continuous model evaluation empowers a business to compare model predictions, quantify model risk and optimize model performance. Displaying positive and negative values in model behaviors, along with the data used to generate each explanation, speeds model evaluation. A data and AI platform can generate feature attributions for model predictions and empower teams to visually investigate model behavior with interactive charts and exportable documents.
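
As a minimal illustration of one common evaluation check, the sketch below compares a feature’s training distribution against recent production data with a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 significance threshold are illustrative assumptions.

```python
# Minimal per-feature drift check with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.05:  # illustrative threshold
    print(f"Possible drift detected (KS statistic = {statistic:.3f})")
```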

Benefits of explainable AI
Operationalize AI with trust and confidence

Build trust in production AI. Rapidly bring your AI models to production. Ensure interpretability and explainability of AI models. Simplify the process of model evaluation while increasing model transparency and traceability.

Speed time to AI results

Systematically monitor and manage models to optimize business outcomes. Continually evaluate and improve model performance. Fine-tune model development efforts based on continuous evaluation.

Mitigate risk and cost of model governance

Keep your AI models explainable and transparent. Manage regulatory, compliance, risk and other requirements. Minimize overhead of manual inspection and costly errors. Mitigate risk of unintended bias.

Five considerations for explainable AI

To drive desirable outcomes with explainable AI, consider the following.

Fairness and debiasing: Manage and monitor fairness. Scan your deployment for potential biases. 
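
As a minimal illustration of such a scan, the sketch below compares the model’s positive-prediction rate across groups (the demographic parity difference); the synthetic data and the 0.2 review threshold are illustrative assumptions.

```python
# Minimal fairness scan: compare positive-prediction rates across groups.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model outputs
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
disparity = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity difference = {disparity:.2f}")
if disparity > 0.2:  # illustrative threshold, not a regulatory standard
    print("Flag deployment for bias review")
```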

Model drift mitigation: Analyze your model and make recommendations based on the most logical outcome. Alert when models deviate from the intended outcomes.

Model risk management: Quantify and mitigate model risk. Get alerted when a model performs inadequately. Understand what happened when deviations persist.
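
As a minimal illustration of such alerting, the sketch below checks a batch of recent labeled production data against an accuracy floor; the metric, the floor and the data are illustrative assumptions.

```python
# Minimal model-performance alert: flag the model when accuracy on a
# recent labeled batch falls below an agreed floor.
import numpy as np

ACCURACY_FLOOR = 0.90  # illustrative service-level target

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # recent labeled outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])  # model predictions

accuracy = float((y_true == y_pred).mean())
if accuracy < ACCURACY_FLOOR:
    print(f"ALERT: accuracy {accuracy:.2f} is below {ACCURACY_FLOOR}")
```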

Lifecycle automation: Build, run and manage models as part of integrated data and AI services. Unify the tools and processes on a platform to monitor models and share outcomes. Explain the dependencies of machine learning models.

Multicloud-ready: Deploy AI projects across hybrid clouds including public clouds, private clouds and on premises. Promote trust and confidence with explainable AI.

Use cases for explainable AI
  • Healthcare: Accelerate diagnostics, image analysis, resource optimization and medical diagnosis. Improve transparency and traceability in decision-making for patient care. Streamline the pharmaceutical approval process with explainable AI.
  • Financial services: Improve customer experiences with a transparent loan and credit approval process. Speed credit risk, wealth management and financial crime risk assessments. Accelerate resolution of potential complaints and issues. Increase confidence in pricing, product recommendations and investment services.
  • Criminal justice: Optimize processes for prediction and risk assessment. Accelerate resolutions using explainable AI on DNA analysis, prison population analysis and crime forecasting. Detect potential biases in training data and algorithms.
Related solutions 
watsonx.governance

IBM® watsonx.governance™ toolkit for AI governance allows you to direct, manage and monitor your organization’s AI activities, and employs software automation to strengthen your ability to mitigate risk, manage regulatory requirements and address ethical concerns for both generative AI and machine learning models.

IBM Cloud Pak® for Data

Modernize to automate the AI lifecycle. Add governance and security to data and AI services almost anywhere.

IBM Watson® Studio

Build and scale AI with trust and transparency. Build, run and manage AI models with constant monitoring for explainable AI.

IBM Knowledge Catalog

Govern data and AI models with an end-to-end data catalog backed by active metadata and policy management.

Resources
The urgency of AI governance

Read about a three-step approach to AI governance. Discover insights on how to build governance systems capable of monitoring ethical AI.

Prepare models for monitoring

Learn how to set up and enable model monitors. Use a credit risk sample model to select deployment and set the data type for payload logging.

Explore the value of explainable AI

Forrester Consulting examines the projected return on investment for enterprises that deploy explainable AI and model monitoring.

Scale AI with trust and transparency

Lufthansa improves the customer experience and airline efficiency with AI lifecycle automation and drift and bias mitigation.

Footnotes

¹ “Explainable AI” (link resides outside ibm.com), The Royal Society, 28 November 2019.

² “Explainable Artificial Intelligence” (link resides outside ibm.com), Jaime Zornoza, 15 April 2020.

³ “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI” (link resides outside ibm.com), ScienceDirect, June 2020.

⁴ “Understanding Explainable AI” (link resides outside ibm.com), Ron Schmelzer, Forbes contributor, 23 July 2019.

⁵ “Explainable Artificial Intelligence (XAI)” (link resides outside ibm.com), Dr. Matt Turek, the U.S. Defense Advanced Research Projects Agency (DARPA).