
Introducing AI Explainability 360


We are pleased to announce AI Explainability 360, a comprehensive open source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models. We invite you to use it and contribute to it to help advance the theory and practice of responsible and trustworthy AI.

AI Explainability 360 Toolkit

Machine learning models are demonstrating impressive accuracy on various tasks and have gained widespread adoption. However, many of these models are not easily understood by the people who interact with them. This understanding, referred to as “explainability” or “interpretability,” allows users to gain insight into the machine’s decision-making process. Understanding how things work is fundamental to how we navigate the world around us, and it is essential to fostering trust and confidence in AI systems.

Further, AI explainability is increasingly important to business leaders and policymakers. In fact, 68 percent of business leaders believe that customers will demand more explainability from AI in the next three years, according to an IBM Institute for Business Value survey.

No single approach to explaining algorithms

To provide explanations in our daily lives, we rely on a rich and expressive vocabulary: we use examples and counterexamples, create rules and prototypes, and highlight important characteristics that are present and absent.

When interacting with algorithmic decisions, users will expect and demand the same level of expressiveness from AI. A doctor diagnosing a patient may benefit from seeing cases that are very similar or very different. An applicant whose loan was denied will want to understand the main reasons for the rejection and what she can do to reverse the decision. A regulator, on the other hand, will not probe into only one data point and decision; she will want to understand the behavior of the system as a whole to ensure that it complies with regulations. A developer may want to understand where the model is more or less confident as a means of improving its performance.

As a result, when it comes to explaining decisions made by algorithms, there is no single approach that works best. There are many ways to explain. The appropriate choice depends on the persona of the consumer and the requirements of the machine learning pipeline.

AI Explainability 360 Usage Diagram

AI Explainability 360 tackles explainability in a single interface

It is precisely to address this diversity of explanations that we’ve created AI Explainability 360, with algorithms for case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more. Because there are so many different explanation options, we have gathered helpful resources in a single place:

  • the AI Explainability 360 Demo, an interactive experience that provides a gentle introduction through a credit scoring application;
  • several detailed tutorials to educate practitioners on how to inject explainability into other high-stakes applications such as clinical medicine, healthcare management, and human resources;
  • documentation that guides the practitioner on choosing an appropriate explanation method.

The toolkit has been engineered with a common interface for all of the different ways of explaining (not an easy feat) and is extensible to accelerate innovation by the community advancing AI explainability. We are open sourcing it to help create a community of practice for data scientists, policymakers, and the general public who need to understand how algorithmic decision making affects them. AI Explainability 360 differs from other open source explainability offerings [1] through the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework. Moreover, it interoperates with AI Fairness 360 and Adversarial Robustness 360, two other open source toolboxes from IBM Research released in 2018, to support the development of holistic, trustworthy machine learning pipelines.
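To make the common-interface idea concrete, here is a minimal sketch of what a shared explainer contract could look like. The names below (BaseExplainer, fit, explain_instance, and the two toy explainers) are illustrative assumptions for this post, not the toolkit’s actual class hierarchy.

# Illustrative sketch only: the names below are hypothetical and do not
# reproduce the actual AI Explainability 360 class hierarchy.
from abc import ABC, abstractmethod
import numpy as np
from sklearn.linear_model import LogisticRegression


class BaseExplainer(ABC):
    """A common contract that every explainer implements."""

    @abstractmethod
    def fit(self, X, y):
        ...

    @abstractmethod
    def explain_instance(self, x):
        """Return an explanation for a single input."""
        ...


class WeightExplainer(BaseExplainer):
    """A directly interpretable baseline: expose linear model contributions."""

    def fit(self, X, y):
        self.model = LogisticRegression().fit(X, y)
        return self

    def explain_instance(self, x):
        # Per-feature contribution of this instance to the decision score.
        return dict(enumerate(self.model.coef_[0] * x))


class NearestExampleExplainer(BaseExplainer):
    """A case-based explainer: point to the most similar training example."""

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        return self

    def explain_instance(self, x):
        idx = int(np.argmin(np.linalg.norm(self.X - x, axis=1)))
        return {"similar_example": self.X[idx], "its_label": self.y[idx]}


# Both explainers are interchangeable behind the same interface.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1, 0, 1, 0])
for explainer in (WeightExplainer(), NearestExampleExplainer()):
    print(explainer.fit(X, y).explain_instance(np.array([0.9, 0.9])))

Because both explainers honor the same contract, they can be swapped in a pipeline without changing the surrounding code, which is the property that makes a common interface worth the engineering effort.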

The initial release contains eight algorithms recently created by IBM Research, and also includes metrics from the community that serve as quantitative proxies for the quality of explanations. Beyond the initial release, we encourage contributions of other algorithms from the broader research community.
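As an example of what such a proxy can measure, the sketch below computes a simple faithfulness-style score: it correlates a feature-importance vector with the drop in the model’s predicted probability when each feature is ablated toward a baseline. This is a generic illustration of the idea; the function name, the attribution used, and the ablation scheme are assumptions made for this post rather than the toolkit’s own metric implementation.

# Generic faithfulness-style proxy (illustrative, not the toolkit's own
# implementation): do the attributed importances track the actual effect
# of removing each feature?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def faithfulness_score(model, x, importances, baseline):
    """Correlation between importances and prediction drops under ablation."""
    base_prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    drops = []
    for i in range(len(x)):
        x_ablated = x.copy()
        x_ablated[i] = baseline[i]      # replace feature i with a baseline value
        prob = model.predict_proba(x_ablated.reshape(1, -1))[0, 1]
        drops.append(base_prob - prob)  # how much the prediction moved
    return np.corrcoef(importances, drops)[0, 1]


X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]
importances = model.coef_[0] * x        # a crude linear attribution for x
baseline = X.mean(axis=0)               # ablate each feature toward the mean
print(f"faithfulness proxy: {faithfulness_score(model, x, importances, baseline):.3f}")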

AI Explainability 360 Decision Tree

We highlight two of the algorithms in particular. The first, Boolean Classification Rules via Column Generation, is an accurate and scalable method of directly interpretable machine learning that won the inaugural FICO Explainable Machine Learning Challenge. The second, Contrastive Explanations Method, is a local post hoc method that addresses an important but often overlooked consideration in explainable AI: explaining not just why an event happened, but why it happened instead of some other event.
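To give a feel for the contrastive idea, here is a minimal conceptual sketch: starting from a rejected input, it searches for the smallest single-feature change that flips a classifier’s decision, i.e., what would have had to differ for the other outcome to occur. The toolkit’s Contrastive Explanations Method solves this with a principled optimization formulation; the brute-force grid search and the toy loan data below are simplifications assumed purely for illustration.

# Conceptual sketch of a contrastive ("why this and not that") explanation:
# find the smallest single-feature change that flips the model's decision.
# This is a simplification; the toolkit's CEM uses an optimization-based
# formulation rather than a brute-force search.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan data: columns are (income in $10k, years of credit history).
X = np.array([[2, 1], [3, 2], [4, 1], [6, 5], [7, 6], [8, 4]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = denied, 1 = approved
model = LogisticRegression().fit(X, y)

applicant = np.array([3.0, 3.0])
original = model.predict(applicant.reshape(1, -1))[0]
print("original decision:", "approved" if original else "denied")

# Search each feature for the smallest change that yields the other outcome.
best = None
for feature in range(X.shape[1]):
    for value in np.linspace(X[:, feature].min(), X[:, feature].max(), 50):
        candidate = applicant.copy()
        candidate[feature] = value
        if model.predict(candidate.reshape(1, -1))[0] != original:
            change = abs(value - applicant[feature])
            if best is None or change < best[2]:
                best = (feature, value, change)

if best is not None:
    feature, value, change = best
    print(f"Smallest flip: move feature {feature} from {applicant[feature]:.1f} "
          f"to {value:.1f} (a change of {change:.1f}).")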

AI Explainability 360 complements the ground-breaking algorithms developed by IBM Research that went into Watson OpenScale. Released last year, the platform helps clients manage AI transparently throughout the full AI lifecycle, regardless of where the AI applications were built or in which environment they run. OpenScale also detects and addresses bias across the spectrum of AI applications, as those applications are being run.

Our team includes members from IBM Research from around the globe. [2] We are a diverse group in terms of national origin, scientific discipline, gender identity, years of experience, appetite for vindaloo, and innumerable other characteristics, but we share a belief that the technology we create should uplift all of humanity and ensure the benefits of AI are available to all.

The toolkit includes algorithms and metrics from the following papers:

[1] E.g. Alibi, InterpretML, Skater, and tf-explain.

[2] AI Explainability 360 includes work from IBM Research – India, the IBM T. J. Watson Research Center and IBM Research – Cambridge in the United States, and the IBM Argentina SilverGate Team. Team members include Vijay Arya, Rachel Bellamy, Pin-Yu Chen, Amit Dhurandhar, Mike Hind, Sam Hoffman, Stephanie Houde, Vera Liao, Ronny Luss, Aleksandra (Saška) Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Moninder Singh, Karthik Shanmugam, Kush Varshney, Dennis Wei, and Yunfeng Zhang.

IBM Fellow, IBM Research
