Introducing AI Fairness 360

We are pleased to announce AI Fairness 360 (AIF360), a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias. We invite you to use it and contribute to it to help engender trust in AI and make the world more equitable for all.

Mitigating bias throughout the AI lifecycle

Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Bias in training data, due to either prejudice in labels or under-/over-sampling, yields models with unwanted bias.

This initial release of the AIF360 Python package contains nine different algorithms, developed by the broader algorithmic fairness research community, to mitigate that unwanted bias. They can all be called in a standard way, very similar to scikit-learn’s fit/predict paradigm. In this way, we hope that the package is not only a way to bring all of us researchers together, but also a way to translate our collective research results to data scientists, data engineers, and developers deploying solutions in a variety of industries. AIF360 differs from currently available open source efforts1 in its focus on bias mitigation (as opposed to metrics alone), its focus on industrial usability, and its attention to software engineering.
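To illustrate the fit/transform style of interface, here is a sketch of the reweighing idea (Kamiran and Calders) behind one of the pre-processing algorithms: each training instance gets a weight that makes group membership and outcome statistically independent. The class and method names below are illustrative, not the actual AIF360 classes.

```python
from collections import Counter

class Reweighing:
    """Assign each (group, label) cell the weight
    w(g, y) = P(g) * P(y) / P(g, y), so that under the new weights
    group membership and outcome are statistically independent."""

    def fit(self, groups, labels):
        n = len(labels)
        p_group = Counter(groups)
        p_label = Counter(labels)
        p_joint = Counter(zip(groups, labels))
        self.weights_ = {
            (g, y): (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
            for (g, y) in p_joint
        }
        return self

    def transform(self, groups, labels):
        # One weight per training instance, looked up by its cell.
        return [self.weights_[(g, y)] for g, y in zip(groups, labels)]

# Toy data: group 1 (privileged) gets the favorable label far more often.
groups = [1, 1, 1, 1, 0, 0, 0, 0]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
weights = Reweighing().fit(groups, labels).transform(groups, labels)
```

Under these weights, the weighted favorable-outcome rate is 0.5 in both groups, so a learner trained with them no longer sees the original disparity.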

AIF360 contains three tutorials (with more to come soon) on credit scoring, predicting medical expenditures, and classifying face images by gender. I would like to highlight the medical expenditure example; we’ve worked in that domain for many years with many health insurance clients (without explicit fairness considerations), but it has not been considered in algorithmic fairness research before. (For background, here are two papers describing our earlier applied data science work in the domain.)

AI Fairness 360 interactive experience

AIF360 is not just a Python package. It is also an interactive experience that provides a gentle introduction to the concepts and capabilities of the toolkit. Because the toolkit offers such a comprehensive set of capabilities, figuring out which metrics and algorithms are most appropriate for a given use case can be confusing. To help, we have also created guidance material that can be consulted.

Our team includes members from the IBM India Research Lab and the T. J. Watson Research Center in the United States2. We created the toolkit as a summer project this year. We are a diverse lot in terms of national origin, scientific discipline, gender identity, years of experience, palate for bitter gourd, and innumerable other characteristics, but we all believe that the technology we create should uplift all of humanity.

One of the reasons we decided to make AIF360 an open source project as a companion to the adversarial robustness toolbox is to encourage the contribution of researchers from around the world to add their metrics and algorithms. It would be really great if AIF360 becomes the hub of a flourishing community.

The currently implemented metrics and algorithms are described in the following list of papers, including one of ours.

1Some of the excellent repositories are Aequitas, Audit-AI, FairML, Fairness Comparison, Fairness Measures, FairTest, Themis, and Themis-ML.

2AIF360 team members are Rachel Bellamy, Kuntal Dey, Mike Hind, Sam Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Saška Mojsilović, Seema Nagar, Karthi Natesan Ramamurthy, John Richards, Dipti Saha, Prasanna Sattigeri, Moninder Singh, Kush Varshney, Dakuo Wang, and Yunfeng Zhang.

Principal Research Staff Member and Manager, IBM Research
