Introducing AI Fairness 360

We are pleased to announce AI Fairness 360 (AIF360), a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias. We invite you to use it and contribute to it to help engender trust in AI and make the world more equitable for all.

Mitigating bias throughout the AI lifecycle

Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Bias in training data, due to either prejudice in labels or under-/over-sampling, yields models with unwanted bias.
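To make the notion of systematic advantage concrete, here is a minimal sketch (not AIF360 code) of one common group fairness metric, statistical parity difference: the gap in favorable-outcome rates between the unprivileged and privileged groups. The function name and the toy data are illustrative only.

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """Difference in favorable-outcome rates between the unprivileged
    (protected == 1) and privileged (protected == 0) groups.
    Zero indicates parity; a negative value means the unprivileged
    group receives the favorable outcome less often."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 1].mean()
    rate_priv = y_pred[protected == 0].mean()
    return rate_unpriv - rate_priv

# Toy predictions: the privileged group is approved 3/4 of the time,
# the unprivileged group only 1/4 of the time.
y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, protected))  # -0.5
```

A value of -0.5 here quantifies exactly the kind of systematic disadvantage described above.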

This initial release of the AIF360 Python package contains nine different algorithms, developed by the broader algorithmic fairness research community, to mitigate that unwanted bias. They can all be called in a standard way, very similar to scikit-learn’s fit/predict paradigm. In this way, we hope that the package is not only a way to bring all of us researchers together, but also a way to translate our collective research results to data scientists, data engineers, and developers deploying solutions in a variety of industries. AIF360 differs from currently available open source efforts1 in its focus on bias mitigation (as opposed to metrics alone), its focus on industrial usability, and its software engineering.
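As a sketch of how a bias-mitigation algorithm can follow a scikit-learn-style convention, here is the core idea of reweighing (Kamiran and Calders), one well-known pre-processing approach, written with fit/transform methods. This is an illustrative simplification, not the AIF360 implementation or its actual API.

```python
import numpy as np

class Reweighing:
    """Illustrative sketch of the reweighing idea: assign each training
    instance a weight P(group) * P(label) / P(group, label), so that
    group membership and label become independent under the weighted
    empirical distribution."""

    def fit(self, protected, y):
        protected, y = np.asarray(protected), np.asarray(y)
        self.weights_ = {}
        for s in np.unique(protected):
            for label in np.unique(y):
                p_joint = np.mean((protected == s) & (y == label))
                p_indep = np.mean(protected == s) * np.mean(y == label)
                # Up-weight (group, label) pairs that are rarer than
                # independence predicts; down-weight over-represented ones.
                self.weights_[(s, label)] = p_indep / p_joint
        return self

    def transform(self, protected, y):
        """Return one instance weight per (group, label) pair."""
        return np.array([self.weights_[(s, label)]
                         for s, label in zip(protected, y)])

# Toy data: the privileged group (0) has mostly favorable labels,
# the unprivileged group (1) mostly unfavorable ones.
protected = [0, 0, 0, 0, 1, 1, 1, 1]
y         = [1, 1, 1, 0, 1, 0, 0, 0]
weights = Reweighing().fit(protected, y).transform(protected, y)
```

With these weights, the weighted favorable-outcome rate is equal across both groups, which is the effect the fit/transform pre-processing step is meant to achieve before a classifier is trained.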

AIF360 contains three tutorials (with more to come soon) on credit scoring, predicting medical expenditures, and classifying face images by gender. I would like to highlight the medical expenditure example; we’ve worked in that domain for many years with many health insurance clients (without explicit fairness considerations), but it has not been considered in algorithmic fairness research before. (For background, here are two papers describing our earlier applied data science work in the domain.)

AI Fairness 360 interactive experience

AIF360 is not just a Python package. It is also an interactive experience that provides a gentle introduction to the concepts and capabilities of the toolkit. Because the toolkit offers such a comprehensive set of capabilities, it can be hard to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have also created guidance material that can be consulted.

Our team includes members from the IBM India Research Lab and the T. J. Watson Research Center in the United States2. We created the toolkit as a summer project this year. We are a diverse lot in terms of national origin, scientific discipline, gender identity, years of experience, palate for bitter gourd, and innumerable other characteristics, but we all believe that the technology we create should uplift all of humanity.

One of the reasons we decided to make AIF360 an open source project, as a companion to the Adversarial Robustness Toolbox, is to encourage researchers from around the world to contribute their metrics and algorithms. It would be really great if AIF360 becomes the hub of a flourishing community.

The currently implemented metrics and algorithms are described in the following list of papers, including one of ours.

1Some of the excellent repositories are Aequitas, Audit-AI, FairML, Fairness Comparison, Fairness Measures, FairTest, Themis™, and Themis-ML.

2AIF360 team members are Rachel Bellamy, Kuntal Dey, Mike Hind, Sam Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Saška Mojsilović, Seema Nagar, Karthi Natesan Ramamurthy, John Richards, Dipti Saha, Prasanna Sattigeri, Moninder Singh, Kush Varshney, Dakuo Wang, and Yunfeng Zhang.

Principal Research Staff Member and Manager, IBM Research
