Introducing AI Fairness 360
September 19, 2018 | Written by: Kush R. Varshney
We are pleased to announce AI Fairness 360 (AIF360), a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias. We invite you to use it and contribute to it to help engender trust in AI and make the world more equitable for all.

Mitigating bias throughout the AI lifecycle
Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Bias in training data, due to either prejudice in labels or under-/over-sampling, yields models with unwanted bias.
This initial release of the AIF360 Python package contains nine different algorithms, developed by the broader algorithmic fairness research community, to mitigate that unwanted bias. They can all be called in a standard way, very similar to scikit-learn’s fit/predict paradigm. We hope the package not only brings us researchers together, but also translates our collective research results for the data scientists, data engineers, and developers deploying solutions in a variety of industries. AIF360 differs from currently available open source efforts¹ in its focus on bias mitigation (as opposed to metrics alone), its emphasis on industrial usability, and its software engineering.
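To make the fit/predict analogy concrete, here is a minimal sketch of the standard workflow: load one of the bundled datasets, compute a fairness metric, and then apply a pre-processing mitigation algorithm. It follows the pattern in the toolkit’s tutorials, but treat the exact class and dataset names as illustrative and check the documentation of your installed version.

```python
# Minimal sketch of the AIF360 workflow: dataset -> metric -> mitigation.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# In the German credit dataset, "sex" is a protected attribute;
# 1 encodes the privileged group and 0 the unprivileged group.
privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

dataset = GermanDataset()

# Check for bias: a mean difference of 0 would indicate statistical
# parity between the groups' favorable-outcome rates.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged_groups,
                                  privileged_groups=privileged_groups)
print("Mean difference before mitigation:", metric.mean_difference())

# Mitigate by reweighing training examples (Kamiran & Calders, 2012),
# using the scikit-learn-style fit/transform interface.
rw = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged_groups,
                                         privileged_groups=privileged_groups)
print("Mean difference after mitigation:", metric_transf.mean_difference())
```

The same calling convention carries over to the in-processing and post-processing algorithms, so swapping one mitigation strategy for another requires only small changes to the surrounding code.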
AIF360 contains three tutorials (with more to come soon) on credit scoring, predicting medical expenditures, and classifying face images by gender. I would like to highlight the medical expenditure example: we’ve worked in that domain for years with many health insurance clients (without explicit fairness considerations), but it had not previously been considered in algorithmic fairness research. (For background, here are two papers describing our earlier applied data science work in the domain.)
AIF360 is not just a Python package. It is also an interactive experience that provides a gentle introduction to the concepts and capabilities of the toolkit. Because the toolkit offers such a comprehensive set of capabilities, it can be difficult to figure out which metrics and algorithms are most appropriate for a given use case, so we have also created guidance material to help with that choice.
Our team includes members from the IBM India Research Lab and the T. J. Watson Research Center in the United States². We created the toolkit as a summer project this year. We are a diverse lot in terms of national origin, scientific discipline, gender identity, years of experience, palate for bitter gourd, and innumerable other characteristics, but we all believe that the technology we create should uplift all of humanity.
One of the reasons we decided to make AIF360 an open source project, as a companion to the Adversarial Robustness Toolbox, is to encourage researchers from around the world to contribute their metrics and algorithms. It would be really great if AIF360 became the hub of a flourishing community.
The currently implemented metrics and algorithms are described in the following papers, including one of ours; a sketch showing how one of these methods plugs into the toolkit follows the list.
- Flavio P. Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney, “Optimized Pre-Processing for Discrimination Prevention,” Conference on Neural Information Processing Systems, 2017.
- Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian, “Certifying and Removing Disparate Impact,” ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015.
- Moritz Hardt, Eric Price, and Nathan Srebro, “Equality of Opportunity in Supervised Learning,” Conference on Neural Information Processing Systems, 2016.
- Faisal Kamiran and Toon Calders, “Data Preprocessing Techniques for Classification without Discrimination,” Knowledge and Information Systems, 2012.
- Faisal Kamiran, Asim Karim, and Xiangliang Zhang, “Decision Theory for Discrimination-Aware Classification,” IEEE International Conference on Data Mining, 2012.
- Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma, “Fairness-Aware Classifier with Prejudice Remover Regularizer,” Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2012.
- Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger, “On Fairness and Calibration,” Conference on Neural Information Processing Systems, 2017.
- Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar, “A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices,” ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2018.
- Richard Zemel, Yu (Ledell) Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork, “Learning Fair Representations,” International Conference on Machine Learning, 2013.
- Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell, “Mitigating Unwanted Biases with Adversarial Learning,” AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2018.
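As one illustration of how a paper from this list maps onto the toolkit, here is a hedged sketch of the equalized-odds post-processing of Hardt, Price, and Srebro (2016), which adjusts a trained classifier’s predictions so that error rates are balanced across groups. The classifier, split ratio, and dataset here are placeholder choices, not a prescribed pipeline.

```python
# Sketch of post-processing with equalized odds (Hardt et al., 2016).
from sklearn.linear_model import LogisticRegression
from aif360.datasets import GermanDataset
from aif360.algorithms.postprocessing import EqOddsPostprocessing

privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

train, test = GermanDataset().split([0.7], shuffle=True)

# Train any off-the-shelf classifier on the raw features.
clf = LogisticRegression(solver='liblinear')
clf.fit(train.features, train.labels.ravel())

# Copy the test set and overwrite its labels with the model's predictions.
test_pred = test.copy(deepcopy=True)
test_pred.labels = clf.predict(test.features).reshape(-1, 1)

# Fit the post-processor on (true labels, predicted labels), then use it
# to produce adjusted predictions with balanced error rates across groups.
eq_odds = EqOddsPostprocessing(unprivileged_groups=unprivileged_groups,
                               privileged_groups=privileged_groups)
eq_odds.fit(test, test_pred)
test_pred_fair = eq_odds.predict(test_pred)
```

In practice the post-processor would be fit on a held-out validation split rather than the test set, as the tutorials demonstrate; it is fit on the test set above only to keep the sketch short.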
¹Some of the excellent repositories are Aequitas, Audit-AI, FairML, Fairness Comparison, Fairness Measures, FairTest, Themis™, and Themis-ML.
²AIF360 team members are Rachel Bellamy, Kuntal Dey, Mike Hind, Sam Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Saška Mojsilović, Seema Nagar, Karthi Natesan Ramamurthy, John Richards, Dipti Saha, Prasanna Sattigeri, Moninder Singh, Kush Varshney, Dakuo Wang, and Yunfeng Zhang.

Principal Research Staff Member and Manager, IBM Research