AI Fairness 360

The AI Fairness 360 toolkit (AIF360) is an open source software toolkit that can help detect and mitigate bias in machine learning models. It enables developers to use state-of-the-art algorithms to regularly check their machine learning pipelines for unwanted bias and to mitigate any bias that is discovered.

AIF360 enables AI developers and data scientists to easily check for bias at multiple points along their machine learning pipeline, using the bias metric appropriate to their circumstances. It also provides a range of state-of-the-art bias mitigation techniques that the developer or data scientist can use to reduce any discovered bias. These detection techniques can be run automatically, so an AI development team can check for bias systematically, much like the checks for bugs or security violations in a continuous integration pipeline.
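For example, the following minimal sketch, adapted from the pattern used in the AIF360 tutorials, computes one such metric on a loaded data set. It assumes the German credit data files have already been downloaded to the location the GermanDataset loader expects; the choice of age as the protected attribute follows the toolkit's introductory tutorial.

```python
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the German credit data, treating age as the protected attribute.
dataset_orig = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],       # age >= 25 is "privileged"
    features_to_drop=['personal_status', 'sex'])  # ignore other attributes here

privileged_groups = [{'age': 1}]
unprivileged_groups = [{'age': 0}]

# Measure bias in the training data itself.
metric_orig = BinaryLabelDatasetMetric(
    dataset_orig,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)

# Difference in the rate of favorable outcomes between the two groups;
# a value of 0 indicates parity on this metric.
print(metric_orig.mean_difference())
```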

[AI Fairness 360 pipeline diagram]

The diagram above represents a simple machine learning pipeline. Bias might exist in the initial training data, in the algorithm that creates the classifier, or in the predictions the classifier makes. The AI Fairness 360 toolkit can measure and mitigate bias in all three stages of the machine learning pipeline.
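The toolkit's mitigation algorithms are organized along the same three stages: pre-processing algorithms (such as Reweighing) transform the training data, in-processing algorithms (such as AdversarialDebiasing) constrain the learning algorithm itself, and post-processing algorithms (such as CalibratedEqOddsPostprocessing) adjust the classifier's predictions. Continuing the sketch above, a pre-processing step might look like this:

```python
from aif360.algorithms.preprocessing import Reweighing

# Pre-processing mitigation: reweight training examples so that the
# favorable-outcome rates of the two groups are balanced before training.
RW = Reweighing(unprivileged_groups=unprivileged_groups,
                privileged_groups=privileged_groups)
dataset_transf = RW.fit_transform(dataset_orig)

# Re-check the metric on the transformed data; the difference should now
# be close to 0.
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf,
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups)
print(metric_transf.mean_difference())
```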

Why the AI Fairness 360 toolkit?

To make accurate predictions, machine learning algorithms look for patterns in the training data that correlate with the outcome being predicted. For example, an algorithm might discover that applicants with a high salary and low debt tend to pay off their loans.
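To make the idea concrete, here is a minimal, self-contained sketch (using scikit-learn and entirely made-up numbers) of a model learning that salary/debt pattern. It illustrates the general mechanism and is not part of the AIF360 API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [annual salary, outstanding debt] in thousands.
X = np.array([[95, 5], [120, 10], [110, 15],   # high salary, low debt
              [40, 60], [35, 80], [30, 70]])   # low salary, high debt
y = np.array([1, 1, 1, 0, 0, 0])               # 1 = repaid, 0 = defaulted

# The model learns the correlation between the features and repayment.
model = LogisticRegression().fit(X, y)
print(model.predict([[100, 8]]))  # predicts repayment for a similar profile
```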

It can be problematic, however, to base predictions, directly or indirectly, on attributes protected by social norms, organizational policies, or legal regulations. Take, for instance, US civil rights regulations, which prohibit credit decisions based on race, color, or national origin.

Not surprisingly, this area has attracted a growing number of researchers, who have proposed metrics for detecting bias and algorithms for mitigating it. Although these techniques are accessible to experts, they are not easy for AI developers to consume. To bridge this gap, we created the AI Fairness 360 toolkit, which brings state-of-the-art fairness techniques to developers in a familiar environment.

The AI Fairness 360 toolkit includes a comprehensive set of metrics for testing data sets and models for bias, explanations for these metrics, and algorithms for mitigating bias in data sets and models. The AI Fairness 360 interactive experience provides a gentle introduction to fairness concepts and capabilities. The tutorials and other notebooks offer a deeper, data-scientist-oriented introduction. The complete API is also available.
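For instance, the toolkit's explainer classes can turn a raw metric value into a plain-language explanation. A minimal sketch, reusing the metric_orig object from the earlier example:

```python
from aif360.explainers import MetricTextExplainer

# Wrap a metric object to get textual explanations of its values.
explainer = MetricTextExplainer(metric_orig)
print(explainer.disparate_impact())
```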

Because the toolkit offers such a comprehensive set of capabilities, it can be hard to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created guidance material that you can consult.

Why should I contribute?

As an open source project, the AI Fairness 360 toolkit is designed to foster a vibrant ecosystem of contributors from both industry and academia, and we developed it with extensibility in mind. It is the first comprehensive bias-mitigation toolbox that combines industry-relevant policy specifications and tutorials with bias metrics and mitigation techniques that previously had no publicly available implementation. Bringing together the field's leading bias metrics and mitigation techniques will help accelerate both the scientific advancement of the field and the adoption of these techniques in real-world deployments. We encourage you to contribute your own metrics, explainers, and debiasing algorithms; a rough sketch of how a custom algorithm plugs into the toolkit follows below. Please join the community and get started as a contributor.
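As a sketch of that extensibility, a new pre-processing algorithm can subclass the toolkit's Transformer base class. The class below is a deliberately trivial, hypothetical example (it just resets instance weights), not a real debiasing technique:

```python
import numpy as np
from aif360.algorithms import Transformer

class UniformWeights(Transformer):
    """Toy pre-processing transformer for illustration only: it resets
    every instance weight to 1.0. A real contribution would implement a
    published debiasing technique, with tests and documentation."""

    def fit(self, dataset):
        return self  # nothing to learn in this toy example

    def transform(self, dataset):
        # Operate on a deep copy so the caller's data set is untouched.
        transformed = dataset.copy(deepcopy=True)
        transformed.instance_weights = np.ones_like(dataset.instance_weights)
        return transformed
```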