AI Trust and Transparency

Artificial intelligence is becoming a crucial component of enterprises’ operations and strategy. To give our clients the confidence they need to responsibly take advantage of AI today, we must find ways to instill transparency, explainability, fairness, and robustness into AI systems.

LF AI Trusted AI Committee

IBM’s commitment to making AI more trustworthy drove us to join the LF AI Foundation, an umbrella foundation under the Linux Foundation that supports and sustains open source innovation in artificial intelligence, machine learning, and deep learning. IBM chairs the Trusted AI Committee established by LF AI to advance ethical and trustworthy AI. Joining forces with LF AI member organizations from around the world, CODAIT is working to create principles for trustworthy AI and identify public use cases. You can learn more about the efforts of the Trusted AI Committee on the public LF AI wiki.

CODAIT Maintains Trusted AI Projects

The Center for Open Source Data and AI Technologies (CODAIT) helps maintain projects created by IBM Research that can increase fairness, explainability, robustness, and transparency in machine learning systems. These include open source packages that detect and mitigate bias in machine learning models, like the AI Fairness 360 Toolkit (AIF360); packages that increase the explainability of machine learning models, like the AI Explainability 360 Toolkit (AIX360); tools that defend models against adversarial attacks, like the Adversarial Robustness 360 Toolbox (ART); open source pre-trained deep learning models via the Model Asset eXchange (MAX); and open source data sets via the Data Asset eXchange (DAX).
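The kind of group-fairness metric a toolkit like AIF360 reports can be illustrated with a small, self-contained sketch. The function below computes the disparate-impact ratio (the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group); values near 1.0 suggest parity, and the common "80% rule" flags ratios below 0.8. This is a conceptual illustration in plain Python, not the AIF360 API, and the sample data and group names are made up:

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes: list of 0/1 labels (1 = favorable outcome, e.g. loan approved)
    groups:   list of group labels, aligned element-wise with outcomes
    """
    def favorable_rate(group):
        # Outcomes for members of this group only.
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return favorable_rate(unprivileged) / favorable_rate(privileged)


# Toy data: group "a" has 2 of 4 favorable outcomes (rate 0.5),
# group "b" has 3 of 4 (rate 0.75), so the ratio is about 0.67 --
# below the 0.8 threshold, which would flag a potential disparity.
outcomes = [1, 0, 1, 0, 1, 1, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact(outcomes, groups, unprivileged="a", privileged="b"))
```

AIF360 goes far beyond this sketch: it bundles many such metrics alongside bias-mitigation algorithms that can be applied before, during, or after model training.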

IBM Research

IBM Research AI is developing diverse approaches to achieving fairness, robustness, explainability, accountability, and value alignment, and to integrating them throughout the entire lifecycle of an AI application. IBM Research’s comprehensive strategy addresses multiple dimensions of trust to enable AI solutions that inspire confidence.