Accelerating the Path Towards AI Transparency

21 August 2020

1 min read

When you’re grocery shopping, you may check an item’s nutrition label to discover its calorie or sugar content. This crucial information helps people make informed decisions about their eating habits and ultimately their health. A similar kind of transparency is what we should expect in AI systems, especially when they are used in the context of high-stakes decisions, such as in healthcare, public or financial services, and justice.

That’s why IBM is proud to participate in the European Commission’s High-Level Expert Group on AI, which recently published the final version of its Assessment List for Trustworthy AI (ALTAI). The ALTAI provides a comprehensive self-assessment checklist of principles-based questions for organizations to consider when developing and deploying AI systems. We believe it will be a valuable tool to help guide organizations through the process of designing and building AI responsibly.

We also believe it is important for industry to contribute supplemental efforts aimed at improving transparency in AI. To that end, earlier this month, we launched the AI FactSheets 360 website, which presents a first-of-its-kind methodology for assembling documentation, or “fact sheets,” about an AI model’s important features, such as its purpose, performance, datasets, characteristics, and more. Within each step of the methodology, we describe the issues to consider and the questions to explore with the relevant people involved in creating and consuming the facts that go into an AI FactSheet. The website also shares an approach to AI Lifecycle Governance and a collection of example FactSheets, research papers, and other resources for anyone to use.

The concept of an AI FactSheet is deliberately flexible. Different users will need different types of information, and different AI applications and use cases will entail different information needs. An AI FactSheet is also not meant to explain every technical detail or disclose proprietary information about an algorithm. Rather, the goal is to support informed human decision-making in the use, development, and deployment of AI systems, while also accelerating developers’ education on AI ethics and their broader adoption of the practices of transparency and documentation.
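To make this concrete, the short sketch below shows, in Python, one way the facts gathered through such a methodology might be assembled into a structured record. The field names and the loan-approval example are hypothetical illustrations of the general idea, not the actual FactSheets 360 format:

from dataclasses import dataclass, field

@dataclass
class FactSheet:
    """Hypothetical, minimal record of facts about an AI model.

    Real fact sheets would be tailored to the audience and use case;
    every field here is illustrative only.
    """
    purpose: str                # what the model is intended to do
    intended_users: list        # who is expected to rely on its output
    training_data: str          # description of the dataset(s) used
    performance: dict = field(default_factory=dict)   # evaluation metrics
    known_limitations: list = field(default_factory=list)

# Example: a fact sheet for an imaginary loan-approval model.
sheet = FactSheet(
    purpose="Estimate default risk to support loan officers' decisions",
    intended_users=["loan officers", "compliance reviewers"],
    training_data="Anonymized loan applications, 2015-2019 (hypothetical)",
    performance={"accuracy": 0.91, "false_positive_rate": 0.06},
    known_limitations=["Not validated for small-business lending"],
)
print(sheet.purpose)

A consumer-facing fact sheet might surface only the purpose and known limitations, while one aimed at regulators might add the full evaluation metrics; the point is that the same underlying facts can be rendered differently for different audiences.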

For the past several years, government calls for policy provisions to ensure principled, trustworthy AI have been building, as evidenced by statements like the Organisation for Economic Co-operation and Development (OECD) Principles on AI. But while principles are crucial for setting a direction and establishing guardrails in a complex and evolving area, they are not enough on their own. Recognizing this, many stakeholders are now moving towards putting principles into practice. If 2019 was the year of AI principles, 2020 should be the year we translate those principles into concrete, actionable initiatives that help companies develop and deliver trustworthy AI.

For example, the OECD’s ONE.AI work on practical implementation guidance, ongoing AI ethics standards development work at organizations like the IEEE, and the newly launched Global Partnership on AI are all helping to advance the dialogue on AI transparency and make it more concrete.

Efforts like these – supplemented by industry initiatives like the AI FactSheet – will all make valuable contributions to promoting trustworthy AI through greater transparency. However, governments can and should do more to help drive consensus around AI transparency policy. Governments should work with industry and other partners to consider:

  • Strengthening mechanisms for global coordination on AI transparency-enabling best practices;
  • Using multistakeholder environments to drive consensus around clear and consistent standards and best practices for AI transparency through documentation;
  • Recognizing the various information needs of different AI users, from developers to consumers, in any regulatory framework and policy; and
  • Explaining how transparency tools can help regulators better meet their goals of protecting consumers and citizens.

Flexible mechanisms like the ALTAI and AI FactSheets can help foster transparent and accountable AI. By placing ethical considerations – such as human agency, technical robustness and safety, fairness, transparency, and other requirements – at the core of organizational conversations around best practices in developing and deploying AI, initiatives like these can help promote greater consensus and consistency in how companies and policymakers think about and act on these issues.

This is the start, not the end, of these conversations.

Francesca Rossi, IBM AI Ethics Global Leader and IBM Fellow

Aleksandra Mojsilović, IBM Research Head of AI Foundations, Co-Director of IBM Science for Social Good, and IBM Fellow

About IBM Policy Lab
The IBM Policy Lab is a new forum providing policymakers with a vision and actionable recommendations to harness the benefits of innovation while ensuring trust in a world being reshaped by data. As businesses and governments break new ground and deploy technologies that are positively transforming our world, we work collaboratively on public policies to meet the challenges of tomorrow.