
What is AI transparency?

6 September 2024

Authors

Alexandra Jonker

Editorial Content Lead

Alice Gomstyn

IBM Content Contributor

Amanda McGrath

Writer

IBM

What is AI transparency?

AI transparency helps people access information to better understand how an artificial intelligence (AI) system was created and how it makes decisions.

Researchers sometimes describe artificial intelligence as a “black box,” as it can still be difficult to explain, manage and regulate AI outcomes due to the technology’s increasing complexity. AI transparency helps open this black box to better understand AI outcomes and how models make decisions.

A growing number of high-stakes industries (including finance, healthcare, human resources (HR) and law enforcement) rely on AI models for decision-making. Improving people’s understanding about how these models are trained and how they determine outcomes builds trust in AI decisions and the organizations that use them.

AI creators can achieve transparent and trustworthy AI through disclosure. They can document and share the underlying AI algorithm’s logic and reasoning, the data inputs used to train the model, the methods used for model evaluation and validation and more. This allows stakeholders to assess the model’s predictive accuracy alongside measures of fairness, drift and bias.
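
For illustration only, the sketch below shows one way a team might compute two of the evaluation figures that could accompany such a disclosure, using scikit-learn. The trained model, test set and protected-group mask are assumed inputs, and the demographic parity gap is just one of many possible fairness measures.

    # Sketch: computing evaluation figures that could accompany a model disclosure.
    # Assumes a trained binary classifier `model`, a held-out test set (X_test, y_test)
    # and a hypothetical boolean array `group_mask` identifying a subgroup of interest.
    from sklearn.metrics import accuracy_score

    def disclosure_metrics(model, X_test, y_test, group_mask):
        """Return test accuracy plus a simple demographic parity gap for one subgroup."""
        preds = model.predict(X_test)
        accuracy = accuracy_score(y_test, preds)

        # Demographic parity gap: difference in positive-prediction rates
        # between the subgroup and everyone else.
        rate_group = preds[group_mask].mean()
        rate_rest = preds[~group_mask].mean()
        parity_gap = abs(rate_group - rate_rest)

        return {"test_accuracy": accuracy, "demographic_parity_gap": parity_gap}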

A high level of transparency is essential to responsible AI. Responsible AI is a set of principles that helps guide the design, development, deployment and use of AI. It considers the broader societal impact of AI systems and the measures that are required to align these technologies with stakeholder values, legal standards and ethical considerations.



Why is AI transparency important?

AI applications such as generative AI chatbots, virtual agents and recommendation engines are now used by tens of millions of people around the world each day. For this level of low-stakes decision-making, transparency into how these AI tools work is less of a concern: should the model prove inaccurate or biased, users might lose only some time or disposable income.

However, more sectors are adopting AI applications to inform high-stakes decision-making. For example, AI now helps businesses and users make investment choices, medical diagnoses, hiring decisions, criminal sentencing and more. In these cases, the potential consequences of biased or inaccurate AI outputs are far more dangerous. People can lose lifetime savings, career opportunities or years of their lives.

For stakeholders to trust that AI is making effective and fair decisions on their behalf, they need visibility into how the models operate, the logic of the algorithms and how the model is evaluated for accuracy and fairness. They also need to know more about the data that is used to train and tune the model, including data sources and how data is processed, weighted and labeled.

In addition to building trust, AI transparency fosters knowledge-sharing and collaboration across the entire AI ecosystem, contributing to advancements in AI development. And by being transparent by default, organizations can focus more on using AI technologies to achieve business goals—and worry less about AI reliability.


AI transparency regulations and frameworks

The web of regulatory requirements surrounding the use of AI is constantly evolving. Transparent model processes are critical to compliance with these regulations and to addressing requests from model validators, auditors and regulators. The EU AI Act is considered the world's first comprehensive regulatory framework for AI.

The EU AI Act

The Artificial Intelligence Act of the European Union (EU) takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose. It prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others. There are additional transparency obligations for specific types of AI. For example:

  • AI systems intended to directly interact with individuals should be designed to inform users that they are interacting with an AI system, unless this is obvious to the individual from the context. A chatbot, for example, should be designed to notify users that it is a chatbot.

  • AI systems that generate text, images or certain other content must use machine-readable formats to mark outputs as AI generated or manipulated (a minimal labeling sketch follows this list). This includes, for example, AI that generates deepfakes: images or videos that are altered to show someone doing or saying something they didn’t do or say.
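
As an illustration of the second obligation, the sketch below shows one simple way, not mandated by the Act and far less complete than provenance standards such as C2PA, to attach a machine-readable label marking an output as AI generated. All field names are hypothetical.

    # Illustrative sketch only: attaching a machine-readable "AI generated" label to an output.
    # The EU AI Act requires machine-readable marking but does not prescribe this format;
    # all field names here are hypothetical placeholders.
    import json
    from datetime import datetime, timezone

    def label_ai_output(text: str, model_name: str) -> dict:
        """Bundle generated text with metadata marking it as AI generated."""
        return {
            "content": text,
            "provenance": {
                "ai_generated": True,
                "generator": model_name,
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    # A chatbot can satisfy the separate disclosure obligation with an explicit notice to the user.
    DISCLOSURE_NOTICE = "You are chatting with an AI assistant, not a human."

    print(json.dumps(label_ai_output("Draft reply...", model_name="example-model"), indent=2))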

The implementation of the EU’s General Data Protection Regulation (GDPR) led other countries to adopt personal data privacy regulations. In the same way, experts predict the EU AI Act will spur the development of AI governance and ethics standards worldwide.

Guiding frameworks for AI transparency

Most countries and regions have yet to enact comprehensive legislation or regulations regarding the use of AI; however, there are several extensive frameworks available. While not always enforceable, they exist to guide future regulation and the responsible development and use of AI. Notable examples include:

  • The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: Released on 30 October 2023 (and rescinded on 20 January 2025), the order addressed transparency in several sections. In Section 8 it specifically addressed protecting consumers, patients, passengers and students. It encouraged independent regulatory agencies to consider using their authority to protect American consumers from AI risks, including “emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.”1

  • The Blueprint for an AI Bill of Rights: The Blueprint is a set of five principles and associated practices to help guide the design, use and deployment of AI systems. The fourth principle, “Notice and Explanation,” directly addresses transparency: “Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”2

  • The Hiroshima AI Process Comprehensive Policy Framework: Launched in 2023 following development at the G7 Hiroshima Summit, the Hiroshima AI Process is a set of guiding principles for the worldwide development of advanced AI systems that promote safe, secure and trustworthy AI. The framework calls on organizations to abide by 11 principles, several of which encourage “publishing transparency reports” and “responsibly sharing information.”3

AI explainability vs. AI interpretability vs. AI transparency

AI transparency is closely related to the concepts of AI explainability and AI interpretability. These concepts provide insights that help to address the long-standing “black box” problem: the practical and ethical issue that AI systems can be so complex that humans struggle to interpret how they produce their outputs. However, the three concepts have distinct definitions and use cases:

  • AI explainability: How did the model arrive at that result?

  • AI interpretability: How does the model make decisions?

  • AI transparency: How was the model created, what data trained it and how does it make decisions?

AI explainability: How did the model arrive at that result?

AI explainability, or explainable AI (XAI), is a set of processes and methods that allow human users to comprehend and trust the results and output created by machine learning models. Model explainability looks at how an AI system arrives at a specific result and helps to characterize model transparency.
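
As one hedged illustration of a common post-hoc explainability technique (among many, including SHAP and LIME), the sketch below uses scikit-learn’s permutation importance to estimate which input features most influence a trained model’s results. The dataset and model are chosen only for demonstration.

    # Sketch: estimating which features drive a model's predictions via permutation importance.
    # This is one common post-hoc explainability technique among many.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much accuracy drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]:
        print(f"{name}: {score:.3f}")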

AI interpretability: How does the model make decisions?

AI interpretability refers to making the overall AI process understandable to a human. It supplies meaningful information about the underlying logic, significance and anticipated consequences of the AI system. In practice, interpretability reflects how reliably a human can predict a model’s output, while explainability goes a step further and examines how the model arrived at that result.
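
To illustrate the distinction, an inherently interpretable model such as a shallow decision tree lets a person read its decision logic directly. A minimal scikit-learn sketch follows; the dataset and tree depth are chosen only for demonstration.

    # Sketch: an inherently interpretable model whose decision logic a human can read directly.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()

    # A shallow tree keeps the rules short enough for a person to follow end to end.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
    print(export_text(tree, feature_names=list(iris.feature_names)))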

AI transparency: How was the model created, what data trained it and how does it make decisions?

AI transparency goes beyond just explaining AI decision-making processes. It encompasses factors that are related to the development of AI systems and their deployment, such as the AI training data and who has access to it.

How to provide AI transparency

While providing AI transparency differs by use case, organization and industry, there are some strategies that businesses might keep in mind as they build AI systems. At a high level, these strategies include having clear principles for trust and transparency, putting those principles into practice and embedding them into the entire AI lifecycle.

A more specific strategy for AI transparency is thorough disclosure at every stage of the AI lifecycle. To provide disclosure, organizations need to determine what information to share and how to share it.

Information needed in AI transparency documentation

Model use case, industry, audience and other factors will help determine what information is necessary to disclose. For example, higher-stakes uses of AI (such as mortgage evaluations) will likely require more comprehensive disclosure than lower-stakes applications (such as audio classification for virtual assistants).

Disclosure might include all or some of the following information about the model:

  • Model name
  • Purpose
  • Risk level
  • Model policy
  • Model generation
  • Intended domain
  • Training data
  • Training and testing accuracy
  • Bias
  • Adversarial robustness metrics
  • Fairness metrics
  • Explainability metrics
  • Contact information

Each role in the AI lifecycle can contribute information, distributing accountability across the ecosystem rather than to an individual. There are software platforms and tools available that can help automate information gathering and other AI governance activities.
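
For example, here is a minimal sketch of how the disclosure fields listed above might be captured as a structured, shareable record. All values are hypothetical placeholders, and dedicated governance tooling would capture far more.

    # Sketch: capturing disclosure fields as a structured, shareable record.
    # All values are hypothetical placeholders; governance platforms automate and extend this.
    import json
    from dataclasses import dataclass, asdict, field

    @dataclass
    class ModelDisclosure:
        model_name: str
        purpose: str
        risk_level: str
        intended_domain: str
        training_data: str
        test_accuracy: float
        fairness_metrics: dict = field(default_factory=dict)
        contact: str = ""

    record = ModelDisclosure(
        model_name="example-credit-scorer",
        purpose="Rank mortgage applications for manual review",
        risk_level="high",
        intended_domain="Consumer lending",
        training_data="Internal loan applications, 2018-2023 (hypothetical)",
        test_accuracy=0.87,
        fairness_metrics={"demographic_parity_gap": 0.03},
        contact="ai-governance@example.com",
    )

    print(json.dumps(asdict(record), indent=2))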

How to share AI transparency information

Organizations can present information for AI transparency in various formats, such as printed documents or videos. The format depends on both audience and use case. Is the information intended for a consumer, and therefore needs to be easily digestible? Or is it intended for a data scientist or regulator, and therefore needs a high level of technical detail?

Formats might include:

  • A living document that is modeled after a supplier’s declaration of conformity (SDoC), which is a document used in many industries to show that a product conforms to certain standards or technical regulations

  • Official policy pages on the company website detailing how the organization is putting transparent AI initiatives into action

  • Educational resources such as documents and videos to help users understand how AI is used in products and services, and how it affects the customer experience

  • Public-facing communication of the organization’s ethical AI viewpoint through official public relations activities, events, social media and other channels

  • Research papers, data sets and other data-driven communications that offer insights into the use, development and benefits of AI within the organization’s industry or use cases

AI transparency challenges

Transparent AI practices have many benefits, but they also raise issues of safety and privacy. For example, the more information that is given about the inner workings of an AI project, the easier it might be for hackers to find and exploit vulnerabilities. OpenAI addressed this exact challenge in its GPT-4 Technical Report, stating:

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, data set construction, training method, or similar.”4

The quotation also reveals another AI transparency challenge: the tradeoff between transparency and protecting intellectual property. Other hurdles include explaining intricate programs and machine learning algorithms (such as neural networks) to nonexperts, and the lack of global transparency standards for AI.

Footnotes

1. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, 30 October 2023.

2. “Notice and Explanation,” The White House.

3. “Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI System,” Ministry of Foreign Affairs of Japan, 2023.

4. “GPT-4 Technical Report,” arXiv, 15 March 2023.