Focus areas


Explainability

An AI system should be transparent, particularly about what went into its algorithm’s recommendations, in ways that are relevant to a variety of stakeholders with differing objectives.

Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.

An AI system is intelligible if its functionality and operations can be explained non-technically to a person not skilled in the art. When systems have significant implications for individuals, the owners and operators should make available, as appropriate and in a context that the relevant end user can understand, documentation that details essential information for consumers, such as confidence measures, levels of procedural regularity, and error analysis.
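As a sketch of what such consumer-facing documentation might look like in code, the record below collects the kinds of fields the text names. The schema and all values are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Illustrative record of essential information for end users."""
    purpose: str                # what the system is for, in plain language
    confidence_measure: float   # e.g. average prediction confidence
    error_rate: float           # overall error rate from evaluation
    error_notes: str            # plain-language error analysis
    procedural_checks: list = field(default_factory=list)  # regularity audits performed

# Hypothetical example for a loan pre-screening system.
disclosure = ModelDisclosure(
    purpose="Loan pre-screening recommendations",
    confidence_measure=0.87,
    error_rate=0.06,
    error_notes="Errors are more frequent for applicants with thin credit files.",
    procedural_checks=["same inputs yield same outputs", "audit log enabled"],
)
print(disclosure.purpose)
```

The point of such a record is that every field is something a non-technical consumer can read and weigh, rather than internal model detail.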

Good design does not sacrifice transparency in creating a seamless experience. Imperceptible AI is not ethical AI.


Fairness

Properly calibrated, AI can assist humans in making fairer choices, countering human biases, and promoting inclusivity.

Fairness refers to the equitable treatment of individuals, or groups of individuals, by an AI system.

Bias occurs when an AI system has been designed, intentionally or not, in a way that may make the system's output unfair. Bias can be present both in the algorithm of the AI system and in the data used to train and test it. It can emerge as a result of cultural, social, or institutional expectations; because of technical limitations of its design; or when the system is used in unanticipated contexts or to make decisions about communities that were not considered in the initial design.
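Unfair output of this kind can sometimes be surfaced with simple group-level checks. The sketch below compares positive-outcome rates across groups, a demographic-parity gap; the toy data is an assumption, and real fairness auditing involves far more than one metric.

```python
def selection_rates(outcomes):
    """Per-group rate of positive outcomes.

    outcomes: list of (group, decision) pairs where decision is 0 or 1.
    """
    totals, positives = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(outcomes):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy data: group A receives positive decisions twice as often as group B.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(parity_gap(decisions))
```

A large gap does not prove bias on its own, but it is a cheap signal that a system's output deserves closer scrutiny.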

Inclusivity means working to create a diverse development team and seeking out the perspectives of minority-serving organizations and impacted communities.

As an example, a marketing team using an AI system may want to ensure that data collected about a user’s race, gender, etc. will not be used to market to or exclude certain demographics.
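That safeguard can be approximated by stripping protected attributes from records before they reach a marketing model. The field names below are hypothetical.

```python
# Assumed list of protected attributes; a real deployment would
# define this with legal and policy review.
PROTECTED_FIELDS = {"race", "gender", "age"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

user = {"id": 42, "race": "example", "gender": "example", "purchase_history": ["book"]}
print(strip_protected(user))  # protected fields removed, id and history kept
```

Note that dropping the columns is only a first step: other fields can act as proxies for the removed ones, which is why the bias checks above remain necessary.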


Robustness

AI-powered systems must be actively defended from adversarial attacks, minimizing security risks and enabling confidence in system outcomes.

Robust AI effectively handles exceptional conditions, such as abnormalities in input or malicious attacks, without causing unintentional harm.

It is also built to withstand intentional and unintentional interference by protecting against exposed vulnerabilities: for example, attackers poisoning training data in an effort to compromise system security.
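One common class of mitigation against poisoned training data is to filter statistical outliers before training. The z-score filter below is a simplified sketch of that idea under assumed toy data, not a complete defense.

```python
def zscore_filter(values, threshold=1.5):
    """Drop points more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return list(values)  # all points identical: nothing to filter
    return [v for v in values if abs(v - mean) / std <= threshold]

# A poisoned point (1000.0) injected into otherwise ordinary data.
data = [1.0, 1.2, 0.9, 1.1, 1000.0]
clean = zscore_filter(data, threshold=1.5)
print(clean)  # [1.0, 1.2, 0.9, 1.1]
```

Real poisoning attacks can be far subtler than a single extreme value, so such filtering complements, rather than replaces, provenance checks on where training data came from.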

As systems are increasingly employed to make crucial decisions, it is imperative that AI is secure and robust.


Transparency

Users must be able to see how the service works, evaluate its functionality, and comprehend its strengths and limitations.

Transparency reinforces trust, and the best way to promote transparency is through disclosure. Transparent AI systems share information on what data is collected, how it will be used and stored, and who has access to it. They make their purposes clear to users.

Increased transparency provides information for AI consumers to better understand how the AI model or service was created. This helps a user of the model to determine whether it is appropriate for their situation.

Technology companies must be clear about who trains their AI systems, what data was used in that training, and what went into their algorithm’s recommendations.

To aid industry efforts to improve transparency in AI, IBM launched the AI FactSheets 360 website. The site presents a first-of-its-kind methodology for assembling documentation, or “fact sheets,” about an AI model’s important features, such as its purpose, performance, datasets, characteristics, and more.
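A fact sheet in this spirit can be sketched as a structured document built from the categories the text names. The schema below is an illustrative assumption, not the actual FactSheets 360 format, and all values are hypothetical.

```python
import json

# Hypothetical fact sheet covering purpose, performance,
# datasets, and characteristics.
fact_sheet = {
    "purpose": "Illustrative image classifier for product photos",
    "performance": {"accuracy": 0.92, "f1": 0.90},
    "datasets": ["internal product-photo set (assumed)"],
    "characteristics": {
        "input": "224x224 RGB image",
        "output": "one of 10 product categories",
    },
}

# Serializing to JSON makes the sheet easy to publish alongside the model.
print(json.dumps(fact_sheet, indent=2))
```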


Privacy

AI systems must prioritize and safeguard consumers’ privacy and data rights and provide explicit assurances to users about how their personal data will be used and protected.

Respect for privacy means full disclosure around what data is collected, how it will be used and stored, and who has access to it.

AI systems and their operators should aim to collect and store only the minimum data necessary. The purpose for the data use should be explicit, and operators should prevent data from being repurposed.
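Data minimization can be made concrete by admitting only the fields a declared purpose actually requires, which also makes silent repurposing harder. The purpose-to-fields mapping below is a hypothetical example.

```python
# Hypothetical mapping from a declared purpose to the minimum fields it needs.
PURPOSE_FIELDS = {
    "shipping": {"name", "address"},
    "newsletter": {"email"},
}

def collect(record: dict, purpose: str) -> dict:
    """Keep only the fields required for the declared purpose."""
    allowed = PURPOSE_FIELDS[purpose]  # an undeclared purpose raises KeyError
    return {k: v for k, v in record.items() if k in allowed}

submitted = {
    "name": "A. User",
    "address": "123 Main St",
    "email": "a@example.com",
    "phone": "555-0100",
}
print(collect(submitted, "shipping"))  # phone and email never stored
```

Raising an error on an undeclared purpose, rather than defaulting to "collect everything," is the design choice that enforces explicitness.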

Systems should enable consumers to choose how their personal data is collected, stored, and used, through clear and accessible privacy settings.

Data protection begins with designing and deploying solutions that employ privacy and security practices such as encryption and access control methodologies.
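On the access-control side, a deny-by-default permission check is one such practice. The roles and permissions below are assumptions for illustration.

```python
# Hypothetical role-to-permission mapping for stored user data.
ROLE_PERMISSIONS = {
    "analyst": {"read_aggregates"},
    "support": {"read_profile"},
    "admin": {"read_profile", "read_aggregates", "delete_profile"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read_profile"))  # False
print(is_allowed("admin", "delete_profile"))  # True
```

Denying anything not explicitly granted, including unknown roles, keeps a configuration mistake from silently widening access.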