An AI system should be transparent, particularly about what went into its algorithm’s recommendations, as relevant to stakeholders with varying objectives.
Any AI system on the market that makes determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.
An AI system is intelligible if its functionality and operations can be explained non-technically to a person not skilled in the art. When a system has significant implications for individuals, its owners and operators should make available — as appropriate, and in a context the relevant end user can understand — documentation that details essential information for consumers, such as confidence measures, levels of procedural regularity, and error analysis.
Good design does not sacrifice transparency in creating a seamless experience. Imperceptible AI is not ethical AI.