
Published: 27 September 2024
Contributors: Tom Krantz, Alexandra Jonker

What is the AI Bill of Rights?

The AI Bill of Rights is a framework published by the United States government to help protect Americans’ civil rights in the age of artificial intelligence (AI).

The AI Bill of Rights was introduced in October 2022 by the White House Office of Science and Technology Policy (OSTP) in a document titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” Also referred to as the Blueprint, the AI Bill of Rights was created following consultation with various academics, human rights groups, nonprofits and companies in the private sector.

The AI Bill of Rights is intended to support the development of policies and practices that protect civil rights and promote democratic values in the deployment and governance of automated systems. To achieve this, the Blueprint sets out five principles to mitigate potential risks, such as algorithmic discrimination. It also addresses risks that AI deployment can pose to access to critical resources or services in areas such as healthcare and financial services.

The AI Bill of Rights consists of five core principles to help guide the design, use and deployment of AI systems. Specific considerations are provided across each principle, accounting for various situations in which people’s civil rights—such as their freedom of speech, voting rights or privacy—can be at risk.

While the Blueprint is non-binding and does not mandate compliance with its core principles, it is intended to inform AI-related policy decisions where existing law or policy does not already provide guidance.

Which AI systems does the AI Bill of Rights apply to?

The AI Bill of Rights applies to automated systems that have the potential to meaningfully impact the American public’s rights, opportunities or access to critical resources or services. The types of automated systems potentially in scope include, but are not limited to, those that can impact:

Civil rights, liberties and privacy: This includes speech-related systems (for example, automated content moderation tools); surveillance and criminal justice system algorithms (for example, automated license plate readers); voting-related systems (for example, signature matching tools); and systems with a potential privacy impact (for example, ad-targeting systems).

Equal opportunities: This includes education-related systems (for example, plagiarism detecting software); housing-related systems (for example, tenant screening algorithms); and employment-related systems (for example, hiring or termination algorithms).

Access to critical resources and services: This includes health and health insurance technologies (for example, AI-assisted diagnostic tools); financial system algorithms (for example, credit scoring systems); systems that impact the safety of communities (for example, electrical grid controls); and systems related to access to benefits or services or assignment of penalties (for example, fraud detection algorithms). 

Why is the AI Bill of Rights important?

AI use cases are growing as technologies such as machine learning (ML) and natural language processing (NLP) become more sophisticated. In one study from Ernst & Young, 90% of respondents said they use AI at work.1 However, widespread AI adoption also brings new ethical challenges related to transparency, bias and data privacy. For instance:

  • Facial recognition software (FRS) used in law enforcement can be prone to bias. Researchers have already confirmed several cases of misidentification due to artificial intelligence, most of which involved Black people who were wrongfully accused.2

  • AI hallucinations occur when an automated system perceives a pattern that does not exist and produces a nonsensical or inaccurate response. In one instance, researchers asked ChatGPT to provide information on the pathogenesis of two diseases. The chatbot produced a thorough response complete with citations and PubMed IDs. After fact-checking, the researchers found that the cited papers were fabricated and the IDs belonged to other papers.3

  • In an IBM study of C-suite executives, only 24% of current generative AI projects included a component to secure the initiatives. In fact, nearly 70% of respondents said that innovation takes precedence over security.

To address these challenges, AI developers need guides and ethical frameworks built around the responsible use of AI. Responsible AI is a set of principles used to guide the design, development, deployment and use of AI. It considers the broader societal impact of AI systems, and the measures required to align AI with stakeholder values, legal standards and ethical principles. 

The Blueprint seeks to consolidate responsible AI best practices into a comprehensive framework so that society can harness the full potential of AI tools without compromising people’s basic civil liberties.

The five principles of the AI Bill of Rights

The AI Bill of Rights consists of five principles designed with the civil rights of the American public in mind. The five principles include:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration and fallback

Safe and effective systems


This principle states that people “should be protected from unsafe or ineffective AI systems.” To align with this principle, the Blueprint suggests developers work alongside diverse communities, stakeholders and domain experts to consider the risks of an AI system. The principle also suggests that systems undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to improve safety and efficacy. According to the Blueprint, the results of any independent evaluations and reporting that confirms that the system is safe and effective should be made public whenever possible.
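The ongoing monitoring this principle describes can be reduced to a simple illustration. The sketch below flags a deployed model whose live performance drifts below its pre-deployment baseline; the metric, figures and tolerance are illustrative assumptions, not drawn from the Blueprint:

```python
def check_drift(baseline_accuracy, live_accuracy, tolerance=0.05):
    """Return True if live accuracy has dropped more than `tolerance`
    below the accuracy measured during pre-deployment testing."""
    return (baseline_accuracy - live_accuracy) > tolerance

# Hypothetical figures: 92% accuracy at deployment, 84% in production
if check_drift(0.92, 0.84):
    print("Alert: performance degraded; trigger review and mitigation.")
```

In practice, a monitoring pipeline would track several metrics (accuracy, error rates per subgroup, input distribution shift) and route alerts into the risk identification and mitigation process the principle calls for.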

Algorithmic discrimination protections


This principle states that people “should not face discrimination by algorithms, and systems should be used and designed in an equitable way.” According to the AI Bill of Rights, algorithmic discrimination occurs when automated systems adversely impact people based on characteristics such as race, sexual orientation, disability status and other characteristics protected by law. To remedy this, the principle suggests creators of automated systems use measures, such as equity assessments, representative data and disparity testing to protect high-risk individuals and communities. The principle also encourages independent and third-party audits.
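Disparity testing of the kind this principle suggests can be illustrated with a minimal sketch. The groups, outcome data and the four-fifths threshold below are illustrative assumptions (the 0.8 cutoff is an informal rule of thumb from US employment guidance, not a requirement of the Blueprint):

```python
def selection_rate(outcomes):
    """Fraction of applicants in a group who received a favorable outcome."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. A ratio below 0.8 (the informal "four-fifths rule") is a
    common flag for possible adverse impact."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical hiring outcomes: 1 = offer extended, 0 = rejected
group_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # reference group: 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # protected group: 30% selected

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    print("Flag: possible adverse impact; investigate further.")
```

A real equity assessment would go further, testing across multiple protected characteristics and using representative data, as the principle recommends.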

Data privacy


This principle states that people “should be protected from abusive data practices via built-in protections,” and “should have agency over how data about [them] is used.” To align with this principle, the Blueprint suggests that AI developers protect users and their privacy through design choices that help ensure the collection of personally identifiable information (PII) is strictly necessary. The principle also suggests that creators keep permission and consent requests brief and understandable and respect decisions around data use, access, transfer and deletion.
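One design choice that supports collecting only strictly necessary PII is redacting identifiers before data is stored. The sketch below is a minimal, assumed illustration: the two patterns cover only emails and US Social Security numbers, and a production system would need far broader coverage and expert review:

```python
import re

# Illustrative patterns for two common PII types (an assumption for
# this sketch; real systems need many more categories).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace matched PII with labeled placeholders before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL REDACTED], SSN [SSN REDACTED].
```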

Enhanced protections and restrictions are required for sensitive information, such as data related to work, health and criminal justice. The Blueprint also states that surveillance technology should be subject to heightened oversight to protect citizens’ privacy and civil liberties.

Notice and explanation


This principle states that people “should know that an automated system is being used and understand how and why it contributes to outcomes that impact [them].” The AI Bill of Rights states that designers, developers and deployers of automated systems should use plain, accessible language to, among other things, explain the system’s function and the role automation plays. Furthermore, the principle suggests that automated systems provide notice when in use, along with clear explanations of how and why they contribute to outcomes impacting individuals.

Human alternatives, consideration and fallback


This principle states that people “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems [they] encounter.” According to the Blueprint, determination of what is “appropriate” should be based on “reasonable expectations” in the specific context and should focus on ensuring broad accessibility and protection against potential harm. The principle suggests making human consideration and remedy accessible and timely through a fallback and escalation process, especially when an automated system fails, produces an error or someone wants to appeal its impact.

"From Principles to Practice"

A technical companion, “From Principles to Practice,” was published alongside the Blueprint. It provides examples and steps that governments, industries and communities can take to embed the five principles into policy, practice or the technical design of automated systems.

Together, the AI Bill of Rights and its technical companion explain why each principle is important, what should be expected of automated systems and how each principle can be put into practice. The examples provided are neither critiques nor endorsements; rather, they aim to inspire organizations to incorporate safeguards into their own AI operations and decision-making.

What impact has the AI Bill of Rights had?

Since its publication, the AI Bill of Rights appears to have helped inspire several federal agencies to adopt guidelines for their own responsible use of AI. As of this writing, 12 US government agencies—including the Department of Commerce (DOC) and the National Institute of Standards and Technology (NIST)—have AI requirements that span law, policy and national security.

On 30 October 2023, the Biden administration issued an executive order to establish new standards for safe, secure and trustworthy AI. In a press release published six months later, the DOC announced several plans to implement the executive order.4

At the state level, policymakers appear to be aligning new legislation with the Blueprint in some respects. In 2021, New York adopted a law requiring employers to provide notice when AI technologies are used in the hiring process. Several states now have requirements around the use of facial recognition technology in law enforcement. More recently, the California Civil Rights Council proposed amendments to the Fair Employment and Housing Act (FEHA) that further align FEHA with the AI Bill of Rights.

Outside of the United States, 34 countries had established national AI strategies at the time of writing.5 Notably, the European Union’s Artificial Intelligence Act (EU AI Act) governs the development and use of AI in the EU. The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risks they pose.

How does the AI Bill of Rights interact with existing policies?

Some of the protections suggested in the AI Bill of Rights are already required by the US Constitution or exist under current US laws. For instance, government surveillance mentioned in the “Data Privacy” principle is already subject to legal requirements and judicial oversight, while civil rights laws exist to protect the American people against discrimination.

Examples of other AI standards that the Blueprint aligns with include:

  • The Organization for Economic Co-operation and Development’s (OECD’s) 2019 Recommendation on Artificial Intelligence. This includes principles for responsible stewardship of trustworthy AI, which the US adopted.

  • Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. This sets out principles that govern the federal government’s use of AI.

  • Executive Order 13985 on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.

New policies and practices might be adopted to help ensure the protections found in the AI Bill of Rights are implemented. The Blueprint acknowledges that in some cases, exceptions to the principles might be necessary to help ensure compliance with existing legislation, conform to the practicalities of a specific use case or balance competing public interests. For instance, law enforcement and other government agencies are encouraged to follow the guidelines laid out in the AI Bill of Rights. However, to protect people’s rights and privacy, they might need to use alternative methods.

Looking ahead, the AI Bill of Rights might play a key role in influencing the next wave of policies as the world’s nations adopt a more holistic approach to responsible AI.

Resources

What is artificial intelligence?

Artificial intelligence (AI) is technology that enables computers and machines to simulate human activities.

What is responsible AI?

Responsible artificial intelligence (AI) is a set of principles that help guide the design, development, deployment and use of AI.

What is AI ethics?

AI ethics studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes.

What is AI governance?

Artificial intelligence (AI) governance refers to the guardrails that help ensure AI tools and systems are and remain safe and ethical.

What is data privacy?

The principle that a person should have control over their personal data, including how it's collected, stored and used.

What is the EU AI Act?

A law that governs the development or use of artificial intelligence (AI) in the European Union (EU).

Footnotes

All links reside outside ibm.com.

1. “How organizations can stop skyrocketing AI use from fueling anxiety,” Ernst & Young, December 2023.

2. “Artificial Intelligence is Putting Innocent People at Risk of Being Incarcerated,” Innocence Project, Sanford, 14 February 2024.

3. “Artificial Hallucinations in ChatGPT: Implications in Scientific Writing,” National Library of Medicine, Muacevic, Adler, 19 February 2023.

4. “Department of Commerce Announces New Actions to Implement President Biden’s Executive Order on AI,” US Department of Commerce, 29 April 2024.

5. “A cluster analysis of national AI Strategies,” Brookings, Denford, Dawson, Desouza, 13 December 2023.