Published: 27 September 2024
Contributors: Tom Krantz, Alexandra Jonker
The AI Bill of Rights is a framework published by the United States government to help protect Americans’ civil rights in the age of artificial intelligence (AI).
The AI Bill of Rights was introduced in October 2022 by the White House Office of Science and Technology Policy (OSTP) in a document titled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” Also referred to as the Blueprint, the AI Bill of Rights was created following consultation with academics, human rights groups, nonprofits and private sector companies.
The AI Bill of Rights is intended to support the development of policies and practices that protect civil rights and promote democratic values in the deployment and governance of automated systems. To achieve this, the Blueprint sets out five principles to mitigate potential risks, such as algorithmic discrimination. It also addresses issues of access to critical resources and services that can arise when AI is deployed in areas such as healthcare and financial services.
The AI Bill of Rights consists of five core principles to help guide the design, use and deployment of AI systems. Specific considerations are provided for each principle, accounting for situations in which people’s civil rights, such as freedom of speech, voting rights or privacy, can be at risk.
While the Blueprint is non-binding and does not mandate compliance with the core principles, it is intended to inform AI-related policy decisions where existing law or policy does not already provide guidance.
The AI Bill of Rights applies to automated systems that have the potential to meaningfully impact the American people’s rights, opportunities or access to critical resources or services. The types of automated systems potentially in scope of the AI Bill of Rights include, but are not limited to, those that can impact:
Civil rights, liberties and privacy: This includes speech-related systems (for example, automated content moderation tools); surveillance and criminal justice system algorithms (for example, automated license plate readers); voting-related systems (for example, signature matching tools); and systems with a potential privacy impact (for example, ad-targeting systems).
Equal opportunities: This includes education-related systems (for example, plagiarism detecting software); housing-related systems (for example, tenant screening algorithms); and employment-related systems (for example, hiring or termination algorithms).
Access to critical resources and services: This includes health and health insurance technologies (for example, AI-assisted diagnostic tools); financial system algorithms (for example, credit scoring systems); systems that impact the safety of communities (for example, electrical grid controls); and systems related to access to benefits or services or assignment of penalties (for example, fraud detection algorithms).
AI use cases are growing as technologies such as machine learning (ML) and natural language processing (NLP) become more sophisticated. In one study from Ernst & Young, 90% of respondents said they use AI at work.1 However, widespread AI adoption also brings new ethical challenges related to transparency, bias and data privacy. For instance, misidentifications by facial recognition systems have put innocent people at risk of wrongful incarceration,2 and generative AI chatbots can produce “hallucinations”: fabricated information presented as fact.3
To address these challenges, AI developers need guidance and ethical frameworks built around the responsible use of AI. Responsible AI is a set of principles used to guide the design, development, deployment and use of AI. It considers the broader societal impact of AI systems and the measures required to align AI with stakeholder values, legal standards and ethical principles.
The Blueprint seeks to codify responsible AI best practices in a comprehensive framework so that society can harness the full potential of AI tools without compromising people’s basic civil liberties.
The AI Bill of Rights consists of five principles designed with the civil rights of the American public in mind. The five principles are: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.
Safe and effective systems: This principle states that people “should be protected from unsafe or ineffective AI systems.” To align with this principle, the Blueprint suggests developers work alongside diverse communities, stakeholders and domain experts to consider the risks of an AI system. The principle also suggests that systems undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring to improve safety and efficacy. According to the Blueprint, the results of independent evaluations and reporting that confirm the system is safe and effective should be made public whenever possible.
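The Blueprint does not prescribe tooling for such testing, but a pre-deployment gate can be sketched in a few lines. In the Python sketch below, the metric names and thresholds are illustrative assumptions, not values the Blueprint specifies:

```python
# A minimal pre-deployment gate, assuming hypothetical metrics computed
# on a held-out test set. Thresholds are illustrative and use-case specific.
MIN_ACCURACY = 0.95
MAX_FALSE_POSITIVE_RATE = 0.02

def ready_to_deploy(metrics: dict) -> bool:
    """Return True only if every safety and efficacy threshold is met."""
    return (
        metrics["accuracy"] >= MIN_ACCURACY
        and metrics["false_positive_rate"] <= MAX_FALSE_POSITIVE_RATE
    )

# Example: a system that is accurate but produces too many false positives
# is blocked until the issue is mitigated and the system is re-evaluated.
metrics = {"accuracy": 0.97, "false_positive_rate": 0.04}
if not ready_to_deploy(metrics):
    print("Deployment blocked pending mitigation:", metrics)
```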
Algorithmic discrimination protections: This principle states that people “should not face discrimination by algorithms, and systems should be used and designed in an equitable way.” According to the AI Bill of Rights, algorithmic discrimination occurs when automated systems adversely impact people based on race, sexual orientation, disability status or other characteristics protected by law. To remedy this, the principle suggests that creators of automated systems use measures such as equity assessments, representative data and disparity testing to protect high-risk individuals and communities. The principle also encourages independent and third-party audits.
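Of these measures, disparity testing has a common quantitative form. The Python sketch below is a minimal, hypothetical illustration that compares each group’s selection rate to that of the highest-rate group; the 0.8 cutoff follows the informal “four-fifths rule” from US employment guidance, and the data and field names are assumptions:

```python
# Hypothetical disparity test over binary outcomes, assuming each record
# carries a demographic group label and a selected/not-selected result.
from collections import defaultdict

def disparate_impact_ratios(records, group_key="group", outcome_key="selected"):
    """Return each group's selection rate relative to the highest-rate group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        selected[group] += int(record[outcome_key])

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: flag any group whose relative selection rate falls below 0.8.
applicants = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "B", "selected": 1}, {"group": "B", "selected": 0},
]
for group, ratio in disparate_impact_ratios(applicants).items():
    if ratio < 0.8:
        print(f"Potential adverse impact on group {group}: ratio {ratio:.2f}")
```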
Data privacy: This principle states that people “should be protected from abusive data practices via built-in protections,” and “should have agency over how data about [them] is used.” To align with this principle, the Blueprint suggests that AI developers protect users and their privacy through design choices that help ensure the collection of personally identifiable information (PII) is strictly necessary. The principle also suggests that creators keep permission and consent requests brief and understandable and respect decisions around data use, access, transfer and deletion.
Enhanced protections and restrictions are required for sensitive information, such as data related to work, health and criminal justice. The Blueprint also states that surveillance technology should be subject to heightened oversight to protect citizens’ privacy and civil liberties.
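One way to make “strictly necessary” collection concrete is to enforce a per-purpose allowlist at intake, so extraneous PII is never stored. The following Python sketch is a hypothetical illustration; the purpose names and fields are assumptions, not anything the Blueprint defines:

```python
# Hypothetical collection-time data minimization: only fields explicitly
# allowlisted for the stated purpose are ever retained.
PURPOSE_ALLOWLIST = {
    "loan_application": {"name", "income", "requested_amount"},
}

def minimize(submission: dict, purpose: str) -> dict:
    """Drop every submitted field that is not necessary for the purpose."""
    allowed = PURPOSE_ALLOWLIST[purpose]
    return {field: value for field, value in submission.items() if field in allowed}

raw = {
    "name": "A. Person",
    "income": 52000,
    "requested_amount": 10000,
    "browsing_history": "...",  # extraneous PII is discarded before storage
}
print(minimize(raw, "loan_application"))
```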
Notice and explanation: This principle states that people “should know that an automated system is being used and understand how and why it contributes to outcomes that impact [them].” The AI Bill of Rights states that designers, developers and deployers of automated systems should use plain, accessible language to, among other things, explain the system’s function and the role automation plays. Furthermore, the principle suggests that automated systems provide notice when in use, along with clear explanations of how and why they contribute to outcomes that affect individuals.
Human alternatives, consideration and fallback: This principle states that people “should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems [they] encounter.” According to the Blueprint, what counts as “appropriate” should be based on “reasonable expectations” in the specific context, with a focus on broad accessibility and protection against potential harm. The principle suggests that organizations make human consideration and remedies available through an accessible, timely fallback and escalation process, especially when an automated system fails, produces an error or when someone wants to appeal its impact.
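In practice, such a fallback often takes the form of a routing rule in front of the automated decision. The Python sketch below assumes a hypothetical model confidence score and threshold; both are illustrative, not part of the Blueprint:

```python
# A minimal fallback-and-escalation path: low-confidence or appealed
# decisions route to a human reviewer instead of the automated system.
CONFIDENCE_FLOOR = 0.90  # assumed threshold below which a person decides

def route_decision(score: float, appealed: bool = False) -> str:
    """Return who decides: the automated system or a human reviewer."""
    if appealed or score < CONFIDENCE_FLOOR:
        return "human_review"  # timely escalation to a person
    return "automated"

print(route_decision(0.97))                 # automated
print(route_decision(0.62))                 # human_review (low confidence)
print(route_decision(0.97, appealed=True))  # human_review (appeal)
```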
A technical companion, “From Principles to Practice,” was published alongside the Blueprint. It provides examples and steps that governments, industries and communities can take to embed the five principles into policy, practice or the technical design of automated systems.
Together, the AI Bill of Rights and its technical companion explain why each principle is important, what should be expected of automated systems and how each principle can move into practice. The examples provided are not critiques or endorsements; rather, they aim to inspire organizations to incorporate safeguards into their own AI operations and decision-making.
Since its publication, the AI Bill of Rights appears to have inspired several federal agencies to adopt guidelines for their own responsible use of AI. As of this writing, 12 US government agencies, including the Department of Commerce (DOC) and the National Institute of Standards and Technology (NIST), have AI requirements that span law, policy and national security.
On 30 October 2023, the Biden administration issued an executive order to establish new standards for safe, secure and trustworthy AI. In a press release published six months later, the DOC announced several plans to implement the executive order.4
At the state and local level, policymakers appear to be aligning new legislation with the Blueprint in some respects. In 2021, New York City adopted a law requiring employers to notify candidates when AI technologies are used in the hiring process. Several states now have requirements around the use of facial recognition technology in law enforcement. More recently, the California Civil Rights Council proposed amendments to the Fair Employment and Housing Act (FEHA) that further align FEHA with the AI Bill of Rights.
Outside of the United States, 34 countries have established national AI strategies at the time of writing.5 Notably, the Artificial Intelligence Act of the European Union (EU AI Act) governs the development and use of AI in the EU. The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risks they pose.
Some of the protections suggested in the AI Bill of Rights are already required by the US Constitution or exist under current US laws. For instance, government surveillance mentioned in the “Data Privacy” principle is already subject to legal requirements and judicial oversight, while civil rights laws exist to protect the American people against discrimination.
The Blueprint also aligns with other existing and emerging AI standards and frameworks.
New policies and practices might be adopted to help ensure the protections found in the AI Bill of Rights are implemented. The Blueprint acknowledges that, in some cases, exceptions to the principles might be necessary to comply with existing legislation, conform to the practicalities of a specific use case or balance competing public interests. For instance, law enforcement and other government agencies are encouraged to follow the guidelines laid out in the AI Bill of Rights, but they might need to use alternative methods to protect people’s rights and privacy.
Looking ahead, the AI Bill of Rights might play a key role in influencing the next wave of policies as the world’s nations adopt a more holistic approach to responsible AI.
All links reside outside ibm.com.
1 How organizations can stop skyrocketing AI use from fueling anxiety, Ernst & Young, December 2023.
2 Artificial Intelligence is Putting Innocent People at Risk of Being Incarcerated, Innocence Project, Sanford, 14 February 2024.
3 Artificial Hallucinations in ChatGPT: Implications in Scientific Writing, National Library of Medicine, Muacevic, Adler, 19 February 2023.
4 Department of Commerce Announces New Actions to Implement President Biden’s Executive Order on AI, US Department of Commerce, 29 April 2024.
5 A cluster analysis of national AI Strategies, Brookings, Denford, Dawson, Desouza, 13 December 2023.