What an AI Bill of Rights Should Look Like
The IBM Policy Lab welcomes the White House Office of Science and Technology Policy (OSTP)’s initiative to develop an AI Bill of Rights. As a leader in responsible AI innovation, IBM recognizes the potential this powerful technology has to positively impact society, and the Policy Lab in particular has actively contributed its thinking and concrete recommendations to the global policy dialogue on promoting trust in AI.
The IBM Policy Lab launched in January 2020 with an initial policy perspective titled Precision Regulation for Artificial Intelligence, inspired in part by the company’s longstanding Principles for Trust and Transparency.
Since that time, the Policy Lab has continued to build out policy ideas and actions that could be relevant to developing an AI Bill of Rights for the benefit of the American people, which could provide a template for similar initiatives by governments worldwide. Specific principles recommended to OSTP by the IBM Policy Lab team include:
- AI must be transparent. An individual has a right to know when they are interacting with an AI system, and to be confident that they are being treated fairly. Trust and transparency are key.
- AI must be explainable. The owner of an AI system should be able to explain to an individual how an AI system reached a conclusion and what data informed the decision-making process.
- AI must not be biased. To advance diversity, equity and inclusion in our society, AI systems must be continually tested and improved to mitigate harmful and inappropriate bias and to enhance confidence and trust in AI.
- Consumers must have a voice. Consumers deserve clear lines of communication when they have a concern about an AI system. Individuals must be able to voice their feedback, and system owners must act responsibly to address legitimate concerns.
- High-risk AI needs oversight. Overall, we believe in a precision regulation approach to AI, where the greatest regulatory control is placed on use cases with the greatest risk of societal harm. In a June 2020 letter to Congress, IBM Chairman and CEO Arvind Krishna shared the company’s decision to sunset general purpose facial recognition and analysis software products. His letter made clear that the company opposes and will not condone uses of technology that violate human rights, and we believe strongly that this should be a core tenet of any AI Bill of Rights.
- American-built AI should not be misused abroad. IBM has urged the Department of Commerce to more carefully restrict the export of facial recognition technology to governments that have a history of human rights abuses.
- AI must not exacerbate inequalities. In addition to mitigating bias, it is essential to have diverse stakeholders at the table as AI systems are being developed and deployed. Increasing the diversity of our technology workforce will help to ensure products don’t have harmful social impacts.
- We are responsible for holding ourselves accountable. IBM recognizes that our work and actions have an impact on the entire tech industry, which is why we have an internal AI ethics board that encourages responsible approaches. We also value our partnerships with governments, academic institutions and other organizations to promote the advancement of ethical AI in our society.
Ryan Hagemann, co-director of the IBM Policy Lab, commented: “We commend OSTP for pursuing an AI Bill of Rights and welcoming inputs from experts from across academia and private industry. Establishing a set of shared beliefs to promote trust in AI through transparency, explainability, and strong consumer protections is key to unlocking its full potential, and we look forward to sharing recommendations with OSTP based on the expansive body of work that our team has assembled over the past three years.”