Q&A with Dr. Stacy Hobson and Anjelica Dortch on the IBM Policy Lab's POV “Mitigating Bias in Artificial Intelligence”
May 25, 2021

The IBM Policy Lab published “Mitigating Bias in Artificial Intelligence” by Dr. Stacy Hobson and Anjelica Dortch with recommendations for policymakers to strengthen testing, assessment, and mitigation strategies to minimize bias in AI systems. We sat down with the authors to hear more about the POV.

Question 1: What is AI bias and how much of a problem is it in our world today?

Dr. Stacy Hobson and Anjelica Dortch: “When we refer to instances of AI bias, we mean situations where AI models or systems produce unfair or skewed results for particular groups and may ultimately cause harm, creating new inequalities or exacerbating existing ones. It’s difficult to discern how pervasive these biases are in the technology we develop and use in our everyday lives, but humans have many kinds of biases that can show up in AI systems. We have recently become aware of many examples of AI bias in critical systems, for example those used for decisions in healthcare, financial services, and criminal justice. The impacts of AI bias in these contexts can be life-altering for the affected communities, ranging from limited economic opportunities to imprisonment and even death.”

Question 2: IBM’s new POV proposes five priorities to strengthen the adoption of testing, assessment, and mitigation strategies to minimize bias in AI systems. How will those priorities make a difference?

SH & AD: “We believe these priorities provide a foundation for policymakers considering new laws, regulatory frameworks, and guidance for mitigating bias in AI systems. Industry, academia, governments, and consumers share a responsibility to ensure AI systems are tested and assessed for bias. Furthermore, fostering a more inclusive and diverse AI ecosystem will advance racial equity in AI and enhance consumer confidence and trust.”

Question 3: How does IBM recommend that governments and businesses implement policies to minimize bias in artificial intelligence?

SH & AD: “There are a few ways we believe governments and businesses can implement these policies to minimize instances of bias in AI. Organizations should prioritize creating, implementing, and operationalizing AI ethics principles, training programs, and a governance board that provides ongoing education, review, and oversight of their AI systems. Lawmakers should introduce and pass legislation that requires assessments and testing for high-risk AI systems, enhances transparency through disclosure, and incorporates a mechanism for consumers to provide insight and feedback.”

Question 4: How does this build on IBM’s existing calls for “precision regulation” of emerging technology?

SH & AD: “In light of how the public dialogue around AI bias has evolved, this POV further expands our call for ‘precision regulation’ while highlighting immediate measures that can be taken to provide industry, governments, and consumers with clear testing, assessment, mitigation, and education requirements to prevent bias in AI.”

About IBM Policy Lab
The IBM Policy Lab is a forum providing policymakers with a vision and actionable recommendations to harness the benefits of innovation while ensuring trust in a world being reshaped by data. As businesses and governments break new ground and deploy technologies that are positively transforming our world, we work collaboratively on public policies to meet the challenges of tomorrow.

Media Contact:
Jordan Humphreys
jordan.humphreys@ibm.com
+1-202-754-0830
