AI Fairness

Exploring AI Fairness for People with Disabilities

This year’s International Day of Persons with Disabilities emphasizes participation and leadership. In today’s fast-paced world, more and more decisions affecting participation and opportunities for leadership are automated. This includes selecting candidates for a job interview, approving loan applicants, or admitting students into college. There is a trend towards using artificial intelligence (AI) methods such as machine learning models to inform or even make these decisions. This raises important questions around how such systems can be designed to treat all people fairly, especially people who already face barriers to participation in society.

Members of an IBM Business Resource Group

A Diverse Abilities Lens on AI Fairness

Machine learning finds patterns in data and compares new inputs against these learned patterns. The potential of these models to encode bias is well known. In response, researchers are beginning to explore what this means in the context of disability and neurodiversity. Mathematical methods for identifying and addressing bias are effective when a disadvantaged group can be clearly identified. However, in some contexts it is illegal to gather data relating to disabilities. Adding to the challenge, individuals may choose not to disclose a disability or other difference, yet their data may still reflect their status. This can lead to biased treatment that is difficult to detect. We need new methods for handling such hidden biases.
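To see why a clearly identified group matters, consider a standard fairness metric: the disparate impact ratio compares favorable-outcome rates between two groups, but it can only be computed when group membership labels exist in the data. A minimal sketch, using entirely hypothetical outcome data (not from any real system):

```python
# Illustrative sketch of a group fairness metric. The data and the
# 0.8 threshold (the "four-fifths rule") are assumptions for the
# example; real audits use richer data and multiple metrics.

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(disadvantaged, privileged):
    """Ratio of selection rates between groups.

    Values well below 1.0 suggest the disadvantaged group is
    selected less often; below ~0.8 is commonly flagged for review.
    """
    return selection_rate(disadvantaged) / selection_rate(privileged)

# 1 = favorable decision (e.g., interview offered), 0 = unfavorable
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # selection rate 0.3
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.7

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # prints 0.43
```

The catch, as the paragraph above notes, is that this computation requires knowing who belongs to which group. When disability status is undisclosed or cannot legally be collected, the labels needed to form `group_a` and `group_b` simply are not available, and the metric cannot be applied.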

Our diversity of abilities, and combinations of abilities, poses a challenge to machine learning solutions that depend on recognizing common patterns. It's important to consider small groups that are not strongly represented in training data. Even more challenging, some unique individuals have data that does not look like anyone else's.

First Workshop on AI Fairness

To stimulate progress on this important topic, IBM sponsored two workshops on AI Fairness for People with Disabilities. The first workshop, in 2018, gathered individuals with lived experience of disability, advocates, and researchers. Participants identified important areas of opportunity and risk, such as employment, education, public safety, and healthcare. That workshop resulted in a recently published report outlining practical steps toward accommodating people with diverse abilities throughout the AI development lifecycle. For example, review proposed AI systems for potential impact, and design in ways to correct errors and raise fairness concerns. Perhaps the most important step is to include diverse communities in both development and testing. This should improve robustness and help develop algorithms that support inclusion.

ASSETS 2019 Workshop on AI Fairness

The second workshop was held at this year's ACM SIGACCESS ASSETS Conference on Computers and Accessibility, and brought together thinkers from academia, industry, government, and non-profit groups. The organizing team of accessibility researchers from industry and academia selected seventeen papers and posters, representing the latest research on AI methods and the fair treatment of people with disabilities in society. Alexandra Givens of Georgetown University kicked off the program with a keynote talk outlining the legal tools currently available in the United States to address algorithmic fairness for people with disabilities. Next, the speakers explored topics including fairness in AI models as applied to disability groups, reflections on definitions of fairness and justice, and research directions to pursue. Going forward, key topics in continuing these discussions are:

  • The complex interplay between diversity, disclosure and bias.
  • Approaches to gathering datasets that represent people with diverse abilities while protecting privacy.
  • The intersection of ableism with racism and other forms of discrimination.
  • Oversight of AI applications.

Ongoing Conversations

Abstracts of all the presentations are available, and the October 2019 issue of the SIGACCESS Newsletter features full position papers for many of the submissions. Join the conversations emerging from the workshop by contacting aiworkshop-assets19@acm.org or using the Twitter hashtag #FATE4PWD.

IBM Accessibility Manager & Researcher
