Exploring AI Fairness for People with Disabilities

This year’s International Day of Persons with Disabilities emphasizes participation and leadership. In today’s fast-paced world, more and more decisions affecting participation and opportunities for leadership are automated: selecting candidates for a job interview, approving loan applications, or admitting students to college. There is a growing trend towards using artificial intelligence (AI) methods, such as machine learning models, to inform or even make these decisions. This raises important questions about how such systems can be designed to treat all people fairly, especially people who already face barriers to participation in society.

[Image: Members of an IBM Business Resource Group]

A Diverse Abilities Lens on AI Fairness

Machine learning finds patterns in data and compares a new input against these learned patterns. The potential of these models to encode bias is well known, and researchers are beginning to explore what this means in the context of disability and neurodiversity. Mathematical methods for identifying and addressing bias are effective when a disadvantaged group can be clearly identified. However, in some contexts it is illegal to gather data relating to disabilities. Adding to the challenge, individuals may choose not to disclose a disability or other difference, yet their data may still reflect their status. This can lead to biased treatment that is difficult to detect, and it means we need new methods for handling potential hidden biases.
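To make this concrete, here is a minimal sketch, in Python with entirely made-up data, of one common group fairness check: the disparate impact ratio. Nothing here comes from the workshops themselves; the point of the sketch is that the computation depends on a known group label, which is exactly what is missing when disability status is undisclosed or cannot legally be collected.

```python
# Illustrative sketch only: a disparate impact check assumes we can identify
# who belongs to the disadvantaged group. All data below is hypothetical.
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates: flagged group vs. everyone else.
    A common rule of thumb treats values below 0.8 as worth reviewing."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

# Hypothetical screening outcomes (1 = selected) and disclosed group labels.
selected = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
disclosed_group = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]

print(f"Disparate impact ratio: {disparate_impact_ratio(selected, disclosed_group):.2f}")
# Without the group labels, this check cannot be computed at all, and any
# bias it would have surfaced stays hidden.
```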

Our diversity of abilities, and combinations of abilities, poses a challenge to machine learning solutions that depend on recognizing common patterns. It is important to consider small groups that are not strongly represented in training data. Even more challenging, unique individuals have data that does not look like anyone else’s; one defensive pattern for this case is sketched below.
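As an illustration of that last point, the following sketch, again hypothetical and using scikit-learn’s IsolationForest on synthetic features, shows one way a system might detect inputs that fall far outside its training distribution and route them to human review instead of guessing:

```python
# Illustrative sketch: flag inputs unlike anything in the training data so
# they can be routed to human review. Features and values are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))        # data from "typical" users
detector = IsolationForest(random_state=0).fit(X_train)

x_typical = np.zeros((1, 4))               # resembles the training data
x_unique = np.full((1, 4), 6.0)            # far from any learned pattern

for name, x in [("typical input", x_typical), ("unique input", x_unique)]:
    label = detector.predict(x)[0]         # +1 = inlier, -1 = outlier
    print(name, "->", "auto-decide" if label == 1 else "route to human review")
```

A deployed system would need far more care than this sketch, but the principle of detecting when a model is out of its depth complements the practical steps discussed below.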

First Workshop on AI Fairness

To stimulate progress on this important topic, IBM sponsored two workshops on AI Fairness for People with Disabilities. The first workshop, held in 2018, gathered individuals with lived experience of disability, advocates, and researchers. Participants identified important areas of opportunity and risk, such as employment, education, public safety, and healthcare. That workshop resulted in a recently published report outlining practical steps towards accommodating people with diverse abilities throughout the AI development lifecycle: for example, reviewing proposed AI systems for potential impact, and designing in ways to correct errors and raise fairness concerns. Perhaps the most important step is to include diverse communities in both development and testing. This should improve robustness and help develop algorithms that support inclusion.

ASSETS 2019 Workshop on AI Fairness

The second workshop was held at this year’s ACM SIGACCESS ASSETS Conference on Computers and Accessibility and brought together thinkers from academia, industry, government, and non-profit groups. The organizing team of accessibility researchers from industry and academia selected seventeen papers and posters representing the latest research on AI methods and the fair treatment of people with disabilities in society. Alexandra Givens of Georgetown University kicked off the program with a keynote talk outlining the legal tools currently available in the United States to address algorithmic fairness for people with disabilities. Speakers then explored topics including fairness in AI models as applied to disability groups, reflections on definitions of fairness and justice, and research directions to pursue. Going forward, key topics for continuing these discussions are:

  • The complex interplay between diversity, disclosure and bias.
  • Approaches to gathering datasets that represent people with diverse abilities while protecting privacy.
  • The intersection of ableism with racism and other forms of discrimination.
  • Oversight of AI applications.

Ongoing Conversations

Abstracts of all the presentations are available, and the October 2019 issue of the SIGACCESS Newsletter features full position papers for many of the submissions. Join the conversations emerging from the workshop by contacting aiworkshop-assets19@acm.org or using the Twitter hashtag #FATE4PWD.
