
Exploring AI Fairness for People with Disabilities


This year’s International Day of Persons with Disabilities emphasizes participation and leadership. More and more of the decisions that affect participation and opportunities for leadership are automated: selecting candidates for job interviews, approving loan applications, or admitting students to college. There is a growing trend toward using artificial intelligence (AI) methods, such as machine learning models, to inform or even make these decisions. This raises important questions about how such systems can be designed to treat all people fairly, especially people who already face barriers to participation in society.

 

[Image: Silhouettes of thirteen people; the one in the middle is sitting in a wheelchair.]

A Diverse Abilities Lens on AI Fairness

Machine learning finds patterns in data and compares new inputs against those learned patterns. The potential for these models to encode bias is well known. In response, researchers are beginning to explore what this means in the context of disability and neurodiversity. Mathematical methods for identifying and addressing bias are effective when a disadvantaged group can be clearly identified. However, in some contexts it is illegal to gather data relating to disabilities. Adding to the challenge, individuals may choose not to disclose a disability or other difference, yet their data may still reflect their status. This can lead to biased treatment that is difficult to detect. We need new methods for handling such hidden biases.
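
To make the dependence on group labels concrete, here is a minimal sketch, in plain Python with hypothetical hiring-screen data, of one common group fairness check: the selection-rate ratio, sometimes called disparate impact. The function names, data, and the 0.8 threshold are illustrative assumptions, not a method described in the workshop reports. The point is that the check can only run when membership in the disadvantaged group is recorded.

```python
# Minimal sketch of a group fairness check: the selection-rate ratio
# (disparate impact). All data and names here are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (e.g., 'invite to interview') outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(outcomes_group, outcomes_rest):
    """Ratio of the group's selection rate to everyone else's.
    Values below ~0.8 are often treated as a red flag
    (the so-called 'four-fifths rule')."""
    return selection_rate(outcomes_group) / selection_rate(outcomes_rest)

# Hypothetical model decisions (1 = selected), split by disclosed status.
decisions_disclosed = [1, 0, 0, 0, 1, 0, 0, 0]      # 25% selected
decisions_not_disclosed = [1, 1, 0, 1, 1, 0, 1, 0]  # 62.5% selected

ratio = disparate_impact(decisions_disclosed, decisions_not_disclosed)
print(f"Selection-rate ratio: {ratio:.2f}")  # 0.40 -- well below 0.8

# The catch: this audit requires the group labels used to split the lists.
# If disability status is never collected or disclosed, the disparity is
# invisible to this kind of check, even though the model may still encode it.
```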

Our diversity of abilities, and of combinations of abilities, poses a challenge to machine learning solutions that depend on recognizing common patterns. It is important to consider small groups that are not strongly represented in training data. Even more challenging, some unique individuals have data that does not look like anyone else’s.
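
A toy illustration of why such outliers are hard: a model that scores inputs by their similarity to the patterns it has seen will assign low confidence to anyone whose data is unlike the training set. The features, values, and scoring function below are all hypothetical; the sketch shows only the general effect of distance-based scoring on unique individuals.

```python
# Toy illustration: similarity-to-training-data scoring penalizes outliers.
# Features and values are hypothetical (e.g., interaction timings in seconds).
import math

def mean(xs):
    return sum(xs) / len(xs)

def similarity_score(value, training_values):
    """Score in (0, 1]: high when the input resembles the training data,
    low when it is far from the learned pattern."""
    mu = mean(training_values)
    sigma = math.sqrt(mean([(x - mu) ** 2 for x in training_values])) or 1.0
    z = abs(value - mu) / sigma
    return math.exp(-z)  # decays as the input moves away from the norm

# Training data dominated by one common pattern of response times.
training_times = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9]

print(similarity_score(1.0, training_times))  # ~1.0: looks 'typical'
print(similarity_score(6.5, training_times))  # ~0.0: a perfectly valid input
                                              # (say, from a switch-device
                                              # user) is scored as an outlier
```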

First Workshop on AI Fairness

To stimulate progress on this important topic, IBM sponsored two workshops on AI Fairness for People with Disabilities. The first workshop, held in 2018, gathered individuals with lived experience of disability, advocates, and researchers. Participants identified important areas of opportunity and risk, such as employment, education, public safety, and healthcare. That workshop resulted in a recently published report outlining practical steps toward accommodating people with diverse abilities throughout the AI development lifecycle. For example, review proposed AI systems for potential impact, and design in ways to correct errors and raise fairness concerns. Perhaps the most important step is to include diverse communities in both development and testing. This should improve robustness and help develop algorithms that support inclusion.

ASSETS 2019 Workshop on AI Fairness

The second workshop was held at this year’s ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2019) and brought together thinkers from academia, industry, government, and non-profit groups. The organizing team of accessibility researchers from industry and academia selected seventeen papers and posters, representing the latest research on AI methods and the fair treatment of people with disabilities in society. Alexandra Givens of Georgetown University kicked off the program with a keynote talk outlining the legal tools currently available in the United States to address algorithmic fairness for people with disabilities. Next, the speakers explored topics including fairness in AI models as applied to disability groups, reflections on definitions of fairness and justice, and research directions to pursue. Going forward, key topics for continuing these discussions are:

  • The complex interplay between diversity, disclosure and bias.
  • Approaches to gathering datasets that represent people with diverse abilities while protecting privacy.
  • The intersection of ableism with racism and other forms of discrimination.
  • Oversight of AI applications.

Ongoing Conversations

Abstracts of all the presentations are available, and the October 2019 issue of the SIGACCESS Newsletter features full position papers for many of the submissions. Join the conversations emerging from the workshop by contacting aiworkshop-assets19@acm.org or using the Twitter hashtag #FATE4PWD.

