December 5, 2018 | Written by: Shari Trewin
Categorized: Accessibility | AI | Diversity & Inclusion
International Day of Persons with Disabilities aims to increase awareness and promote the well-being of people with disabilities in every aspect of life. Fair treatment means equal opportunity to access education, find employment, receive healthcare, find information, and participate in society.
Increasingly, our education, healthcare, recruitment, and information systems involve technologies based on machine learning. To mark International Day of Persons with Disabilities 2018, IBM Accessibility Research has released a paper examining how fair treatment for people with disabilities applies in machine learning.
In the wake of several high profile examples of unwanted bias in AI systems, many AI developers are acutely aware of the need to treat marginalized groups fairly, especially for race and gender. One approach is to balance the data used for training AI models, so that all groups are represented appropriately. There are also many ways to check mathematically for bias against a protected group and make corrections.
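One of the most common mathematical checks is the disparate impact ratio, which compares favorable-outcome rates between a protected group and everyone else. The sketch below is illustrative only: the data, group labels, and the 0.8 threshold (the "four-fifths rule" used in US employment-discrimination guidance) are stand-ins, not part of any specific system described here.

```python
# Illustrative sketch of a disparate-impact check for a binary outcome.
# All data below is invented for the example.

def disparate_impact(outcomes, groups, protected_value):
    """Ratio of favorable-outcome rates: protected group vs. others.

    A ratio below ~0.8 is commonly treated as a red flag
    (the "four-fifths rule").
    """
    protected = [o for o, g in zip(outcomes, groups) if g == protected_value]
    others = [o for o, g in zip(outcomes, groups) if g != protected_value]
    rate_protected = sum(protected) / len(protected)
    rate_others = sum(others) / len(others)
    return rate_protected / rate_others

# Hypothetical hiring outcomes (1 = offer made)
outcomes = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, "A")
print(round(ratio, 2))  # 0.75 -> below 0.8, a potential bias signal
```

Note that a test like this requires knowing each person's group membership — a requirement that, as discussed below, disability data often cannot satisfy.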
One very important aspect of diversity has been neglected: disability. The World Health Organization estimates that 15% of people worldwide have some form of impairment that can lead to disability. Almost all of us will experience sensory, physical or cognitive disability in our lives. Whether permanent or temporary, this is a normal part of human experience that technology can and should accommodate. That includes AI systems.
However, there’s a catch. Disability is quite different from other protected attributes like race and gender. Our point-of-view paper identifies two important differences: extreme diversity, and data privacy.
Disability is not a simple concept with a small number of possible values. It has many dimensions, varies in intensity and impact, and often changes over time. As defined by the United Nations Convention on the Rights of People with Disabilities, disability “results from the interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society”. As such, it depends on context and comes in many forms, including physical barriers, sensory barriers, and communication barriers. The issues faced by a visually impaired person in navigating a city are very different from those of someone in a wheelchair, and a blind person has different challenges from someone with low vision. What this means is that the data of a person with a disability may be highly distinctive. The strategy of achieving fairness by building balanced training data sets cannot easily be applied to the diverse world of disability.
One important consequence of having a disability is that it can lead us to do things in a different way, or to look or act differently. As a result, disabled people may often be outliers in the data, not fitting the patterns learned by machine learning. There is a risk that outlier individuals will not receive fair treatment from systems that rely on learned statistical norms.
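A toy example makes this concrete. Suppose a system models typing speed from user data; a user who relies on a switch-access device types far more slowly than the learned norm, and a standard anomaly heuristic flags exactly that user. The numbers and the two-standard-deviation rule below are invented for illustration.

```python
# Illustrative sketch: how statistical-norm models single out outliers.
# Typing speeds (words per minute); the last user relies on a
# switch-access device. All numbers are invented.

from statistics import mean, stdev

speeds = [62, 58, 65, 60, 63, 59, 61, 64, 57, 8]

mu, sigma = mean(speeds), stdev(speeds)

# A common anomaly heuristic: flag anyone more than two standard
# deviations from the mean. It marks only the disabled user.
outliers = [s for s in speeds if abs(s - mu) > 2 * sigma]
print(outliers)  # [8]
```

A system that treats such flagged points as noise, fraud, or invalid input would be systematically unfair to this user.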
Many people have privacy concerns about sharing disability information. The Americans with Disabilities Act specifically prohibits employers from asking candidates about their disability status during the hiring process. This ‘fairness through unawareness’ approach aims to allow candidates to be evaluated purely on their ability to do the job. People with disabilities know from experience that revealing a disability can be risky.
Recently I listened to a group of students discussing the pros and cons of revealing their disabilities when applying for internships. One chooses not to disclose, believing it will reduce her chances. Another has to reveal his disability so that accommodations can be provided for the application process. A third chooses to disclose by including relevant professional experience at disability organizations in her resume. She argues that her disability is an important driver of her experience and talents, and that disclosing will screen out places where her disability would be seen as a negative.
This dilemma illustrates both the sensitivity of disability information, and some of the reasons why data used to train AI systems often will not contain disability information, but may still reflect the presence of a disability. For people with disabilities to contribute their own data to help test or train AI systems is a public good, but a personal risk. Even if the data is anonymized, the unusual nature of their situation may make them easily re-identifiable. Yet without disability information, existing methods for testing for and removing bias in AI models cannot be applied.
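The re-identification risk can be illustrated with a simple k-anonymity-style check: even after names are removed, a person whose combination of attributes is unusual may be the only record matching it. The records and quasi-identifier fields below are invented for the example.

```python
# Illustrative sketch: "anonymized" records can still re-identify
# someone whose situation is unusual. All records are invented.

from collections import Counter

# (birth-year range, postal code, assistive technology used)
records = [
    ("1985-1990", "94105", "screen reader"),
    ("1985-1990", "94105", "none"),
    ("1985-1990", "94105", "none"),
    ("1990-1995", "94107", "none"),
    ("1990-1995", "94107", "none"),
]

# k-anonymity check: how many records share each attribute combination?
counts = Counter(records)
unique = [r for r, k in counts.items() if k == 1]
print(unique)  # only the screen reader user is unique -> re-identifiable
```

The screen reader user is the only match for their attribute combination, so removing the name offers little protection.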
To ensure that AI-based systems are treating people with disabilities fairly, it is essential to include them in development. We call on developers to take the time to consider who their outliers might be, and who might be impacted by their solution. For example, a voice-controlled service might impact people with speech impairments or deaf speakers whose voices are not well understood by today’s speech recognition systems. A timed online assessment might not be fair to people who use assistive technologies to take it. Whoever the affected stakeholders are, seek them out, and work with them towards a fair and equitable system. If we can identify and remove bias against people with disabilities from our technologies, we will be taking an important step towards creating a society that respects and upholds the human rights of us all.
A version of this story first appeared on VentureBeat.