National Disability Employment Awareness Month (NDEAM) is nearing its halfway point, and we would like to reflect on its theme of “America’s Workforce: Empowering All.” For more than 100 years, IBM has embodied that theme by practicing disability inclusion. While there is always more work to be done, we are proud of what our team has accomplished and excited about where we are headed.
AI Meets Disability
Last week, IBM Research convened AI experts and researchers from academia and industry to participate in talks and workshops on various aspects of AI. One of the focus areas was the ethics of AI, specifically AI fairness for individuals with disabilities.
AI brings tremendous potential to help humans and society. From the accessibility perspective, AI enables more exciting assistive technology. For example, with AI vision, it is now becoming possible to describe an image, a local environment, or a video to a person who has a visual impairment. AI also makes it easier for companies to implement accessibility standards in products and services. At IBM, we have been working on accessibility testing automation and with AI, we are getting close to automatically testing and fixing many accessibility bugs. With AI enabling large scale automation, we hope to see more accessible products and solutions.
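To make the idea of accessibility testing automation concrete, here is a minimal sketch of one classic check: flagging images that lack alternative text. This is an illustrative example only, not IBM's actual tooling; the class and function names are hypothetical, and it uses only Python's standard-library HTML parser.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                # Record the image source (or a placeholder) as a violation.
                self.violations.append(attr_map.get("src", "<no src>"))

def find_missing_alt(html: str):
    """Return the src of every <img> in the markup with missing/empty alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations

page = '<img src="logo.png" alt="IBM logo"><img src="chart.png">'
print(find_missing_alt(page))  # ['chart.png']
```

Real accessibility engines cover many more rules (contrast, labels, keyboard focus), and the promise of AI is to go beyond detection toward suggesting fixes, such as generating candidate alt text automatically.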
While AI can benefit individuals with disabilities, it can also introduce unintended bias. Machine learning (ML), a widely used form of AI, rests on two key ingredients: algorithms, the rules we give the machine, and data. When ML systems are trained, data from people with disabilities is often missing, or has been discarded as “outliers” because it does not fit the “normal” patterns. The result can be machine learning models that are biased against people with disabilities.
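The outlier problem described above is easy to reproduce. Below is a small hedged sketch, with invented numbers, of how a routine z-score filter can silently discard the very data points contributed by users of assistive technology:

```python
import statistics

def drop_outliers(samples, z_thresh=1.5):
    """Remove points more than z_thresh standard deviations from the mean."""
    mean = statistics.mean(samples)
    sd = statistics.pstdev(samples)
    return [x for x in samples if abs(x - mean) <= z_thresh * sd]

# Hypothetical task-completion times in seconds: most users cluster near 30s,
# while two users relying on assistive technology legitimately take longer.
times = [28, 30, 29, 31, 30, 27, 32, 29, 90, 95]
kept = drop_outliers(times)
print(kept)  # [28, 30, 29, 31, 30, 27, 32, 29] -- the two longer times are gone
```

A model trained on `kept` never sees those users, so its predictions about "typical" behavior exclude them by construction. The fix is not to keep every data point uncritically, but to ask whether an "outlier" is noise or an underrepresented population before deleting it.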
To help address this, IBM Research released AI Fairness 360, an open source toolkit. This extensible toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application life cycle.
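One of the group-fairness measures that toolkits like AI Fairness 360 report is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below computes it in plain Python with invented data; it illustrates the metric only and is not the AI Fairness 360 API.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged rate / privileged rate.

    A value near 1.0 suggests parity between groups; values well below 1.0
    (a common rule of thumb flags anything under ~0.8) suggest the
    unprivileged group receives favorable outcomes less often.
    """
    fav = {True: 0, False: 0}   # favorable outcomes per group
    tot = {True: 0, False: 0}   # group sizes
    for y, g in zip(outcomes, groups):
        key = (g == privileged)
        tot[key] += 1
        fav[key] += y
    return (fav[False] / tot[False]) / (fav[True] / tot[True])

# Hypothetical hiring outcomes (1 = favorable), labeled by group,
# where "d" marks applicants with a disability and "nd" those without.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups   = ["nd", "nd", "nd", "nd", "nd", "d", "d", "d", "d", "d"]
print(disparate_impact(outcomes, groups, privileged="nd"))  # 0.25
```

Here the unprivileged group's favorable rate is 0.2 versus 0.8 for the privileged group, giving a ratio of 0.25, which is a strong signal to investigate. Measuring is the first step; the toolkit's mitigation algorithms then adjust the data, the model, or the predictions to reduce such gaps.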
There’s more that we can do to help achieve AI fairness for individuals with disabilities. We would like to make this call to action: Let’s work together to contribute data, raise awareness, and share best practices, so that by eliminating bias, AI benefits humankind in all its abilities.
Accessibility and Usability
While AI might go a long way toward making us more efficient in our jobs, one of the bigger challenges we face is the effectiveness of new technology. Sometimes potential fixes are clear, but sometimes they are not as obvious. The IBM Accessibility Research team is tackling this problem in two ways. First, we have recruited a volunteer brigade of IBMers to gather usability feedback, both on IBM’s offerings and on offerings that we acquire from other vendors. Our own developers and third-party developers can use that feedback to improve solutions for everyone. Second, with Disability:IN and other partners, we are working to understand how we can improve procurement practices. This collaborative team is looking to reevaluate the process end to end: enablement for procurement teams, evaluation of potential solutions, and maintenance of solutions. We want to make sure that all employees get an equal opportunity to contribute by having tools that meet their needs.
This year, millions of people with disabilities will enter the US workforce. It would be a shame not to tap into the potential of this group. So let’s facilitate their transition to the workplace, let’s learn from their experiences and all they have to offer, and let’s get to work empowering all.
We are excited about AI, but we are even more excited about the human intelligence that will allow us to use AI wisely to make this world a better and fairer place for all.