How AI Can Help Shed Bias Baggage and Where to Start in HR

By Barrett Richardson

It's hard to go to an HR conference these days and not hear about AI, diversity, or bias — or all of the above! Some people are worried that AI will take their jobs. I say AI can help people do their jobs – and avoid introducing bias in the process.

Right now, HR professionals are overwhelmed. Too many talent acquisition teams are inundated with tedious tasks: sifting through resumes, sourcing talent in a market where unemployment is so low, and more. AI can help. By matching people to jobs using real data such as skill fit, career path, and industry experience, machine learning models can handle the initial resume screen and surface the top candidates faster.
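To make "matching on real data" concrete, here is a minimal sketch of the kind of scoring such a screen might compute. Everything here (the Candidate and Job shapes, the match_score function, the weights) is hypothetical and illustrative, not any vendor's actual model; a production system would learn these weights from hiring outcomes rather than hard-code them.

```python
# Toy candidate-to-job match score blending skill fit, industry fit,
# and tenure. All names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Job:
    required_skills: set[str]
    industry: str

@dataclass
class Candidate:
    skills: set[str]
    industries: set[str]
    years_in_field: float

def match_score(c: Candidate, job: Job) -> float:
    """Blend skill overlap, industry fit, and tenure into a 0-1 score."""
    skill_fit = len(c.skills & job.required_skills) / max(len(job.required_skills), 1)
    industry_fit = 1.0 if job.industry in c.industries else 0.0
    tenure_fit = min(c.years_in_field / 10.0, 1.0)  # cap credit at 10 years
    # Weights are arbitrary here; a real model would learn them from outcomes.
    return 0.6 * skill_fit + 0.2 * industry_fit + 0.2 * tenure_fit

candidates = [
    Candidate({"python", "sql", "recruiting"}, {"tech"}, 4),
    Candidate({"sql", "excel"}, {"finance"}, 8),
]
job = Job(required_skills={"python", "sql"}, industry="tech")
ranked = sorted(candidates, key=lambda c: match_score(c, job), reverse=True)
print([round(match_score(c, job), 2) for c in ranked])  # [0.88, 0.46]
```

Note what is deliberately absent from the score: name, age, gender, school. Screening only on job-relevant signals is the point of the sections that follow.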

At this point I typically get asked, "How do you ensure the AI isn't biased?" I welcome the question. For the record, humans introduce bias into AI processes. It's unavoidable that we carry our baggage wherever we go. But how do we address this? Is there a way to detect bias before training AI? And is there a way to monitor the AI for bias? Let's dig in.

Focus on skills and qualifications to build the right foundation

Matching applicants to jobs based on qualifications isn't a new idea; we've been doing it for decades. But the definition of job qualifications can be subjective (and open the door to bias), especially for new roles, and determining the right qualifications takes time.

For existing jobs, AI can derive the qualifications from data about current and former employees in the role. Looking at what differentiates those who succeeded from those who didn't gives a more objective view of the attributes that really matter. AI can also surface the skill set needed for success: with capabilities like natural language classification, we can identify skill commonalities even when different people describe the same skill in different ways.

For newer jobs, we can focus on how well the applicant aligns with the skills of the job. Assessments can be a great tool here, particularly for very specific or niche roles (for instance, IBM has built an assessment that identifies the behavioral skills associated with success in cybersecurity).
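To picture the skill-normalization idea mentioned above, here is a toy sketch that collapses differently-worded skills onto one canonical label. A real system would use a trained classifier or embeddings rather than string similarity; the alias table and threshold below are invented.

```python
# Illustrative only: a crude stand-in for natural language
# classification of skills, mapping variant wordings to one label.
from difflib import SequenceMatcher

CANONICAL = {
    "javascript": {"js", "ecmascript", "java script"},
    "machine learning": {"ml", "statistical learning"},
}

def normalize_skill(raw: str, fuzzy_threshold: float = 0.85) -> str:
    s = raw.strip().lower()
    for canonical, aliases in CANONICAL.items():
        if s == canonical or s in aliases:
            return canonical
        # Fuzzy fallback catches near-misses like "javascripts".
        if SequenceMatcher(None, s, canonical).ratio() >= fuzzy_threshold:
            return canonical
    return s  # unknown skills pass through unchanged

print(normalize_skill("JS"))             # -> javascript
print(normalize_skill("Javascripts"))    # -> javascript
print(normalize_skill("Cybersecurity"))  # -> cybersecurity (unmapped)
```

Once resumes and job descriptions speak the same skill vocabulary, the match scoring shown earlier can compare them directly instead of missing candidates over wording differences.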

Biased HR data equals biased AI

AI systems are only as good as the data we put into them. In other words, the age-old saying "garbage in, garbage out" still applies to data-driven AI systems in HR. Bad data can contain implicit racial, gender, or ideological biases, often perpetuated by years of unconscious bias across HR functions. Many AI systems will continue to be trained on such data, making this an ongoing problem. But we believe that bias can be tamed, and that the AI systems that tackle it head-on will be the most successful.

There are tools (like IBM Watson Recruitment's Adverse Impact Analysis module) that can identify potential age, gender, or ethnic bias before the AI is trained; most companies aren't yet running this kind of analysis at all. With that insight, the models can be adjusted to remove the attributes driving the bias. Tools like this can then also be used to monitor for potential bias once the AI is up and running, ensuring that as the machine learns, it isn't producing results that introduce new adverse impact. We call this AI for AI.
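The article doesn't describe how the Adverse Impact Analysis module works internally, but a standard, public test in this space is the EEOC "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, there is potential adverse impact. Here is a sketch of that rule in pandas, on invented data; running the same check on the model's live decisions is what ongoing monitoring looks like.

```python
# Four-fifths (80%) rule check on hypothetical screening outcomes.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["selected"].mean()   # selection rate per group
impact_ratios = rates / rates.max()              # ratio vs. best-treated group
flagged = impact_ratios[impact_ratios < 0.8]     # below the 80% threshold

print(rates.round(2).to_dict())          # {'A': 0.75, 'B': 0.2}
print(impact_ratios.round(2).to_dict())  # {'A': 1.0, 'B': 0.27}
print("Potential adverse impact:", list(flagged.index))  # ['B']
```

A ratio of 0.27 for group B would be a strong signal to examine which attributes in the training data are driving the disparity before letting a model learn from it.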

Addressing bias is our collective responsibility

A crucial principle, for humans and machines alike, is to address bias and thereby prevent discrimination. Bias in AI systems mainly enters through the data or the algorithmic model. As we work to develop AI systems we can trust, it's critical to train these systems on unbiased data and to build algorithms whose decisions can be easily explained.

Here at IBM, scientists have devised an independent bias rating system that can determine the fairness of an AI system; the methodology also reduces bias that may be present in a training dataset. And as AI systems find, understand, and point out human inconsistencies in decision making, they could reveal the ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views.
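IBM's rating methodology isn't published in this article, but one well-known, published technique for reducing bias already present in a training set is "reweighing" (Kamiran and Calders, 2012): weight each training example so that the protected attribute and the outcome label look statistically independent. The sketch below implements that idea in plain pandas on invented data; it is offered as an example of the family of techniques, not as IBM's system.

```python
# Reweighing sketch: weight = P(group) * P(label) / P(group, label).
# Over-represented group/label combinations get weights below 1.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1,   1,   0,   0,   0,   1,   0,   0],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / n

def weight(row):
    # Expected frequency under independence / observed frequency.
    return (p_group[row["group"]] * p_label[row["label"]]) / p_joint[(row["group"], row["label"])]

df["weight"] = df.apply(weight, axis=1)
print(df)
```

Training a model with these sample weights de-emphasizes the historically over-represented combinations, so the model stops treating an accident of the data as a rule. Techniques in this family are the kind of thing an open source bias toolkit, like the one mentioned below, is meant to package.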

We also recently announced technology that gives businesses new transparency into AI, enabling them to more fully harness and regulate its power. And IBM will release an AI bias detection and mitigation toolkit into the open source community, bringing forward tools and education to encourage global collaboration around addressing bias in AI.

The unbiased path forward

In many ways, AI is still in its infancy. We understand the immense responsibility that comes with automating complex tasks that were once the domain of humans. Humans must now be the respected advisors, experts, and judges who know when to say "no" and overrule mistakes.

At IBM, we believe that artificial intelligence actually holds the keys to mitigating bias in AI systems – and offers an unprecedented opportunity to shed light on the existing biases we hold as humans. In the process of recognizing our bias and teaching machines about our common values, we may improve more than AI. We might just improve ourselves.

Update: Learn more about the IBM Talent & Transformation Services announced on November 28, which include bias reduction capabilities that can flag potential bias in recruitment efforts, such as the language in job descriptions.