What is AI bias?

Contributor: James Holdsworth

Date: 12/22/23

What is AI bias?

AI bias, also called machine learning bias or algorithm bias, refers to the occurrence of biased results due to human biases that skew the original training data or AI algorithm—leading to distorted outputs and potentially harmful outcomes.

When AI bias goes unaddressed, it can impact an organization’s success and hinder people’s ability to participate in the economy and society. Bias reduces AI’s accuracy, and therefore its potential.

Businesses are less likely to benefit from systems that produce distorted results. And scandals resulting from AI bias could foster mistrust among people of color, women, people with disabilities, the LGBTQ community, or other marginalized groups.

AI models absorb societal biases that can be quietly embedded in the mountains of data they are trained on. Data collection that reflects historical inequity can harm historically marginalized groups in use cases such as hiring, policing and credit scoring, among many others. According to The Wall Street Journal, “As use of artificial intelligence becomes more widespread, businesses are still struggling to address pervasive bias.”1


Real-world examples and risks 

When AI makes a mistake due to bias—such as groups of people denied opportunities, misidentified in photos or punished unfairly—the offending organization suffers damage to its brand and reputation. At the same time, the people in those groups and society as a whole can experience harm without even realizing it. Here are a few high-profile examples of disparities and bias in AI and the harm they can cause.

In healthcare, underrepresentation of women or minority groups in training data can skew predictive AI algorithms.2 For example, computer-aided diagnosis (CAD) systems have been found to return lower-accuracy results for African-American patients than for white patients.

While AI tools can streamline resume scanning to help identify ideal candidates, the information they request and the answers they screen out can produce disproportionate outcomes across groups. For example, if a job ad uses the word “ninja,” it might attract more men than women, even though that is in no way a job requirement.3

As a test of image generation, Bloomberg requested more than 5,000 AI images and found that, “The world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.”4 A similar study of AI art generation, using Midjourney, requested images of people in specialized professions. The results showed both younger and older people, but the older people were always men, reinforcing gendered assumptions about the role of women in the workplace.5

AI-powered predictive policing tools used by some organizations in the criminal justice system are supposed to identify areas where crime is likely to occur. However, they often rely on historical arrest data, which can reinforce existing patterns of racial profiling and disproportionate targeting of minority communities.6

Sources of bias 

Distorted results can harm organizations and society at large. Here are a few of the more common types of AI bias:7

  • Algorithm bias: Skewed results can occur if the problem or question posed is incorrect or not specific enough, or if the feedback to the machine learning algorithm does not help guide the search for a solution.

  • Cognitive bias: AI technology requires human input, and humans are fallible. Personal bias can seep in without practitioners even realizing it. This can impact either the dataset or model behavior. 
     

  • Confirmation bias: Closely related to cognitive bias, this occurs when AI relies too heavily on pre-existing beliefs or trends in the data, reinforcing existing biases and failing to identify new patterns or trends.
     

  • Exclusion bias: This type of bias occurs when important data is left out of the dataset being used, often because the developer has failed to recognize new and important factors.
     

  • Measurement bias: This bias is caused by incomplete data, most often through an oversight or lack of preparation that results in a dataset that does not include the whole population that should be considered. For example, if a college wanted to predict the factors that lead to successful graduation but included data only on graduates, the results would completely miss the factors that cause some students to drop out.
     

  • Out-group homogeneity bias: This is a case of not knowing what one doesn’t know. People tend to have a better understanding of in-group members—the group they belong to—and to perceive them as more diverse than out-group members. The result can be developers creating algorithms that are less capable of distinguishing between individuals who are not part of the majority group in the training data, leading to racial bias, misclassification and incorrect answers.

  • Prejudice bias: This occurs when stereotypes and faulty societal assumptions find their way into the algorithm’s dataset, which inevitably leads to biased results. For example, AI could return results showing that all doctors are male and all nurses are female.
     

  • Recall bias: This develops during data labeling, when labels are applied inconsistently because of subjective observations.
     

  • Sample/selection bias: This is a problem when the data used to train the machine learning model is not large enough, not representative enough or too incomplete to sufficiently train the system. For example, if all of the school teachers consulted to train a model have the same academic qualifications, the model may learn to favor only future teachers with identical academic qualifications.
     

  • Stereotyping bias: This happens when an AI system—usually inadvertently—reinforces harmful stereotypes. For example, a language translation system could associate some languages with certain genders or ethnic stereotypes. McKinsey gives a word of warning about trying to remove prejudice from datasets: “A naive approach is removing protected classes (such as sex or race) from data and deleting the labels that make the algorithm biased. Yet, this approach may not work because removed labels may affect the understanding of the model and your results’ accuracy may get worse.”8
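
To illustrate the McKinsey caution above, here is a minimal, hypothetical sketch (in Python with pandas, using made-up column names such as gender and zip_code): even after the protected column is dropped, a correlated proxy feature can still carry the same signal, so a model trained on the “debiased” data could reproduce the original disparity.

```python
# A minimal, hypothetical sketch: dropping a protected attribute (gender)
# does not remove bias when a proxy feature (zip_code) still encodes it.
import pandas as pd

# Toy data: zip_code 1 is mostly one gender, zip_code 2 mostly the other.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "F", "M"],
    "zip_code": [1,   1,   1,   2,   2,   2,   1,   2],
    "hired":    [0,   0,   1,   1,   1,   1,   0,   1],
})

# Naive "debiasing": remove the protected column entirely.
features = df.drop(columns=["gender"])

# The proxy still carries the signal: the hiring rate by zip_code mirrors
# the hiring rate by gender, so a model trained on `features` could
# reproduce the original disparity anyway.
print(df.groupby("gender")["hired"].mean())
print(df.groupby("zip_code")["hired"].mean())
```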

Principles for avoiding bias 

The first step in avoiding the bias trap is simply to step back at the beginning and give an AI effort some thought. As with almost any business challenge, problems are much easier to fix up front than after the train wreck, when all that remains is sorting through the damage. Yet many organizations are in a rush; that penny-wise, pound-foolish approach costs them.

Identifying and addressing bias in AI requires AI governance, or the ability to direct, manage and monitor the AI activities of an organization. In practice, AI governance creates a set of policies, practices and frameworks to guide the responsible development and use of AI technologies. When done well, AI governance helps to ensure that there is a balance of benefits bestowed upon businesses, customers, employees and society as a whole.

AI governance often includes methods that aim to assess fairness, equity and inclusion. Approaches such as counterfactual fairness identify bias in a model’s decision making and help ensure equitable results, even when sensitive attributes, such as gender, race or sexual orientation, are included.
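
As a rough illustration, the hypothetical sketch below approximates a counterfactual-style check with scikit-learn: flip a sensitive attribute for each record and compare the model’s decisions. (The formal definition of counterfactual fairness relies on a causal model; this attribute-flip test is only an approximation, and the feature names are illustrative.)

```python
# A simplified, hypothetical counterfactual-style check: flip the
# sensitive attribute and see whether the model's decision changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [years_experience, sensitive_flag]; names are illustrative.
X = np.array([[1, 0], [2, 0], [3, 0], [4, 1], [5, 1], [6, 1]])
y = np.array([0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)

X_flipped = X.copy()
X_flipped[:, 1] = 1 - X_flipped[:, 1]      # flip the sensitive attribute

original = model.predict(X)
counterfactual = model.predict(X_flipped)

# Any row whose decision changes suggests the model is leaning on the
# sensitive attribute rather than on qualifications alone.
print("decisions changed for rows:", np.where(original != counterfactual)[0])
```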

 Because of the complexity of AI, an algorithm can be a black box system with little insight into the data used to create it. Transparency practices and technologies help ensure that unbiased data is used to build the system and that results will be fair. Companies that work to protect customers’ information build brand trust and are more likely to create trustworthy AI systems.

To provide another layer of quality assurance, institute a “human-in-the-loop” system in which the AI offers options or makes recommendations that are then approved by human decision makers.
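
A minimal sketch of such a gate, assuming a hypothetical scoring function and a confidence threshold, might look like the following: the system only recommends, and low-confidence cases are routed to a person for the final call.

```python
# A minimal, hypothetical human-in-the-loop gate: the model only
# recommends; low-confidence cases are routed to a person for approval.
# score_application and the threshold are illustrative stand-ins.

REVIEW_THRESHOLD = 0.75  # below this confidence, a human decides

def score_application(application: dict) -> float:
    """Stand-in for a real model score in [0, 1]."""
    return min(0.5 + 0.1 * application.get("years_experience", 0), 1.0)

def decide(application: dict) -> str:
    confidence = score_application(application)
    if confidence >= REVIEW_THRESHOLD:
        return "auto-approved (recommendation logged for audit)"
    # Defer: the system offers a recommendation, a human makes the call.
    return "routed to human reviewer"

print(decide({"years_experience": 4}))   # high confidence -> auto path
print(decide({"years_experience": 1}))   # low confidence  -> human review
```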

How to avoid bias

Here’s a checklist of six process steps that can help keep AI programs free of bias.

1.  Select the correct learning model:

  • When using a supervised model, stakeholders select the training data. It’s critical that the stakeholder team be diverse—not just data scientists—and that they have had training to help prevent unconscious bias.
  • Unsupervised models use AI alone to identify bias. Bias prevention tools need to be built into the neural network so that it learns to recognize what is biased.

2.  Train with the right data: Machine learning trained on the wrong data will produce wrong results. Whatever data is fed into the AI should be complete and balanced enough to replicate the actual demographics of the group being considered (a minimal balance-check sketch appears after this checklist).

3.  Choose a balanced team: The more varied the AI team—racially, economically, by educational level, by gender and by job description—the more likely it will recognize bias. The talents and viewpoints on a well-rounded AI team should include AI business innovators, AI creators, AI implementers, and a representation of the consumers of this particular AI effort.9  

4.  Perform data processing mindfully: Businesses need to be aware of bias at each step when processing data. The risk is not just in data selection: whether during pre-processing, in-processing or post-processing, bias can creep in at any point and be fed into the AI.  

5.  Continually monitor: No model is ever complete or permanent. Ongoing monitoring and testing with real-world data from across an organization can help detect and correct bias before it causes harm; a simple monitoring sketch appears after this checklist. To further avoid bias, organizations should consider assessments by an independent team from within the organization or by a trusted third party.

6.  Avoid infrastructural issues: Aside from human and data influences, sometimes the infrastructure itself can cause bias. For example, if data is collected from mechanical sensors, malfunctioning sensors could inject bias into the measurements. This kind of bias can be difficult to detect and requires investment in the latest digital and technological infrastructure.
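
For step 2, the minimal sketch below (with made-up group labels and a hypothetical reference population) shows one way to compare a training set’s demographic makeup against the population it should represent and derive reweighting factors for under-represented groups.

```python
# A hypothetical balance check: compare training-set demographics with a
# reference population and compute simple reweighting factors.
import pandas as pd

train = pd.DataFrame({"group": ["A"] * 70 + ["B"] * 20 + ["C"] * 10})

reference = {"A": 0.50, "B": 0.30, "C": 0.20}   # assumed actual demographics

observed = train["group"].value_counts(normalize=True)
weights = {g: reference[g] / observed[g] for g in reference}

print(observed.to_dict())   # e.g. {'A': 0.7, 'B': 0.2, 'C': 0.1}
print(weights)              # up-weights B and C, down-weights A
```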
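
For step 5, the monitoring sketch below (again with hypothetical data) tracks the disparate-impact ratio, each group’s selection rate relative to the most-favored group, on real-world predictions and flags groups that fall below the commonly used 0.8 threshold.

```python
# A hypothetical monitoring sketch: flag groups whose selection rate falls
# below 80% of the most-favored group's rate (the "four-fifths" rule).
import pandas as pd

live = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   1,   0,   0,   1],
})

rates = live.groupby("group")["approved"].mean()
ratio = rates / rates.max()

for group, r in ratio.items():
    status = "OK" if r >= 0.8 else "ALERT: possible bias"
    print(f"{group}: selection-rate ratio {r:.2f} -> {status}")
```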

Footnotes

1 The Wall Street Journal: Rise of AI Puts Spotlight on Bias in Algorithms

2 Booz Allen Hamilton: Artificial Intelligence Bias in Healthcare

3 LinkedIn: Reducing AI Bias — A Guide for HR Leaders

4 Bloomberg: Humans Are Biased. Generative AI Is Even Worse

5 The Conversation US: Ageism, sexism, classism and more — 7 examples of bias in AI-generated images

6 Technology Review: Predictive policing is still racist—whatever data it uses

7 TechTarget: Machine learning bias (AI bias); Chapman University AI Hub: Bias in AI; AIMultiple: Bias in AI — What it is, Types, Examples & 6 Ways to Fix it in 2023

8 McKinsey: Tackling bias in artificial intelligence (and in humans)

9 Forbes: The Problem With Biased AIs (and How To Make AI Better)