
Diversity needed to tackle the inherent bias in artificial intelligence


AI is designed to make our lives easier. But there’s a big problem when the people who design these systems build their own biases into them.

Social distancing has been one of the most significant themes of 2020, and it’s not limited to human contact. In June, three technology giants distanced themselves from the multibillion-dollar business of facial recognition, amid a surge of global anti-racism demonstrations.

Why? Forbes reported that the datasets used to create the technology lacked diversity, creating a significant risk of misidentification, with women and people of colour most affected.

Experts say it’s time for the industry to diversify – partly to do right by the community, partly to improve commercial outcomes – and women in tech are leading the charge.

AI IS DESIGNED TO LOOK FOR BIAS

Before we dive in, it’s important to understand what AI does. The most common use is to automate tasks, which can increase the speed and accuracy of things like data-crunching, while cutting costs. Computers are programmed with certain rules, which in theory means AI is not prone to human biases.

Artificial intelligence is an incredibly useful tool. It’s used to make diagnostic decisions in healthcare, to allocate resources for social services in things like child protection, to help recruiters crunch through piles of job applications, and much more. The technology is brilliant and sound, but it can be let down by the data it’s churning through.

Some programs are also limited by the knowledge and experience of the people who write them, which means there are always blind spots. Data scientist Cathy O’Neil, author of Weapons of Math Destruction, calls it “opinions embedded in mathematics”. Numbers can be interpreted in all manner of ways, which means even computer-made decisions aren’t always fair.

Lisa Bouari says the trouble is that AI is inherently biased. “If you think about why we use AI, we’re trying to find patterns, related groups or deduce things so we can inform business decisions,” she says, explaining the only way to remove bias completely would be to take humans out of the equation at every step before, during and after using AI.

Bouari is Executive Director at OutThought, which designs and builds virtual assistants and chatbots using artificial intelligence. This year she was named on IBM’s annual Women Leaders in AI list, which honours women from around the world shaping the future of technology.

FLAWED DATA HAS SERIOUS CONSEQUENCES

Case in point: a 2016 report from independent news organisation ProPublica claimed that a computer program US courts were using to inform sentencing decisions may have been unfairly biased against African American prisoners.

ProPublica’s analysis suggested the model rated black prisoners as almost twice as likely to reoffend as white prisoners, potentially neglecting to factor in the higher rates of arrest and false imprisonment among the black community.

“The dataset they were using to try and predict this had human bias already in it, which was unfavourable to African-Americans,” Bouari tells news.com.au. “That data was assuming they were rightly imprisoned, which they weren’t, and so the outcome of that model was that instead of having a fair view on who was likely to reoffend, they actually ended up with a model that exaggerated the very issue they were trying to solve, because it was in the data to begin with.”
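To illustrate the mechanism Bouari describes, here is a minimal sketch in Python, using entirely synthetic data rather than anything from the real court system: when the historical labels a model learns from already encode skewed decisions, the model reproduces that skew, even though the underlying risk is identical across groups. The variable names and numbers are illustrative assumptions only.

```python
# Minimal sketch (synthetic data, not the actual court model): bias already
# present in historical labels is reproduced by a model trained on them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "group" stands in for a protected attribute; "prior_record" is a feature
# that is inflated for group 1 by historical over-policing.
group = rng.integers(0, 2, size=n)
true_risk = rng.normal(size=n)                 # same distribution for both groups
prior_record = true_risk + 1.5 * group + rng.normal(scale=0.5, size=n)

# Historical labels reflect past decisions that were harsher on group 1,
# so the "ground truth" the model learns from is already skewed.
label = (true_risk + 1.0 * group + rng.normal(scale=0.5, size=n)) > 1.0

X = prior_record.reshape(-1, 1)
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# The model assigns higher risk to group 1 even though true_risk was drawn
# from the same distribution for both groups.
print("mean predicted risk, group 0:", round(float(scores[group == 0].mean()), 3))
print("mean predicted risk, group 1:", round(float(scores[group == 1].mean()), 3))
```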

It was hard for women to even reach the interview stage at Amazon once the company started using an AI system to sift through CVs, as the developers had unintentionally built in a bias against women. Picture: iStock

Another classic example occurred at Amazon, where AI was used as a recruitment tool. It scanned years’ worth of resumes, learning to identify the types of candidates that would be successful, and promptly began discriminating against women. Nearly all of the company’s recent hires had been men, so the computer learnt that being male was a favourable attribute, reinforcing the existing bias.
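The Amazon example can be made concrete with a small sketch in Python, using a handful of made-up CVs rather than Amazon’s actual system or data: a text classifier trained on historically skewed hiring outcomes attaches weight to gendered terms instead of skills. The CV snippets and tokens below are illustrative assumptions only.

```python
# Minimal sketch (made-up CVs, not Amazon's system): a screening model trained
# on male-dominated historical hires learns gendered words as a proxy for "success".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic training set: past CVs and whether the candidate was hired.
# The technical skills are identical; only the gendered terms and the skewed
# historical outcomes differ.
cvs = [
    "captain men's rugby team, python, sql",
    "men's chess club, java, leadership",
    "python, machine learning, men's rowing",
    "captain women's rugby team, python, sql",
    "women's chess club, java, leadership",
    "python, machine learning, women's rowing",
]
hired = [1, 1, 1, 0, 0, 0]  # skewed historical outcomes

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the tokens "men" and "women" carry the signal,
# not the skills, which appear equally in both groups of CVs.
for token, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                            key=lambda pair: pair[1]):
    print(f"{token:12s} {weight:+.2f}")
```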

Although the technology functioned correctly, human bias tainted the outcomes. It raises the question: could those outcomes have been different if the development teams had been more diverse?

IMPORTANCE OF DIVERSE THOUGHT

Of course, these are just two scenarios. Improving the quality of our datasets is one essential step in tackling bias; however, Bouari says we also need to think critically about the way we use that data.

“We need to make sure the data we’re using to begin with is the correct set of data,” she says. “[But] it’s not just getting the data right, we need to make sure teams are using the correct models and really thinking about the problem, in relation to the data, in relation to the question they’re trying to answer.”
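One routine way teams put this advice into practice is to audit a model’s outputs as well as its inputs, for example by comparing selection rates across demographic groups. The sketch below is a Python illustration with made-up decisions and group labels; the 0.8 “four-fifths” threshold mentioned in the comment is a commonly used guideline, not something from the article.

```python
# Minimal sketch of a disparate-impact style check: compare the rate of
# positive decisions across groups. Decisions and groups here are made up.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive decisions per group."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]   # 1 = shortlisted by the model
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
print(rates)  # {'a': 0.6, 'b': 0.4}
print("ratio:", round(min(rates.values()) / max(rates.values()), 2))  # 0.67, below the common 0.8 "four-fifths" guideline
```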


Lucy Lin, Founder and Chief Marketing Officer at Forestlyn.com, says diversity is key to avoiding “groupthink syndrome”, because people with different genders, ages, skills, cultural values, personalities and backgrounds will approach problems in different ways. The biggest challenge in the field of AI, she says, isn’t the technology itself, but the ethics around it.

“While the laws and regulations guiding AI are still in their infancy, we must question if the data we’re using is correct and if we trust the data source. The reputation of the source becomes incredibly important,” she tells news.com.au, explaining that transparency is key. “To address the data ownership perspective, you really need to ask for permission for usage … and you can use new technology, like blockchain, so people can see where it’s sourced and check its authenticity.”

DIVERSITY HAS COMMERCIAL REWARDS

The other side of this issue, of course, is the commercial outcomes. American research and advisory firm Gartner estimates the business value created by artificial intelligence will reach US $3.9 trillion by 2022.

“Women make up less than five per cent of venture capital, and these numbers are even lower with minority women,” says Shelli Trung, Managing Partner for VC firm REACH Australia. “If algorithms and products are not created for and catered to 50 per cent of the population, like women or minority groups, this limits the ability for the product to successfully reach more customers and scale as a business.”

Four out of the six investments she led last year included AI. In addition to creating better outcomes, she says mitigating bias and reducing discrimination will ultimately also benefit the industry’s bottom line.

Originally published on news.com.au.
