How IBM is building trustworthy AI

By Ross Farrelly, Director, Data Science and Artificial Intelligence, IBM A/NZ

As IBM’s leader for artificial intelligence (AI) in the Australia and New Zealand region, I spend a lot of time talking to companies about how they can realize the benefits associated with this powerful technology.

That includes how AI can help them modernize, respond to changes faster with real-time intelligence, automate at scale, and generally accelerate improved business outcomes. These gains come from AI's ability to understand and communicate in natural language, classify images, and extract meaningful insights from large volumes of semi-structured data.

Recently, though, one topic has been cropping up far more than any other: trustworthy AI. Business leaders know that trust is a crucial issue for any enterprise looking to invest in AI solutions, because if customers don't trust the technology, it's unlikely they'll adopt the services it enables.

Further, if AI is implemented without an ethical framework, it can damage a company's reputation and erode its social license to operate.

Why we need trustworthy AI
While there’s enormous potential for AI to improve the performance of our businesses and our lives, it can also be misused. For instance, it’s possible for AI to reflect historical biases embedded in the datasets on which the system has been trained.

Suppose a bank wants to predict whether it should approve a loan, and that in the past this particular bank hasn't given as many loans to women or people from certain minority groups.

Those patterns will be present in the bank's dataset, which could mean that an AI algorithm is less able to assess applications from women and minority groups, and so errs toward declining them. In other words, the bank's AI program would be picking up a bias and amplifying it, or at the very least perpetuating it.
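To make this concrete, here is a minimal sketch in Python, using entirely synthetic data and a hypothetical approval rule, of how a model trained on historically skewed decisions reproduces that skew:

```python
# A minimal sketch with synthetic data; the approval rule below is a
# hypothetical stand-in for a biased lending history.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)    # 1 = historically favored group
income = rng.normal(60, 15, n)   # incomes are similar in both groups

# Historical decisions: the same income threshold for everyone, but the
# unfavored group (group == 0) was usually declined regardless of income.
approved = ((income > 55) & ((group == 1) | (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([group, income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The model learns to rely on `group` itself, perpetuating the old bias.
```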

Three pillars of an ethical approach
So how can we ensure people will trust computer decisions and be confident that the results are accurate and unbiased? How do we create AI that helps us, rather than causes harm?

Locking in fairness starts with ensuring a system is built by well-trained teams, and with representative data, so that the technology does not replicate societal biases toward marginalized groups. We also need to make sure the system is designed to detect and mitigate bias as new data is introduced.

At the same time, users need to be able to understand how an AI system has arrived at its decisions and recommendations. That's not possible if an algorithm is a black box that just runs in the background and spits out a decision without transparency or explanation of how it was developed. Rather, what's required is hard evidence, such as plain-language explanations customized for different personas, so a system's output can be clearly explained.
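As one hedged illustration of what such evidence can look like, the sketch below trains a shallow decision tree as an interpretable surrogate and renders its decision rules as text, a starting point for plain-language explanations; the dataset and feature names are hypothetical:

```python
# A sketch of surrogate-model explainability; the data and feature
# names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["income", "debt_ratio", "years_employed", "savings"]

# A shallow tree is easy to read and can stand in for a more complex
# production model when explaining individual decisions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text emits human-readable if/then rules that can be restated
# in plain language for different audiences.
print(export_text(tree, feature_names=features))
```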

Equally important is making the parameters of data ownership and usage clear to users. Companies should prioritize opt-in and informed consent before people interact with their AI, and highlight any policies on data privacy.

IBM: ethics at the core
At IBM, we’ve always prioritized the ethical considerations of the technologies we bring into the world. For example, we won’t invest in technology that we believe has a high probability of being misused. That’s why in 2020, we declared we would no longer work on artificial intelligence for facial recognition, following concerns the software was being used for citizen surveillance and racial profiling by certain law enforcement agencies.

We’ve also created an internal AI ethics board that examines all our initiatives—both technical and non-technical—in light of our IBM principles of trust and transparency.

And we have prioritized the research and release of open-source toolkits to make it easier for the developer community to collaborate with one another, establish common standards and platforms, and so advance trust in AI. These include AI Fairness 360, which allows developers to examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
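To give a flavor of the toolkit, here is a minimal sketch using AI Fairness 360's Python API on a tiny, invented loan table; the column names and values are hypothetical, and a real audit would use production data:

```python
# pip install aif360 pandas
# Checks a toy loan table for bias, then applies Reweighing, one of
# AI Fairness 360's pre-processing mitigation algorithms.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: sex encoded as 1 (privileged) / 0 (unprivileged).
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [55, 60, 40, 80, 52, 61, 43, 78],
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Disparate impact is the ratio of favorable-outcome rates between the
# groups; 1.0 means parity, values well below 1.0 signal disadvantage.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights so the data looks fairer to a
# downstream model trained with those weights.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after:", metric_after.disparate_impact())
```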

These efforts, born at IBM Research, have also led to innovative business solutions such as Watson OpenScale, a tool that helps developers detect and correct unwanted bias in datasets and machine learning models, and that suggests algorithms to mitigate it.

Giving a voice to Samoan communities
All this means that when we help organizations scale up AI across their business, we do so with AI ethics being a primary concern. We supply the guidelines and practices, the monitoring, and the tools that can ensure that decisions are explainable, transparent, and fair.

That’s the approach we adopted on a recent project with Beca, one of Asia-Pacific’s largest advisory, technology and engineering design consultancies. Beca’s experts frequently need to measure sentiment and get feedback from New Zealand’s ethnic communities on large public sector infrastructure projects. But online responses have often been poor from communities where English is a second language, and from cultures where it’s not traditional to provide feedback to government organizations.

Beca used IBM’s AI solution Watson to create Tala, a natural language chatbot that can interact with people online in Samoan or English.

The model was trained by a diverse team that included developers of Samoan heritage.

That helped ensure that Tala understands the differences between formal and casual Samoan, so it can be respectful towards elders using the tool while also communicating with young people in an easy and natural way.
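For developers curious how an interaction with a Watson Assistant skill looks in code, here is a hedged sketch using the ibm-watson Python SDK; the API key, service URL, and assistant ID are placeholders, and Tala's actual configuration is not public:

```python
# pip install ibm-watson
# A sketch of sending one message to a Watson Assistant v2 skill.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")          # placeholder
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.au-syd.assistant.watson.cloud.ibm.com")

ASSISTANT_ID = "YOUR_ASSISTANT_ID"                        # placeholder
session = assistant.create_session(assistant_id=ASSISTANT_ID).get_result()

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session["session_id"],
    input={"message_type": "text", "text": "Talofa!"},    # Samoan greeting
).get_result()
print(response["output"]["generic"][0]["text"])
```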

Responses to pilot projects have been extremely positive. Now, people can participate in public consultation sessions in a language, and at a time and place, that they are comfortable with. Tala is an AI tool that people trust, and that in turn is helping boost meaningful community engagement and ensure more voices can be heard as part of inclusive decision making.

Driving business transformation with trusted AI
Getting AI right can unlock enormous benefits for companies and consumers alike. But as we devise a new social contract with this powerful tool, we do need to set standards for its ethical development and deployment.

When organizations develop their AI with fairness, transparency, and explainability in mind from the outset, it's much easier to recognize any bias that might affect the system's decisions and take steps to correct it.

That way, it is much easier to avoid harm, manage risk, and safeguard company reputations—ensuring a way forward for the widespread acceptance of AI in our daily lives.
