Building trust in AI: getting the wizard out from behind the curtain

2 minute read | September 2, 2020

Artificial Intelligence (AI) is often regarded as “Great and Powerful”; it can add tremendous value by transforming business workflows with faster, smarter decisions. At the same time, AI can be mysterious and even scary. To build trust, AI needs to be transparent and explainable: “out from behind the curtain,” so to speak. As IBM’s recent study on AI Ethics found, corporate boards are looking to data and technology leaders to make that happen, and I couldn’t agree more. CDOs and CTOs can be instrumental in bringing forth both human value and human values in enterprise AI.

Putting the human first

To build trust in business AI, we must always put the value of the human first. This should happen at both the data-provider level and the decision-maker level. At the provider level, building trust starts with data governance to ensure that the data itself can be trusted. In our organization, embedded within this is the IBM promise that “your data is your data, and your insights are your insights.” Then, when AI is applied, it needs to be trusted as a means to augment human decision making, not replace it. Quite simply, if users feel that AI will disintermediate them, they won’t use it.

At the decision-maker level, explainable AI is fundamental to trust. Understanding the context for a recommendation builds trust in the AI that provided it and confidence in the decision to follow it. Clear reasoning respects the value of the human as the decision maker. This is particularly critical in the business setting, where ultimately the decision maker’s job can be on the line. Consider the doctor who uses AI to help with diagnosis and treatment: those decisions can quite literally be life or death.

Using AI to help society

Another important way to build trust is to leverage AI to support human values. A couple of examples come to mind. First are the initiatives of the Science for Social Good program led by IBM Research. These include a project with the United Nations Development Programme that developed an AI algorithm to streamline sustainability procedures in developing countries, and the Cognitive Financial Advisor for Low-wage Workers.

Another example comes out of my team. A few years back, our Global Chief Data Office created an internal solution called Operations Risk Insights (ORI). ORI continually monitors more than 150 data sources, including The Weather Channel and social media, and uses AI to assess threats to the IBM supply chain. Recognizing the value of this capability for disaster relief globally, we shared it with Day One Disaster Relief, Save the Children, and others. Most recently, we created a COVID-19 overlay for ORI, which we have also shared with our non-profit partners.

In our recent virtual CDO Summit on AI Ethics & Trust, we received several questions along the lines of “Is AI the right thing to do?” CDOs and CTOs can help make sure the answer is yes. Applied in a framework of transparency, explainability, ownership, and accountability, AI can build trust and assure our users that “there’s no place like AI” to augment the human experience.

This topic came out of the latest IBM CDO Summit. You can watch the replay of CDO Summit: AI Ethics & Trust to hear more from Inderpal, as well as his peers: Jerry Gupta, SVP, Data and Tech Leader, Swiss Re; Timothy Nagle, Chief Privacy Officer, US Bank; JoAnn Stonier, Chief Data Officer, MasterCard; Francesca Rossi, AI Ethics Global Leader, IBM; and Seth Dobrin, Chief Data Officer for Cloud and Cognitive, IBM.