October 22, 2020 By Priya Krishnan 4 min read

It’s all about trust

Two things are true about trust: it’s at the core of all successful human relationships, and it’s not easy to attain. Trust is full of nuance, and when achieved, it can be transformational. The same can be said of artificial intelligence.

We have seen that a business that can trust its AI does more, going beyond its expectations and projections. Every business uses AI differently, and trust looks different in each industry and for every use case. So what does trust mean when it comes to a technology like AI? IBM Research has broken its taxonomy of AI trust into three dynamics:

  • Ethics
  • Governance
  • Trustworthiness

To take those ideas further, trust in AI means understanding:

  • Where the data is coming from
  • How that data is being used
  • What data the training model contains
  • How all of this affects the entire lifecycle of the AI

For IBM, trust is a foundational pillar of AI. Whether you’re looking at data collected by AI or seeing how AI performs within your industry’s use-case guidelines, you need those insights delivered in a trusted manner. As such, we’ve developed a multifaceted perspective on this complex topic, which helped us devise tools and capabilities for enterprise use that help businesses remain confident in their AI.

The Pillars of Trust

Our engineers at IBM Research started with the question, “What would it take to trust the output of an AI model?” The properties they came up with centered around accuracy, fairness, understandability, dependability and transparency in AI. They further developed those key takeaways into the AI Pillars of Trust:

  • Performance
  • Fairness
  • Explainability
  • Robustness
  • Transparency

Over the last several years, IBM Research has been building algorithms that imbue AI with these properties of trust. It then created toolkits that embody those algorithms, and we’ve now taken those innovations and added them to the Watson OpenScale capabilities inside IBM Cloud Pak for Data.

AI Governance, which is part of the overall taxonomy, is how a business operationalizes and vets AI results, so that it gets only what’s intended. It’s also the ability to prove trustworthiness; in regulated industries, this implies audit readiness.

However, vetting results requires documenting the model’s inputs and behavior, which is manual, tedious work. It’s also hard to share metadata about models across multiple enterprise tools and platforms, and current practices and tools are not optimized for AI.
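To make that concrete, here is a minimal sketch of the kind of model metadata record such governance tooling automates. The fields, names and values are illustrative assumptions for this example only; they are not the schema Watson OpenScale or Cloud Pak for Data actually uses:

```python
# A minimal, hypothetical model metadata record: the inputs, provenance
# and measured behavior that governance tooling would otherwise make you
# document by hand. Field names and values are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    name: str
    version: str
    training_data: str                           # provenance: where the data came from
    features: list                               # inputs the model consumes
    metrics: dict = field(default_factory=dict)  # measured behavior

record = ModelRecord(
    name="credit-risk",
    version="1.2.0",
    training_data="s3://example-bucket/loans-2020-q3.csv",  # hypothetical path
    features=["income", "debt_ratio", "years_employed"],
    metrics={"accuracy": 0.91, "disparate_impact": 0.83},
)

# Serialize to JSON so the metadata can move between tools and platforms.
print(json.dumps(asdict(record), indent=2))
```

Capturing a record like this automatically, every time a model is trained or deployed, is what turns “prove trustworthiness” from a manual chore into audit readiness.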

IBM Cloud Pak for Data combines the best of IBM Research and engineering to enable a fully governed AI Lifecycle, making it easier to know your model, trust your model, and use your model.

AI Fairness

Fairness is fundamental to who we are and where we want to be as a society. As such, bias in AI has drawn much attention in the last couple of years. In our quest for unbiased AI, IBM Research has authored a pioneering algorithm for bias detection.

Imagine a credit lender deciding whether to approve a loan. When the lender checks the client against its risk model, a lot of information is fed into the model to help it recommend approving or denying the loan. This information comes from many sources, including the lender’s own data and, often, third-party data. In most cases, the lender cannot know whether that data is free of bias.

Products like Watson OpenScale in Cloud Pak for Data provide tools that mitigate bias and detect drift and performance degradation, so operations personnel or data scientists can fix instances of biased outcomes model by model. The idea is to give users the ability to take biased data and shape it into a fairer version of itself while still letting the model learn what it needs to learn. A simple bias check of this kind is sketched below.
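As a concrete illustration, here is a minimal, library-free sketch of one common bias check, the disparate impact ratio: the rate of favorable outcomes for the unprivileged group divided by the rate for the privileged group. The data, column names and the widely used 0.8 threshold are illustrative assumptions, not IBM Research’s actual algorithm or OpenScale’s internals:

```python
# Sketch of a disparate-impact check on loan decisions. The "group" and
# "approved" column names and the 0.8 threshold are illustrative
# assumptions, not Watson OpenScale's actual internals.

def disparate_impact(records, protected, privileged_value, outcome):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    priv = [r for r in records if r[protected] == privileged_value]
    unpriv = [r for r in records if r[protected] != privileged_value]
    rate = lambda group: sum(r[outcome] for r in group) / len(group)
    return rate(unpriv) / rate(priv)

loans = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

di = disparate_impact(loans, protected="group",
                      privileged_value="A", outcome="approved")
print(f"Disparate impact: {di:.2f}")  # ~0.33 here; below 0.8 flags potential bias
```

Once a disparity like this is flagged, mitigation techniques adjust the data or its sample weights so the measured gap shrinks while the model can still learn the legitimate signal.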

AI Explainability

Explainability in AI is multifaceted. One approach does not fit all cases, because different processes require different explanations.

For example, a loan officer asks why the system recommended rejecting a loan; the customer wants to know why their loan was denied; and a regulator wants proof that the system isn’t discriminatory. No single answer will satisfy all of these questions.

Enterprise-grade decisions must be consumable, so this concept has been integrated into Watson OpenScale to make explainability more transparent for business use cases. We’ve introduced two types of explanations to open up the AI black box. The first shows visually why the model made a prediction, highlighting the features or inputs that mattered most to the outcome and how they stack up; these visualizations can be generated on the fly via the OpenScale dashboard. The second lets the user change the model’s inputs to test the boundaries of the model’s decision-making. The sketch below illustrates both ideas on a toy model.
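Here is a toy sketch of both explanation styles against a hypothetical linear credit scorer. The feature names and weights are invented for this example and say nothing about how OpenScale’s explainers work internally:

```python
# Toy linear credit scorer. Feature names and weights are illustrative
# assumptions for this sketch, not OpenScale internals.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def approved(applicant, threshold=0.0):
    return score(applicant) >= threshold

applicant = {"income": 0.7, "debt_ratio": 0.9, "years_employed": 0.2}

# Explanation type 1: per-feature contributions -- why the model decided
# what it did, with the most influential inputs listed first.
contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
for feature, c in sorted(contributions.items(),
                         key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature:15s} {c:+.2f}")

# Explanation type 2: a what-if probe -- how far one input must move,
# in either direction, before the decision flips.
def flip_point(applicant, feature, step=0.05, limit=3.0):
    base = approved(applicant)
    delta = step
    while delta <= limit:
        for signed in (+delta, -delta):
            probe = dict(applicant, **{feature: applicant[feature] + signed})
            if approved(probe) != base:
                return signed
        delta += step
    return None

print("debt_ratio flips the decision at a change of",
      flip_point(applicant, "debt_ratio"))  # roughly -0.40 in this toy
```

The first output answers the loan officer’s “why”; the second answers the customer’s “what would it take”, by showing the smallest change to one input that would have changed the outcome.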

This is all just a small taste of the advanced features IBM Research is working on in regard to AI governance. There’s much more to explore, and as these advances make their way into the product, we’ll be back to tell you about them.

Trusted AI is not only a strategic imperative but also an ethical one. Thanks to the work we’re doing around trust and AI, clients can understand and explain how their AI models make decisions, and why.

Interested in seeing these capabilities in action? Check out the full Innovation panel to watch demos and hear how this transformative technology has helped clients like KPMG, IBM HR and the US Open develop innovative and trustworthy experiences for users.

Watch the Innovation panel

