Trustworthy AI

01

1 min read

Introduction

Businesses are increasingly looking to AI and automation to help them become more intelligent, resilient and flexible

On average, companies report 6.3 percentage points of direct revenue gains attributable to AI, along with other strong evidence of a direct positive relationship between AI adoption and business performance.1

Successful adoption, however, requires trust and transparency across the entire AI lifecycle—from building a strong data foundation to ensuring models are accurate and outcomes are explainable, and everything in between. By scaling trustworthy AI successfully, businesses can enhance operational and productivity efficiencies to improve their bottom line.

Through more than 30,000 customer engagements, IBM has found that the path to trustworthy AI requires more than technology. The IBM view is that while governed data and AI technologies are crucial to trusted outcomes, they must also be rooted in ethical principles and supported by an open and diverse ecosystem. IBM’s technology strategy is focused on providing transparency, explainability, fairness, privacy and robustness across 3 key areas:

1) Trust in data

Build a foundation of trusted data that delivers a complete view of quality data from across the enterprise.

2) Trust in models

Increase insight accuracy with improved model development, validation and bias mitigation.

3) Trust in processes

Ensure privacy, compliance and repeatability for AI at scale.

The sections in this smart paper explore the benefits of a strategic approach to data and AI technology. With the right platform and processes, your teams can confidently operationalize AI across hybrid and multicloud environments. The end result is accuracy and compliance in AI-driven outcomes, as well as trust in every portion of the AI lifecycle.

1 The business value of AI, IBM Institute for Business Value, November 2020.

02

1 min read

Trust in data

The foundation of enabling trust in data begins with a data management system powered by modern technologies that facilitate digital transformation

To stay ahead of the competition, you need to adopt a data management system that helps your teams collect a wide variety of data types and structures across multiple sources and deployments, including on premises and multiple cloud vendors.

But a trusted data foundation also requires embedded governance and compliance features to help you increase compliance readiness and reduce risks. Incorporating a DataOps strategy can help you deliver trusted, business-ready data to data citizens, operations and applications throughout the data lifecycle. With IBM, you can increase efficiency, improve data quality and findability, and apply governance rules to provide a seamless self-service data pipeline from essentially any source to the right people at the right time.

Use extract, transform, and load (ETL) and data virtualization together for a powerful data integration solution that can reduce unnecessary data engineering efforts by up to 65% with the IBM Cloud Pak® for Data platform.1
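
As an illustration of how the two approaches differ, the hypothetical sketch below uses Python’s built-in sqlite3 module (not IBM Cloud Pak for Data APIs; the table names and data are invented) to contrast a materialized ETL step with a virtualized view over the same source tables.

```python
import sqlite3

# In-memory database standing in for two source systems already federated
# into one SQL engine (the role a data virtualization layer plays).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE crm_customers (id INTEGER, name TEXT, region TEXT);
    CREATE TABLE billing_invoices (customer_id INTEGER, amount REAL);
    INSERT INTO crm_customers VALUES (1, 'Acme', 'EMEA'), (2, 'Globex', 'AMER');
    INSERT INTO billing_invoices VALUES (1, 1200.0), (1, 300.0), (2, 950.0);
""")

# ETL approach: extract, transform and load the joined result into a new,
# materialized table that downstream jobs read from.
conn.execute("""
    CREATE TABLE customer_revenue AS
    SELECT c.name, c.region, SUM(b.amount) AS revenue
    FROM crm_customers c JOIN billing_invoices b ON b.customer_id = c.id
    GROUP BY c.name, c.region
""")

# Virtualization approach: expose the same logic as a view, so consumers
# query live source data without copying it.
conn.execute("""
    CREATE VIEW v_customer_revenue AS
    SELECT c.name, c.region, SUM(b.amount) AS revenue
    FROM crm_customers c JOIN billing_invoices b ON b.customer_id = c.id
    GROUP BY c.name, c.region
""")

print(conn.execute("SELECT * FROM customer_revenue").fetchall())
print(conn.execute("SELECT * FROM v_customer_revenue").fetchall())
```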

“Data virtualization is a major benefit. By giving self-service capabilities to users, they will have more understanding of what enterprise data there is, and also understand if their data request is worth sending through the ETL chain.” 1
Information architect, financial services industry

Through the IBM Cloud Pak for Data platform, you also get a bundle of data governance capabilities that help ensure the right people on your team find data to curate, categorize, govern, analyze and use. By activating business-ready data and AI analytics with intelligent cataloging, you’re backed by the power of active metadata and policy management tools.

In summary, to trust your data, your teams must first feel confident in the entire data lineage from collection to analytics to infusing AI. Trust in AI is impossible without first having the right data management solutions in place.


03

1 min read

Trust in models

To make your AI models more accurate, adaptable and explainable, you need the tools to track and manage those models throughout the complete AI lifecycle

Model operations (ModelOps) is a principled approach to operationalizing a model within business applications. By adopting a ModelOps approach to building and deploying AI, you can optimize your data and AI investments using data, models and resources from hybrid multicloud environments, while at the same time helping ensure accuracy of these models for better business outcomes.

But to be a leader in the AI and machine learning (ML) market, you have to adopt an offering that automates the entire AI lifecycle with not only proprietary services, but community-led capabilities, as well. These capabilities can include deep learning, Kubernetes container orchestration, and public and hybrid cloud tooling from multiple providers, including IBM Cloud®, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform and more. Trusted AI models require three components—lifecycle visibility, model fairness and explainable AI—and ModelOps principles help you develop all three.

Establish end-to-end visibility of the AI lifecycle

Challenges can arise when the AI lifecycle is managed manually, from human error in models to a slower, costlier workflow in which users must understand each individual algorithm. To resolve these challenges, you need AI lifecycle automation that makes the documentation and validation required for AI governance far more efficient and streamlined.

These lifecycle automation capabilities are available through IBM Watson® Studio for IBM Cloud Pak for Data. It can help your teams scale the entire AI lifecycle through user-friendly toolsets that empower teams and streamline the creation, training and management of ML models. It’s also flexible to your deployment needs, as the platform runs on essentially any cloud, whether private, public or hybrid. IBM Watson Machine Learning, specifically, is designed to help you quickly and more securely deploy models in applications throughout your business. With AutoAI on IBM Watson Studio, you can simplify AI lifecycle management by automating the following (see the sketch after this list):

  • Data preparation
  • Model development
  • Feature engineering
  • Hyper-parameter optimization
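
To make these steps concrete, here is a minimal sketch of the kind of work such automation performs, written with open-source scikit-learn rather than the AutoAI APIs; the dataset, algorithm and parameter grid are assumptions chosen only for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Sample data standing in for business data pulled from a governed catalog.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Pipeline: data preparation (scaling) plus model development in one object.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=5000)),
])

# Hyperparameter optimization: search a small grid with cross-validation,
# the kind of step an automated lifecycle tool performs across many algorithms.
search = GridSearchCV(pipeline, {"model__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Holdout accuracy:", search.best_estimator_.score(X_test, y_test))
```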

Access the 5-part on-demand virtual event series on AI lifecycle management with speakers from IBM, Cognizant, Prolifics and Appen.

Monitor model fairness and drift

Building and deploying your model is only the beginning. Within days of going into production, AI models can “drift,” causing their accuracy to drop significantly. No matter how prepared your teams may be, production data inevitably differs from training data and drift is bound to happen. To maintain confidence in your models, you need a strategy to detect and mitigate drift.
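
One common way to detect drift is to compare the distribution of a production feature against its training baseline. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy purely as an illustration; it is not how IBM Watson OpenScale measures drift, and the data and alert threshold are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Baseline: a numeric feature as it looked in the training data.
training_feature = rng.normal(loc=50.0, scale=5.0, size=10_000)

# Production: the same feature after customer behavior shifts.
production_feature = rng.normal(loc=55.0, scale=6.0, size=2_000)

# Two-sample KS test: a small p-value means the distributions differ,
# a simple signal that the feature has drifted away from training.
statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # alert threshold chosen for illustration
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```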

But how can you mitigate model drift? By turning to an integrated solution that helps you tune and adjust your models. The data and AI solution you choose should enable your teams to:

  • Define, apply and monitor custom and predefined model performance metrics.
  • Assess model accuracy with sophisticated diagnostics services and correct for drift.
  • Identify and get automated support to help mitigate harmful biases in both the data and the model, without requiring retraining by the data science team.
  • Perform bias checks at build time and during runtime to catch issues early, as in the sketch after this list.
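
As a minimal example of such a bias check, the following sketch computes a disparate impact ratio on synthetic scoring results; the metric, data and 0.8 threshold are illustrative assumptions, not the fairness implementation in IBM Watson OpenScale.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic scoring results: True = favorable outcome (e.g., loan approved).
# "group" marks a monitored attribute such as gender or age band.
group = rng.choice(["reference", "monitored"], size=5_000, p=[0.7, 0.3])
predictions = np.where(
    group == "reference",
    rng.random(5_000) < 0.60,   # reference group approved ~60% of the time
    rng.random(5_000) < 0.42,   # monitored group approved ~42% of the time
)

# Disparate impact ratio: favorable-outcome rate of the monitored group
# divided by that of the reference group. Values below ~0.8 are a common
# warning sign (the "four-fifths rule").
rate_monitored = predictions[group == "monitored"].mean()
rate_reference = predictions[group == "reference"].mean()
ratio = rate_monitored / rate_reference

print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flagged: review features, data and thresholds")
```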

The IBM Watson OpenScale capability within IBM Cloud Pak for Data monitors and manages models to deliver trusted AI. It helps you monitor model fairness, explainability and drift. Once you have a good understanding of your model’s lifecycle and fairness, you’re ready to establish explainable AI.

Read the infographic, What is AI model drift and why it’s risky (PDF, 1.1 MB), to learn more.

Foster explainable AI

The final step in building trustworthy models comes in the form of explainability. Confidently understanding why your models have generated a specific insight or outcome is one of the key requirements for successfully implementing AI. In fact, the “responsible AI” methodology for the large-scale implementation of AI methods in real organizations requires fairness, model explainability and accountability.

Bias—which is often based on race, gender, age or location—has been a longstanding risk in training AI models. This risk makes it crucial for a business to continuously monitor and manage models to identify and remove any potential bias. Explainable AI helps promote end-user trust, model auditability and productive use of AI while also measuring the business impact of the algorithms and their output. When applied to production AI, it mitigates compliance, legal, security and reputational risks.

Gartner defines explainable AI as a set of capabilities that describes a model, highlights its strengths and weaknesses, predicts its likely behavior, and identifies any potential biases. Explainable AI allows human users to comprehend and trust the results and output created by the ML algorithms.

68% of business leaders believe that customers will demand more explainability from AI in the next three years.1

As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result. There are many advantages to understanding how an AI-enabled system has arrived at a specific output. For example, explainability can help developers ensure that the system is working as expected, it might give those affected by a decision the chance to change the outcome, and it might simply be necessary to meet regulatory standards.

Five considerations for explainable AI

1) Fairness and debiasing
Manage and monitor fairness. Scan your deployment for potential biases.

2) Model drift mitigation
Analyze your model and make recommendations based on the most logical outcome. Generate alerts when models deviate from the intended outcomes.

3) Model risk management
Quantify and mitigate model risk. Get alerted when a model performs inadequately. Understand what happened when deviations persist.

4) Lifecycle automation
Build, run and manage models as part of integrated data and AI services. Unify the tools and processes on a platform to monitor models and share outcomes. Explain the dependencies of ML models.

5) Multicloud readiness
Deploy AI projects across hybrid clouds, including public clouds, private clouds and on premises. Promote trust and confidence with explainable AI.

The IBM Cloud Pak for Data platform provides a unified environment so that you can assess the impact and relations of data and models to improve AI explainability. With IBM Watson OpenScale for IBM Cloud Pak for Data, your teams gain insights on model deployments, fairness, quality and risk. Explain AI transactions, categorical models, image models and unstructured text models with tools such as contrastive explanations and local interpretable model-agnostic explanations (LIME). Making AI explainable and transparent by automating the AI lifecycle on a modern information architecture (IA) is vital to production AI success.
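
As an illustration of a local explanation, the sketch below applies the open-source lime package to a scikit-learn model; it assumes those libraries and a sample dataset, and is not the IBM Watson OpenScale explainability interface.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a simple classifier to stand in for a deployed model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# LIME builds a local surrogate model around one prediction and reports
# which features pushed the score toward each class.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)

# Top features and their local weights for this single transaction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```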

With IBM Cloud Pak for Data, you get a unified set of tools to help you build, run and manage AI models, and optimize decisions at scale across essentially any cloud to operationalize AI virtually anywhere. This collaborative platform brings together historically siloed tools to speed the process of building and deploying more accurate models. It can help your teams work together to:

  • Build, deploy and manage AI models at scale.
  • Track and measure AI outcomes.
  • Correlate outcomes to key performance indicators (KPIs).
  • Enable thorough auditability.
  • Integrate capabilities with common business reporting tools.

View the infographic, Explainable AI on IBM Cloud Pak for Data (PDF, 231 KB).

Next steps:

Access the study New Technology: The Projected Total Economic Impact of Explainable AI and Model Monitoring in IBM Cloud Pak for Data to learn how you can potentially:

  • Increase total profits, ranging from USD 4.1 million to USD 15.6 million, over three years.2
  • Reduce model monitoring efforts by 35% to 50%.2
  • Increase model accuracy by 15% to 30%.2
1 The business value of AI (PDF, 264 KB), IBM Institute for Business Value, November 2020.

04

1 min read

Trust in processes

In today’s privacy-centric global climate, it’s imperative that organizations implement an AI strategy that considers risk and compliance from the start

From the privacy and security necessary to safeguard your most sensitive information to the governance and transparency you need to truly trust your AI and insights, the processes and safeguards in place around your AI are as important as the data and models themselves. Streamlining manual processes and tasks with intelligent automation—and modernizing your existing investments and legacy business processes with a flexible, collaborative platform—will help your teams trust the analytics process from beginning to end.

Streamline your handling of sensitive data

IBM is simplifying how you can fully understand and manage sensitive data throughout your organization. By integrating solutions spanning data security, hybrid data management, governance and risk, and compliance within a single collaborative platform, your data users can get a real-time view of sensitive data across the business. Of course, this real-time view is governed and available to authorized users only. In addition to this universal view of who has access to sensitive data, IBM also delivers the pervasive policy and regulation enforcement tools required to accelerate risk mitigation across all data and AI assets.

Capitalize on intelligent automation to accelerate the collection, cataloging and masking of sensitive data to build a foundation of trusted, compliant data that can be easily accessed by your teams and models. Turn every data consumer into an expert in risk and compliance with built-in AI that understands the latest regulations, as well as natural language and context to augment your team’s skills and understanding. With the IBM Data and AI privacy framework, your teams get:

  • Automatic, pervasive privacy and policy enforcement across all phases of the AI lifecycle
  • A unified platform and experience
  • An end-to-end privacy solution
  • A flexible hybrid multicloud platform for improved operational efficiency
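
As a lightweight illustration of the masking step described above, the sketch below applies generic regex-based redaction in Python; the patterns and field names are assumptions, not the policy enforcement engine in IBM Cloud Pak for Data.

```python
import re

# Simple patterns for two common kinds of sensitive values. Real policy
# enforcement relies on data classification and governance rules, not regex alone.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive values replaced."""
    masked = {}
    for key, value in record.items():
        text = str(value)
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        masked[key] = text
    return masked

record = {
    "customer": "Jane Doe",
    "contact": "jane.doe@example.com",
    "note": "SSN on file: 123-45-6789",
}
print(mask_record(record))
```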

Mitigate regulatory, reputational and operational risks

Model risk occurs when a model is used to predict and measure quantitative information, but the model performs inadequately. Poor model performance can lead to adverse outcomes and result in substantial operational losses.

For example, the Federal Reserve Board and the Office of the Comptroller of the Currency issued joint supervisory guidance on model risk management, “Supervisory Guidance on Model Risk Management” (SR 11-7), in 2011. This guidance has become the key guideline for financial institutions around the world that need to manage model risk. With guidance such as SR 11-7, organizations face unique challenges when using AI models across the lifecycle.

Implementing model risk management in your data and AI solution can help you:

  • Save time to help meet regulatory compliance and other risk objectives.
  • Simplify model validation across multiple clouds.
  • Take advantage of models and data running virtually anywhere.

There are 5 ways that an IBM platform can help you simplify AI model risk management (a validation-test sketch follows this list):

1) Customize and automate a model validation test across the model lifecycle.
2) Document test results automatically.
3) Save time by comparing model test results side by side.
4) Automatically copy validation test configurations to production and continue automated testing.
5) Synchronize test results with other governance, risk and compliance (GRC) solutions.
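
As a sketch of what an automated validation test can look like, the example below checks a candidate model against an accuracy threshold and appends the result to an audit log; the model name, threshold and log format are assumptions, not an IBM product interface.

```python
import json
from datetime import datetime, timezone

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Candidate model trained on sample data standing in for a business model.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Validation test: fail the gate if accuracy drops below an agreed threshold.
THRESHOLD = 0.95  # illustrative acceptance criterion
accuracy = float(accuracy_score(y_val, model.predict(X_val)))
result = {
    "model": "credit_risk_candidate",   # hypothetical model name
    "metric": "accuracy",
    "value": round(accuracy, 4),
    "threshold": THRESHOLD,
    "passed": bool(accuracy >= THRESHOLD),
    "run_at": datetime.now(timezone.utc).isoformat(),
}

# Documented automatically: append the result to an audit log that a GRC
# process can pick up and compare across model versions.
with open("validation_results.jsonl", "a") as log:
    log.write(json.dumps(result) + "\n")

print(result)
```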


05

1 min read

Resources and next steps

One thing is clear, now and going forward: trustworthy AI is imperative not only for business success, but also for promoting accurate insights and fair decisions

IBM helps clients more easily achieve trustworthy AI through the use of governed data and AI technologies that are underpinned by ethical principles, allowing you to:

  • Trust in data.
  • Trust in models.
  • Trust in processes.

By 2023, companies that earn and maintain digital trust with customers will see 30% more digital commerce profits than their competitors.1

Review the resources to learn more about how you can start the journey to empower your business through trusted AI.