AI adoption is critical to business success in today’s fast-changing, competitive market. A HIMSS and IBM study found that 64% of respondents said their organizations placed a “critical” or “high” strategic priority on AI.
However, many organizations are still reluctant to fully adopt AI into their processes and decision-making for fear of the unknown.
“To many who aren’t data scientists, AI is still a black box and that scares us,” says Kelly Combs, Director of Emerging Technologies Risk Services at KPMG.
So how do organizations go about establishing trust in their AI? The answer lies in leadership buy-in, automated checks and balances, and access to clean and complete data.
1. Unite your people and processes around strategic AI through education and information architecture
Culture and strategy often trickle from the top of an organization down. This is why it is important to rally leaders and key stakeholders around the mission of adopting AI and educate them about its benefits and potential risks, so they are fully on board with the strategy.
This also means ensuring clear, regular communication between leaders and the line-of-business employees who actually work with AI, so there is a shared understanding of why certain actions were taken with the data and how they affect the quality of the output. That understanding can help assuage leaders’ fears of the legal, financial, and reputational damage that faulty AI could cause, especially in regulated industries like finance and banking. A clear, holistic understanding of the data and AI process helps reinforce trust in its potential.
In an interview, Deborah Leff, Global Leader and Industry CTO for Data Science and AI at IBM, describes a common scenario where data science teams deliver innovative solutions, but business leaders reject and actually work against the findings of AI because they do not fully understand it. “If there isn’t trust there, then you see humans starting to work against AI to try and circumvent it and go around it,” Leff adds.
In addition to properly educating organizational leaders on AI, make sure your organization has established an information architecture that is ready to support the demands of AI.
“The foundation for AI is data,” said Sumit Gupta, VP AI Strategy and CTO of Data and AI at IBM. “But before you can analyze that data, you have to understand how you are going to collect, organize and share it. You can’t do any of that unless you first have a well-thought-out information architecture in place.”
Clarify which specific teams own the various data processes across the AI lifecycle and how they will be held accountable and measured for success. This way, everyone from top leadership down to the line of business is strategically aligned.
Read more about how to deliver smarter AI with the right information architecture.
2. Make your AI trustworthy and compliant by automating checks and balances throughout your data process
Trust in AI is needed not only internally at an organization but also externally, to satisfy auditors. Regulatory guidance such as the US Federal Reserve’s SR 11-7 requires organizations using models to implement model risk management initiatives that prove their AI has been properly built, monitored, and trained to prevent bias and drift. Bias in AI occurs when models give preferential treatment to one group over another, producing results that are prejudiced and based on false assumptions. Drift occurs when AI models score data that differs from, or has changed since, the data they were trained on. This can cause accuracy to degrade in as little as a few days.
According to SR 11-7, “All models have some degree of uncertainty and inaccuracy because they are by definition imperfect representations of reality.” Failure to account for this uncertainty can result in serious fines and business losses.
One way to account for this uncertainty and prove AI integrity is to adopt self-governance practices that provide checks and balances across the entire AI lifecycle—from ensuring that the data you collect is clean and complete, to building and running your AI models on that data, to managing and examining the models’ output. Double-check the results you are getting: are they what you expected? Do they show signs of drift or bias?
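One such check on model output can be sketched in a few lines. The example below applies the disparate impact ratio (the common “four-fifths rule” heuristic) to a model’s approval decisions for two groups; the groups, predictions, and threshold are invented for illustration and are not tied to any particular governance product.

```python
# Hypothetical model output: (group, model_approved) pairs.
predictions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    """Fraction of favorable outcomes the model gave this group."""
    outcomes = [approved for g, approved in predictions if g == group]
    return sum(outcomes) / len(outcomes)

# Disparate impact ratio: disadvantaged group's rate over advantaged group's rate.
ratio = approval_rate("B") / approval_rate("A")

# Under the four-fifths rule, a ratio below 0.8 flags potential bias for review.
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f}: flag model output for bias review")
else:
    print(f"Disparate impact ratio {ratio:.2f}: within the four-fifths threshold")
```

Automating checks like this at every stage of the lifecycle is what turns “trust us” into evidence an auditor can inspect.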
Learn more about AI Governance and how you can prepare your AI to be trustworthy and transparent.
Luckily, much of this governance process can be automated. IBM Watson Knowledge Catalog Instascan, for instance, identifies potential risks in unstructured cloud data and provides remediation recommendations and compliance checks. Similarly, the Explainable AI feature within IBM Cloud Pak for Data evaluates models for accuracy, fairness, and transparency and explains their outcomes.
Read more about security and governance built-in functions within IBM Cloud Pak® for Data here.
3. Optimize your AI models by making sure your data is clean, complete and fully accessible
AI systems are only as good as the data they are fed. Are you accessing all of your data to ensure your AI models are optimal? And what is the quality of that data?
Establishing trust in AI starts with making sure the data used to train the AI models is clean and complete. Gathering a complete dataset is often time-consuming, as organizations collect a tremendous amount of data from a variety of sources. That data often ends up in silos—spread across several storage locations and different tools and platforms.
Because acquiring data is so time-consuming, data scientists often take a subset of a larger dataset to train their models. This leads to suboptimal and potentially incorrect decisions because the datasets used to train AI models are incomplete and do not provide the full picture. This compromises both AI quality and compliance, as the risk of issues such as bias and drift is higher.
Luckily, IBM Cloud Pak for Data also provides data virtualization, which enables organizations to access their data from anywhere—whether on-prem or in the cloud—by allowing them to run queries across multiple data sources. This breaks down data silos and eliminates the model issues that arise from training on incomplete data subsets. From a single dashboard, users can examine their entire landscape of data, AI, and open-source assets for governance and quality control without spending an inordinate amount of time lifting and shifting data from one location to another.
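The data-virtualization idea is easier to see in miniature. The sketch below uses SQLite’s `ATTACH` to run a single query across two separate database files standing in for silos (a hypothetical CRM store and billing store); it is only an analogy for the technique, not how IBM Cloud Pak for Data is implemented.

```python
import os
import sqlite3
import tempfile

# Two separate database files stand in for two data silos.
silo_dir = tempfile.mkdtemp()
crm_path = os.path.join(silo_dir, "crm.db")
billing_path = os.path.join(silo_dir, "billing.db")

with sqlite3.connect(crm_path) as crm:
    crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    crm.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(1, "Acme"), (2, "Globex")])

with sqlite3.connect(billing_path) as billing:
    billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
    billing.executemany("INSERT INTO invoices VALUES (?, ?)",
                        [(1, 100.0), (1, 250.0), (2, 75.0)])

# One connection, one query, both "silos" joined in place -- no copying.
conn = sqlite3.connect(crm_path)
conn.execute("ATTACH DATABASE ? AS billing", (billing_path,))
rows = conn.execute("""
    SELECT c.name, SUM(i.amount)
    FROM customers AS c
    JOIN billing.invoices AS i ON i.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
conn.close()

print(rows)  # total billed per customer, drawn from both stores
```

A data scientist querying a virtualized layer like this sees one complete dataset, which is exactly what guards against the incomplete-subset problem described above.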
Invest in a full-lifecycle AI solution with built-in governance and access to all of your data, regardless of location
Instilling trust in AI may seem like a daunting task, but the good news is you don’t have to do it alone. Investing in a full-lifecycle solution like IBM Cloud Pak for Data can help you break down data silos and ensure that your models are fair, explainable, and compliant, so that auditors inside and outside your organization can be put at ease.
Using one unified platform helps establish trust in AI. In the case of IBM Cloud Pak for Data, AI governance is automated throughout data collection, preparation, implementation, and management, and data virtualization eliminates data access issues—saving time and money and reducing mistakes.
Trust in AI is ultimately about humans trusting the output of algorithms. Help your organization establish that trust by ensuring that your data is clean, accessible, and well monitored against bias and drift. Furthermore, educate your leadership about how AI works so they have the confidence to use it to drive decision-making. When your data quality is clear, your AI has nothing to fear.