While the hype around AI and foundation models continues to grow—and dominate the news and conversations—organizations still struggle to successfully deploy responsible AI algorithms and models across real-world environments. In fact, only about half of AI projects make it from pilot to production.¹ This is where you come in.
As stewards of a company’s digital transformation, chief data officers, chief AI officers and other data leaders are important voices for the effective and ethical use of AI to improve operations, drive innovation and grow revenue. Your expertise and decision-making are foundational to the success of enterprise AI.
Integrating AI into your organization starts with identifying how AI platforms, foundation models, generative AI and machine learning (ML) align with key goals. Businesses tend to overestimate the impact of AI capabilities and underestimate the complexity—requiring data and analytics leaders to manage expectations, or risk costly project failures.²
“If you’re a data leader, think about the things your teams are being asked for the most, and how AI could make life easier for those lines of business,” says Ann Leach, Director, Portfolio Product Management, IBM. “Where can they infuse AI to help make decisions, create better workflows and processes, or provide information to the business that drives forward thinking?”
To make the most of your AI applications, keep these directives in mind:
Connect to business outcomes
Work with leadership to serve your organization’s overall business objectives. Tim Humphrey, Chief Analytics Officer, IBM, suggests that whether you’re considering an AI use case with a leader in marketing, human resources, supply chain, sales or asset management, you should ask where the leader is trying to take that function or organization. You need to understand both where it is now and where it’s supposed to go. Humphrey adds, “If you can’t apply AI along that continuum of as is and to be, you shouldn’t start.”
Do the testing first
With AI, test proofs of concept until you find the right fit. Then optimize it. “Rather than spending a lot of time making everything perfect, I’m a big fan of a lot of proof of concepts going through until you find the one that actually has legs,” says Caroline Carruthers, CEO of Carruthers and Jackson and author of The Chief Data Officer’s Playbook.
Set and track targets
Define KPIs that measure success for each use case. Let’s say the project is about identifying credit card fraud and you’d like AI to catch 95% of fraudulent cases. Tracking your progress with metrics lets you map and monitor AI performance levels and demonstrate the value of AI to stakeholders.
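The fraud-detection example above can be sketched as a simple KPI check. This is a minimal illustration, not a production monitoring setup: the labels, predictions and 95% target are hypothetical, and the metric (recall, the share of actual fraud cases caught) is computed by hand rather than with a metrics library.

```python
# Minimal sketch: track a fraud-detection KPI (recall) against a target.
# The labels and predictions below are hypothetical; in practice they
# would come from your model's scored transactions.

def recall(y_true, y_pred):
    """Fraction of actual fraud cases the model caught."""
    true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_fraud = sum(y_true)
    return true_positives / actual_fraud if actual_fraud else 0.0

TARGET = 0.95  # e.g., "catch 95% of fraudulent cases"

y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]  # 1 = fraud
y_pred = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1]  # the model's calls

kpi = recall(y_true, y_pred)
print(f"Fraud recall: {kpi:.0%} (target {TARGET:.0%})")
print("On track" if kpi >= TARGET else "Below target: investigate")
```

Reporting the metric alongside its target, rather than the raw score alone, is what lets you demonstrate progress (or flag a shortfall) to stakeholders.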
One of the hardest pieces of a data leader’s job is establishing quick, trusted ways to get from data to insights. You need the right data to run your model, but not all data is fit for AI.
“The root of everything starts with the appropriate data set for a specific use case, and without that there is no AI, period,” says Remus Lazar, Vice President of Software Development, Data Fabric, IBM. He points to the example of an airline that wants predictive AI to forecast whether passengers can make their connecting flights. “If you’ve only collected data on passengers who missed their connections, not on those who made their flights, then it isn’t exactly the right data to use. Without the appropriate data sets you will never be able to solve the use case.”
Review your data architecture
More than half of organizations cite data as the culprit for AI projects stalling. A modern data architecture such as a data fabric provides built-in data quality and data governance capabilities. It enables your data scientists to self-serve data regardless of where it resides, with all governance and privacy requirements automatically applied. The result is reliable data at the ready, real-time access to disparate sources and full governance, paving the way for agility and speed.
Fuel your models with trusted data
At a time of changing and complex regulations and ethics around AI, you should always be asking: What’s the governance around this data and can it be used for this purpose? Data quality and data governance are critical for successfully scaling AI solutions. These are questions your organization needs to answer before it can rely on an algorithm’s decisions.
It’s up to you as a data leader to determine who controls the data, who has access to AI software and apps, and who needs access to ensure AI initiatives are useful.
Commit to ethical AI
Guidelines for responsible AI include considerations such as security, explainability and bias. If you’re using historical data to feed a model, make sure it aligns with society’s current ethics and sensibilities. For example, attitudes around gender, race, sex, class and age are different today than they were in the 1970s. Using an outdated data set could perpetuate AI bias, skewing results from the start. Organizations can distinguish themselves by confronting ethical issues strategically, purposefully and thoughtfully.³
Enterprise AI demands the same communication, structure and rigor common in more established areas of an organization. But model development too often takes place on a data scientist’s laptop, and orchestration is done manually, or ad hoc, using custom code and scripts. That’s why you need machine learning operations (MLOps), a set of practices that automates and streamlines the ML lifecycle, from model development through deployment and monitoring. And don’t overlook the efficiency gains that come with flexible, reusable AI models such as foundation models.
Speed workflows efficiently
It’s useful to have a set of best practices for enterprise AI platforms that speeds and syncs collaboration between your data science teams and IT department.
“You want to create the ability to automatically roll out secure models to the edge, to web services, to mainframes, onto the right type of hardware, as well, and justify it,” says Steven Eliuk, Vice President, AI & Governance, IBM Global Chief Data Office. “At IBM, we’re always looking at ways to enable groups to get their models into production quicker, but in a secure, governed fashion,” Eliuk adds.
Solve for human error
MLOps automates manual processes and helps eliminate costly human error, reducing risk and making the company much more agile. In addition to streamlining production, MLOps helps models perform as they are meant to, so there’s trust across the AI lifecycle. It helps you answer critical questions like: Is this data biased to begin with? Does it have enough representative samples in the data set? When you get into development, are you using the right algorithms, or will those algorithms perpetuate bias that already exists in the data?
Here’s how one data leader has put MLOps into practice: “We have our MLOps constantly checking the quality, testing the quality of our predictions and the quality of our ML,” explains Peter Jackson, Chief Data and Operations Officer, Outra. “We have a whole series of dashboards which report to the senior management team where we can see the quality and the predictive power of those models. And if we see a drop-off during the course of a month, we will unpack our machine learning programs, and look at the data sources, to see why they’re not working.”
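The month-over-month quality checks Jackson describes can be sketched as a simple drop-off alert. The monthly accuracy figures and the five-point threshold below are hypothetical; real MLOps tooling would pull these numbers from model-monitoring dashboards rather than a hard-coded dictionary.

```python
# Minimal sketch: flag a month-over-month drop in model quality, in the
# spirit of dashboards that track predictive power over time.
# All figures are hypothetical.

monthly_accuracy = {
    "Jan": 0.91, "Feb": 0.90, "Mar": 0.89, "Apr": 0.82,
}
DROP_THRESHOLD = 0.05  # alert if quality falls more than 5 points

months = list(monthly_accuracy)
for prev, curr in zip(months, months[1:]):
    drop = monthly_accuracy[prev] - monthly_accuracy[curr]
    if drop > DROP_THRESHOLD:
        print(f"{curr}: quality fell {drop:.2f} vs {prev}; "
              "review data sources and consider retraining")
```

The point of the alert is to trigger exactly the follow-up Jackson describes: unpack the ML programs and inspect the data sources to see why predictive power dropped.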
Organizations face severe risks to their brand reputation if their AI models are biased or unexplainable. They could also face government audits and millions in fines for failing to meet complex and changing regulatory requirements. All these issues can have a devastating impact on shareholder and customer relationships.
Know and trust your AI model
Black box models are a growing concern for AI stakeholders. These models are built and deployed without transparent processes, so it isn’t always easy, even for the data scientist, to trace how and why a model made a decision. And with the rise of regulations, such as New York City’s law regulating how AI is used in hiring and the European Union’s proposed AI Act, companies must get savvier, fast.
AI governance is the overall process of directing, managing and monitoring AI activities across business processes. Data leaders should work with chief risk officers, chief compliance officers and other key stakeholders from the outset of an AI project to develop an AI governance framework. This framework should outline the company’s best practices for developing, deploying and managing AI models and, ultimately, eliminating the black box.
Track models end to end
AI governance establishes guardrails at each stage of the AI and ML lifecycle, including data collection, model building, deployment, management and monitoring. These guardrails make processes more transparent and provide explainable results to key stakeholders and customers. Implementing AI governance from start to finish helps you better manage risk and reputation, adhere to ethical principles and keep pace with changing government regulations.
One major US retailer turned to IBM for help addressing issues of fairness in tools and recruiting systems that screen candidates. It was critical for this employer to embed fairness and trust, including the ability to identify bias and explain decisions within its AI and ML model used for hiring. The company used IBM Cloud Pak® for Data to consistently manage AI-enabled models for accuracy and fairness. Now, the company is proactively monitoring for and mitigating bias in its hiring processes.
Show your work
IBM applies this approach internally, too. “If a certain regulation requires transparency or explainability, we make sure that the algorithm or the impact assessment showcases those details so we can quickly pivot for continuous compliance around new regulation instead of impacting the business,” Eliuk says.
As AI moves from experimentation to business-critical deployment, organizations are seeing the need to proactively implement AI governance to drive transparent and explainable AI. A lack of guardrails around AI can derail AI projects and slow innovation.
Champion the continued application of AI
As a data leader, you’re shaping AI technology for every part of the enterprise. It’s your job to set forward-thinking, organization-wide policies on ML and AI processes. But you’re not acting alone. Being a strong partner to the business means identifying new AI use cases that touch areas including data management, cybersecurity, supply chain, enterprise software and customer service.
Scaling your enterprise AI capabilities brings down costs, streamlines workflows, frees up revenue for R&D and builds trust among shareholders and customers. AI is no longer a choice; it’s an imperative. While there can be trepidation or hesitation around the impact of AI, consider Carruthers’ words:
“The power of AI is incredible, and I think it’s always worth focusing on the positive. Usually fear around new technology is driven by lack of understanding. It’s important to remember that we are in control and we should stay in control. AI can help us. We can stand on it to see further, to do more, to be quicker. And when we get that combination right and people understand that part, that’s when we can do some fantastic things.”
¹ “Gartner 2022 AI Survey” (Link resides outside ibm.com), Gartner, 2022.
² “What Is Artificial Intelligence? Ignore the Hype; Here’s Where to Start”, Gartner, 2022.
³ “AI ethics in action”, IBM Institute for Business Value, 2022.