As organizations scale their use of AI, they increasingly need to do so in a responsible and governed manner. This need is driven by several complementary forces: brand reputation, anticipated regulations [1,2,3], the growing complexity of AI systems, and social justice. Each of these alone would justify AI governance, but their convergence makes it clear that AI governance is a critical capability for any enterprise deploying AI. Without it, AI projects carry a serious risk of failure.

What is AI governance?

AI governance is the process of defining policies and establishing accountability to guide the creation and deployment of AI systems in an organization. Capturing and managing metadata on AI models as part of AI governance processes provides transparency into how AI systems are constructed and deployed, a key requirement for most regulatory concerns.
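To make the idea of captured model metadata concrete, the sketch below shows what a minimal metadata record might look like. The field names are illustrative only; real governance schemas (such as IBM's AI FactSheets) are far richer.

```python
from datetime import datetime, timezone

def build_model_factsheet(name, version, training_data, metrics, approved_by):
    """Assemble a minimal, illustrative metadata record for a model.

    Each field answers a governance question: what was the model trained
    on, how did it perform, and who is accountable for approving it.
    """
    return {
        "model_name": name,
        "model_version": version,
        "training_data": training_data,   # provenance of the dataset used
        "evaluation_metrics": metrics,    # metrics recorded at validation time
        "approved_by": approved_by,       # accountability: who signed off
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical model and team names, purely for illustration.
factsheet = build_model_factsheet(
    name="churn-predictor",
    version="1.2.0",
    training_data={"source": "crm_exports_2021Q3", "rows": 250_000},
    metrics={"auc": 0.87, "accuracy": 0.81},
    approved_by="model-risk-team",
)
print(factsheet["model_name"], factsheet["model_version"])
```

A record like this, captured consistently for every model, is what gives auditors and regulators transparency into how an AI system was constructed and deployed.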

When done right, AI governance empowers organizations to operate with agility and trust rather than slowing them down. As organizations deploy AI to automate new or existing business workflows and gain a time-to-market advantage, governance lets them trust AI-powered outcomes at every step. While AI-powered automation fuels the end-to-end experience of client offerings, governance, from data profiling and policy enforcement to model development and model risk management, grounds organizational trust both internally and externally.

Achieving AI governance requires:

1) an organizational structure that provides governance leaders with the information they need to set policies for the organization and establish accountability, and

2) an enhanced AI lifecycle that collects this information, enforces the policies specified by the governance leaders, and makes this information accessible to interested parties in a consumable, customized manner.

Done well, enterprises gain considerable benefits from governed AI: [8]

  • Gain greater visibility and automated documentation from metadata captured throughout the AI lifecycle
  • Improve outcomes and efficiencies from best practices learned through analysis of the metadata
  • Establish and enforce consistent policies during the AI development and deployment lifecycle
  • Facilitate communication and collaboration among data scientists, AI engineers, developers, and other stakeholders shaping the AI lifecycle
  • Build AI at scale, with a centralized, comprehensive view of all activities

To further understand the value of AI governance, we provide a simple maturity model that focuses on the governance of the AI lifecycle.

Level 0: No AI lifecycle governance

At this level, each AI development team uses its own tools, and there are no documented central policies for AI development or deployment. This approach provides a great deal of flexibility and is typical of organizations starting out on their AI journey. However, it can introduce significant business risk if these models are deployed to production: with no framework in place, it is impossible even to evaluate the sources of risk. Companies at this level tend to find scaling their AI practices difficult. Hiring ten times as many data scientists does not yield a tenfold increase in AI productivity, because of these inconsistencies.

Level 1: AI policies available to guide AI lifecycle governance

At this level, AI policies for constructing and deploying AI are created at the line-of-business or enterprise level (for example, by a chief data officer (CDO) or chief risk officer (CRO)), along with a common definition of the information required before a model is validated. However, because these policies are not enforced, individual AI systems remain siloed with little consistency. At this stage there is potential for the policies to be misunderstood and, in rare cases, even subverted, since no common monitoring framework provides enforcement. The potential for risk remains high. Companies at this level do not see many productivity improvements, but they begin to develop strategies for measuring AI success.

Level 2: Common set of metrics to govern AI lifecycle

This level builds on Level 1 by defining a standard set of acceptable metrics and a monitoring tool to evaluate models against them. This not only brings consistency among AI teams but also enables metrics to be compared across different development lifecycles. A common monitoring framework is typically introduced to track these metrics so that everyone in the organization interprets them in the same way. This reduces risk and improves the transparency of the information needed to make policy decisions or troubleshoot issues. Companies at this level usually have a central model validation team that upholds the enterprise's policies during validation, so they start to see some productivity gains.
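One way to picture a common metric standard is a shared set of thresholds that every model must meet before it passes validation. The metric names and threshold values below are hypothetical, not an actual enterprise standard:

```python
# Hypothetical enterprise-wide thresholds: every model is checked against
# these before validation, regardless of which team built it.
METRIC_THRESHOLDS = {
    "auc": 0.80,               # minimum area under the ROC curve
    "accuracy": 0.75,          # minimum accuracy on the holdout set
    "disparate_impact": 0.80,  # minimum fairness ratio between groups
}

def validate_metrics(model_metrics):
    """Return the metrics that fail the common thresholds, as
    (metric, observed_value, required_minimum) tuples."""
    failures = []
    for metric, minimum in METRIC_THRESHOLDS.items():
        value = model_metrics.get(metric)
        if value is None or value < minimum:
            failures.append((metric, value, minimum))
    return failures

# A candidate model that performs well but falls short on fairness.
candidate = {"auc": 0.87, "accuracy": 0.81, "disparate_impact": 0.72}
failed = validate_metrics(candidate)
print(failed)  # the fairness metric is below threshold, so it is flagged
```

Because every team runs the same check, a validation team can compare results across lifecycles and interpret a failure the same way everywhere.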

Level 3: Enterprise data and AI catalog

This level leverages the metadata from Level 2 to ensure that all assets in a model's lifecycle are available in an enterprise catalog [12], together with data quality insights and data provenance. With a single data and AI catalog, the enterprise can trace the full lineage of data, models, lifecycle metrics, code pipelines, and more. This also lays the foundation for connecting the numerous versions of a model, enabling a full audit in compliance situations, and it gives the CDO or CRO a single view for a comprehensive AI risk assessment. Companies at this level can clearly articulate their AI-related risks and have a comprehensive view of the success of their AI strategy.
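To make lineage tracing concrete, a catalog can be pictured as a graph of assets in which each entry records what it was derived from; walking those links recovers a model's full upstream history. The asset names and structure below are purely illustrative:

```python
# Illustrative catalog: assets keyed by id, with "derived_from" links
# that let us walk a model's lineage back to its source data.
CATALOG = {
    "dataset:crm_raw": {"type": "dataset", "derived_from": []},
    "dataset:crm_features": {
        "type": "dataset",
        "derived_from": ["dataset:crm_raw"],
    },
    "model:churn-predictor:1.2.0": {
        "type": "model",
        "derived_from": ["dataset:crm_features"],
    },
}

def lineage(asset_id, catalog):
    """Return every upstream asset for the given asset, depth-first."""
    upstream = []
    for parent in catalog[asset_id]["derived_from"]:
        upstream.append(parent)
        upstream.extend(lineage(parent, catalog))
    return upstream

print(lineage("model:churn-predictor:1.2.0", CATALOG))
```

In an audit, this kind of traversal is what lets an enterprise answer "exactly which data produced this model version?" from a single place.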

Level 4: Automated validation and monitoring

This level introduces automation that captures information from the AI lifecycle automatically, significantly reducing the burden on data scientists (and other lifecycle participants) to manually document their actions, measurements, and decisions. The captured information also enables model validation teams to make decisions about an AI model and to leverage AI-based suggestions. With this capability, an enterprise can greatly reduce the operational effort of documenting data and model lifecycles, and it eliminates the risk of metrics, metadata, or versions of data and models being accidentally omitted along the way. Companies at this level see a substantial increase in productivity because they can consistently and quickly put AI models into production.

Level 5: Fully automated AI lifecycle governance

This level uses the automation from the previous level to automatically enforce enterprise-wide policies on AI models, ensuring that those policies are applied consistently throughout every model's lifecycle. At this point, an organization's AI documentation is produced automatically, with the right level of transparency throughout the organization for regulators and, more importantly, for customers. This lets teams prioritize the riskiest areas for manual intervention. Companies at this level can be extremely efficient in executing their AI strategy while maintaining confidence in their risk levels.
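A fully automated gate of this kind might combine the captured metadata with policy checks to produce a deployment decision. The policy rules below are hypothetical examples, not an actual enterprise policy:

```python
def enforce_policies(factsheet):
    """Deployment gate: block any model whose metadata record violates
    enterprise policy. The three rules shown are hypothetical examples."""
    violations = []
    if not factsheet.get("approved_by"):
        violations.append("no accountable approver recorded")
    if factsheet.get("evaluation_metrics", {}).get("auc", 0.0) < 0.80:
        violations.append("AUC below enterprise minimum of 0.80")
    if not factsheet.get("training_data"):
        violations.append("training data provenance missing")
    return (len(violations) == 0, violations)

# A complete record passes all three checks and may be deployed.
ok, why = enforce_policies({
    "approved_by": "model-risk-team",
    "evaluation_metrics": {"auc": 0.87},
    "training_data": {"source": "crm_exports_2021Q3"},
})
print(ok)
```

Because the gate runs automatically on every model, teams no longer spot-check everything; they can focus manual review on the models the gate flags.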

Gartner [9] named Watson Studio on Cloud Pak for Data as a Leader among a group of 20 providers, crediting the solution’s thorough attention to responsible AI and governance.

IBM brings a comprehensive approach to this challenge [10], including IBM Research’s scientific and open source technologies [11], IBM Cloud Pak for Data platform offerings such as AI governance, and IBM’s Services consulting and industry-driven solutions. IBM can help you increase your AI lifecycle governance maturity level to ensure your AI systems satisfy the requirements of your business.

IBM provides expertise and technology for both components of AI governance. IBM services [4,5] can help organizations decide how they want their AI system to perform responsibly, and our Cloud Pak for Data platform [6] can provide the enhanced AI lifecycle that will help organizations implement their governance by leveraging the AI FactSheet technology [7] from IBM Research.

[1] https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm

[2] https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

[3] https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

[4] https://www.ibm.com/services/artificial-intelligence

[5] https://www.ibm.com/products/expertlabs

[6] ../2020/12/how-ibm-is-advancing-ai-governance-to-help-clients-build-trust-and-transparency/

[7] https://aifs360.mybluemix.net/

[8] https://www.ibm.com/account/reg/us-en/signup?formid=urx-46439

[9] https://www.ibm.com/blogs/journey-to-ai/2021/03/ibm-is-named-a-leader-2021-magic-quadrant-for-data-science-and-machine-learning-platforms/

[10] ibm.com/watson/trustworthy-ai

[11] https://research.ibm.com/artificial-intelligence/trusted-ai/

[12] https://www.ibm.com/analytics/data-cataloging

Join the IBM AI Governance early access program
