As organizations scale their use of AI, they increasingly need to do so in a responsible and governed manner. This need is driven by several complementary forces: brand reputation, anticipated regulations [1,2,3], AI complexity, and social justice. Any one of these alone would merit AI governance, but their convergence makes it clear that AI governance is a critical capability for every enterprise deploying AI. Without proper AI governance, AI projects carry a far higher risk of failure.

What is AI governance?

AI governance is the process of defining policies and establishing accountability to guide the creation and deployment of AI systems in an organization. Capturing and managing metadata on AI models as part of AI governance processes provides transparency into how AI systems are constructed and deployed, a key requirement for most regulatory concerns.
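In practice, this metadata is often collected into a structured record per model (IBM Research calls these AI FactSheets). A minimal sketch in Python of what such a record might capture; the field names here are illustrative, not a real schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ModelFactsheet:
    """Hypothetical metadata record describing how a model was built and deployed."""
    model_name: str
    owner: str
    training_data: str                            # provenance of the training dataset
    intended_use: str                             # the business purpose the model was approved for
    metrics: dict = field(default_factory=dict)   # e.g. accuracy, fairness scores
    approved_by: str = ""                         # who signed off before deployment
    created: str = str(date.today())

fs = ModelFactsheet(
    model_name="churn-predictor-v2",
    owner="retention-team",
    training_data="warehouse.customers_2023q4",
    intended_use="prioritize retention outreach",
    metrics={"auc": 0.91, "disparate_impact": 0.87},
)
print(asdict(fs))  # serializes cleanly for storage in a catalog or audit trail
```

Because the record is plain structured data, it can be versioned, queried, and surfaced to regulators or validators without manual report writing.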

When done right, AI governance empowers organizations to operate with agility and confidence rather than slowing them down. As organizations deploy AI to automate new or existing business workflows for time-to-market advantage, AI governance enables them to trust AI-powered outcomes at every step of the way. While AI-powered automation fuels the end-to-end experience in new or existing client offers, governance, spanning data profiling, policy enforcement, model development, and model risk management, grounds organizational trust both internally and externally.

Achieving AI governance requires:

1) an organizational structure that provides governance leaders with the information they need to set policies for the organization and establish accountability, and

2) an enhanced AI lifecycle that collects this information, enforces the policies specified by the governance leaders, and makes this information accessible to interested parties in a consumable, customized manner.

When done well, AI governance delivers considerable benefits to enterprises: [8]

  • Gain greater visibility and automated documentation from metadata captured throughout the AI lifecycle
  • Improve outcomes and efficiencies from best practices learned through analysis of the metadata
  • Establish and enforce consistent policies during the AI development and deployment lifecycle
  • Facilitate communication and collaboration among data scientists, AI engineers, developers, and other stakeholders shaping the AI lifecycle
  • Build AI at scale, with a centralized, comprehensive view of all activities

To further understand the value of AI governance, we provide a simple maturity model that focuses on the governance of the AI lifecycle.

Level 0: No AI lifecycle governance

At this level, each AI development team uses its own tools, and there are no documented central policies for AI development or deployment. This approach provides a lot of flexibility and is typical of organizations just starting their AI journey. However, it can introduce significant risk to the business if these models are deployed to production: with no framework in place, it is impossible even to evaluate where the risk comes from. Companies at this level tend to find it difficult to scale their AI practices. Hiring 10x the data scientists does not yield a 10x increase in AI productivity because of these inconsistencies.

Level 1: AI policies available to guide AI lifecycle governance

At this level, AI policies for constructing and deploying AI are created at the line-of-business or enterprise level (for example, by a CDO or CRO), along with a common definition of the information required before a model can be validated. However, because these policies are not enforced, individual AI systems remain siloed with little consistency. There is potential for the policies to be misunderstood, and in rare cases even subverted, since there is no common monitoring framework to provide enforcement. The potential for risk remains high. Companies at this level do not see many productivity improvements, but they begin to develop strategies for measuring AI success.

Level 2: Common set of metrics to govern AI lifecycle

This level builds on level 1 by defining a standard set of acceptable metrics and a monitoring tool to evaluate models. This not only brings consistency among AI teams but also enables metrics to be compared across different development lifecycles. A common monitoring framework is typically introduced to track these metrics so that everyone in the organization interprets them the same way. This reduces risk and improves the transparency of the information needed to make policy decisions or troubleshoot issues. Companies at this level usually have a central model validation team that upholds the enterprise's policies during the validation process, so they start to see some productivity gains.
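A common metrics framework can be as simple as one shared table of thresholds that every team's models are evaluated against. A minimal sketch, with illustrative metric names and floors (not a real policy):

```python
# Shared, organization-wide thresholds: every team evaluates models the same way.
THRESHOLDS = {
    "accuracy": 0.85,          # minimum acceptable accuracy
    "disparate_impact": 0.80,  # fairness floor (the four-fifths rule)
}

def evaluate(model_metrics: dict) -> list:
    """Return the metrics that fall below the shared thresholds."""
    return [name for name, floor in THRESHOLDS.items()
            if model_metrics.get(name, 0.0) < floor]

failures = evaluate({"accuracy": 0.91, "disparate_impact": 0.74})
print(failures)  # this model passes on accuracy but fails the fairness floor
```

The value is less in the code than in the shared table: once the thresholds live in one place, "does this model pass?" means the same thing in every team.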

Level 3: Enterprise data and AI catalog

This level leverages the metadata from level 2 to ensure that all assets in a model's lifecycle are available in an enterprise catalog [12], complete with data quality insights and data provenance. With a single data and AI catalog, the enterprise can trace the full lineage of data, models, lifecycle metrics, code pipelines, and more. This lays the foundation for connecting the numerous versions of a model, enabling a full audit in compliance situations, and gives the CDO/CRO a single view for a comprehensive AI risk assessment. Companies at this level can clearly articulate AI-related risks and have a comprehensive view of the success of their AI strategy.

Level 4: Automated validation and monitoring

This level introduces automation that captures information from the AI lifecycle as it happens, significantly reducing the burden on data scientists (and other lifecycle participants) to manually document their actions, measurements, and decisions. The captured information enables model validation teams to make decisions about an AI model and to leverage AI-based suggestions. With this capability, an enterprise can significantly reduce the operational effort of documenting data and model lifecycles, and it removes the risk that metrics, metadata, or versions of data and models are accidentally left out along the way. Companies at this level see a marked increase in productivity because they can consistently and quickly put AI models into production.
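Automated capture usually means instrumenting the lifecycle code itself, so facts are recorded as a side effect of running it rather than documented by hand afterward. A hypothetical sketch using a decorator (the metadata store here is just an in-memory list):

```python
import functools
import time

CAPTURED = []  # stand-in for a metadata store fed by the lifecycle tooling

def capture(step_name: str):
    """Record each lifecycle step's name, duration, and result automatically."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            CAPTURED.append({
                "step": step_name,
                "seconds": round(time.time() - start, 3),
                "result": result,
            })
            return result
        return inner
    return wrap

@capture("train")
def train_model():
    return {"auc": 0.91}   # placeholder for a real training run

train_model()
print(CAPTURED)  # the step was documented without any extra effort by the author
```

The data scientist writes no documentation code; the record is produced by the same invocation that trains the model, so it cannot be skipped or forgotten.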

Level 5: Fully automated AI lifecycle governance

This level uses the automation from the previous level to automatically enforce enterprise-wide policies on AI models, ensuring that policies are applied consistently throughout every model's lifecycle. At this point, an organization's AI documentation is produced automatically, with the right level of transparency throughout the organization, for regulators and, more importantly, for customers. This lets the team prioritize the riskiest areas for manual intervention. Companies at this level can be extremely efficient in their AI strategy while maintaining confidence in their risk levels.
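At this level, the policy check becomes a hard gate in the deployment pipeline rather than a manual review step. A minimal illustration; the two rules shown (sign-off required, fairness floor) are hypothetical stand-ins for real enterprise policies:

```python
def deployment_gate(factsheet: dict):
    """Enforce enterprise policies automatically; block deployment on violations."""
    violations = []
    if not factsheet.get("approved_by"):
        violations.append("missing sign-off")
    if factsheet.get("metrics", {}).get("disparate_impact", 0.0) < 0.80:
        violations.append("fairness below policy floor")
    return (len(violations) == 0, violations)

ok, why = deployment_gate({"approved_by": "", "metrics": {"disparate_impact": 0.7}})
print(ok, why)  # deployment is blocked, and the reasons double as audit evidence
```

Because every model passes through the same gate, the violation list itself becomes the prioritized queue of risky cases that still warrant human attention.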

Gartner [9] named Watson Studio on Cloud Pak for Data as a Leader among a group of 20 providers, crediting the solution’s thorough attention to responsible AI and governance.

IBM brings a comprehensive approach to this challenge [10], including IBM Research’s scientific and open source technologies [11], IBM Cloud Pak for Data platform offerings such as AI governance, and IBM’s Services consulting and industry-driven solutions. IBM can help you increase your AI lifecycle governance maturity level to ensure your AI systems satisfy the requirements of your business.

IBM provides expertise and technology for both components of AI governance. IBM services [4,5] can help organizations decide how they want their AI system to perform responsibly, and our Cloud Pak for Data platform [6] can provide the enhanced AI lifecycle that will help organizations implement their governance by leveraging the AI FactSheet technology [7] from IBM Research.

[6] ../2020/12/how-ibm-is-advancing-ai-governance-to-help-clients-build-trust-and-transparency/
Join the IBM AI Governance early access program
