As organizations scale their use of AI, they increasingly need to do so in a responsible and governed manner. This need is driven by several complementary forces: brand reputation, anticipated regulations [1,2,3], the growing complexity of AI, and social justice. Each of these forces alone would justify AI governance; their convergence makes it clear that AI governance is a critical capability for every enterprise deploying AI. Without proper AI governance, AI projects are at serious risk of failure.

What is AI governance?

AI governance is the process of defining policies and establishing accountability to guide the creation and deployment of AI systems in an organization. Capturing and managing metadata on AI models as part of these governance processes provides transparency into how AI systems are constructed and deployed, a key requirement for addressing most regulatory concerns.
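To make the idea of metadata capture concrete, here is a minimal sketch in Python. The record structure and field names are illustrative assumptions, not IBM's FactSheet format; it simply shows the kind of facts a governance process might keep for each model version:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelFactsheet:
    """Hypothetical metadata record captured for a single model version."""
    model_name: str
    version: str
    intended_use: str
    training_data: str
    metrics: dict = field(default_factory=dict)
    approvals: list = field(default_factory=list)
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Capture facts as the model moves through the lifecycle.
facts = ModelFactsheet(
    model_name="loan-default-classifier",
    version="1.3.0",
    intended_use="Rank retail loan applications for manual review",
    training_data="loans_2023_q4 (internal snapshot)",
    metrics={"auc": 0.87, "disparate_impact": 0.93},
)
facts.approvals.append({"role": "model-validator", "decision": "approved"})

# A serialized record like this is what reviewers, auditors, and regulators consume.
print(json.dumps(asdict(facts), indent=2))
```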

When done right, AI governance empowers organizations to operate with agility and confidence rather than slowing them down. As organizations deploy AI to automate existing or new business workflows and create time-to-market advantage, AI governance enables them to trust AI-powered outcomes at every step of the way. While AI-powered automation fuels the end-to-end experience in new or existing client offerings, governance, from data profiling and policy enforcement to model development and model risk management, grounds organizational trust both internally and externally.

Achieving AI governance requires:

1) an organizational structure that provides governance leaders with the information they need to set policies for the organization and establish accountability, and

2) an enhanced AI lifecycle that collects this information, enforces the policies specified by the governance leaders, and makes this information accessible to interested parties in a consumable, customized manner.

Done well, AI governance delivers considerable benefits to the enterprise [8]:

  • Gain greater visibility and automated documentation from metadata captured throughout the AI lifecycle
  • Improve outcomes and efficiencies from best practices learned through analysis of the metadata
  • Establish and enforce consistent policies during the AI development and deployment lifecycle
  • Facilitate communication and collaboration among data scientists, AI engineers, developers, and other stakeholders shaping the AI lifecycle
  • Build AI at scale, with a centralized, comprehensive view of all activities

To further understand the value of AI governance, we provide a simple maturity model that focuses on the governance of the AI lifecycle.

Level 0: No AI lifecycle governance

At this level, each AI development team uses its own tools, and there are no documented central policies for AI development or deployment. This approach provides a great deal of flexibility and is typical of organizations just starting their AI journey. However, it can introduce significant risk to the business if these models are deployed to production: with no framework in place, it is impossible to even evaluate the sources of risk. Companies at this level tend to find it difficult to scale their AI practices. Hiring ten times the data scientists does not yield a tenfold increase in AI productivity because of these inconsistencies.

Level 1: AI policies available to guide AI lifecycle governance

At this level, AI policies for constructing and deploying AI are created at either the line-of-business or enterprise level (for example, by a Chief Data Officer or Chief Risk Officer), along with a common definition of the information required before a model can be validated. However, because these policies are not enforced, individual AI systems remain siloed, with little consistency. At this stage, policies can be misunderstood and, in rare cases, even subverted, since there is no common monitoring framework to provide enforcement. The potential for risk is still high. Companies at this level do not see many productivity improvements, but they begin to develop strategies for measuring the success of their AI.
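As an illustration of what "policies without enforcement" can look like in practice, the sketch below encodes a documented pre-validation checklist that teams can consult but that nothing in the lifecycle actually enforces. The field names are hypothetical, not an IBM schema:

```python
# A documented (but unenforced) policy: the information every model must
# provide before validation. Field names here are illustrative, not an IBM schema.
REQUIRED_BEFORE_VALIDATION = [
    "intended_use",
    "training_data_source",
    "evaluation_metrics",
    "known_limitations",
    "model_owner",
]

def report_policy_gaps(model_card: dict) -> list[str]:
    """Return the required fields a team has not yet documented.

    At maturity level 1 this is purely advisory; nothing stops a model
    from being deployed with gaps in its documentation.
    """
    return [name for name in REQUIRED_BEFORE_VALIDATION if not model_card.get(name)]

model_card = {
    "intended_use": "Flag potentially fraudulent transactions",
    "training_data_source": "transactions_2024_h1",
    "evaluation_metrics": {"precision": 0.91, "recall": 0.78},
}
print("Missing before validation:", report_policy_gaps(model_card))
```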

Level 2: Common set of metrics to govern AI lifecycle

This level builds on Level 1 by defining a standard set of acceptable metrics and a monitoring tool to evaluate models. This brings consistency across AI teams and makes metrics comparable across different development lifecycles. A common monitoring framework is typically introduced to track these metrics so that everyone in the organization interprets them in the same way. This reduces risk and improves the transparency of the information needed to make policy decisions or troubleshoot reliability issues. Companies at this level usually have a central model validation team that upholds the enterprise's policies during validation, so they begin to see some productivity gains.
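The sketch below illustrates the idea of a shared metric definition: every team computes the same named metrics and compares them to the same thresholds, so results are directly comparable. The metric names and threshold values are illustrative assumptions, not prescribed standards:

```python
# A shared metric definition lets every team compute and interpret the same
# numbers. Thresholds below are illustrative, not prescribed values.
STANDARD_THRESHOLDS = {
    "accuracy": 0.80,          # minimum acceptable accuracy
    "disparate_impact": 0.80,  # common "four-fifths" fairness heuristic
}

def evaluate_model(metrics: dict[str, float]) -> dict[str, bool]:
    """Compare a model's reported metrics against the enterprise thresholds."""
    return {
        name: metrics.get(name, 0.0) >= threshold
        for name, threshold in STANDARD_THRESHOLDS.items()
    }

# Two teams report the same metric names, so their results are comparable.
team_a = {"accuracy": 0.86, "disparate_impact": 0.91}
team_b = {"accuracy": 0.82, "disparate_impact": 0.74}

for team, metrics in {"team_a": team_a, "team_b": team_b}.items():
    print(team, evaluate_model(metrics))
```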

Level 3: Enterprise data and AI catalog

This level leverages the metadata from Level 2 to ensure that all the assets in a model's lifecycle are available in an enterprise catalog [12], with data quality insights and data provenance. With a single data and AI catalog, the enterprise can trace the full lineage of data, models, lifecycle metrics, code pipelines and more. This lays the foundation for connecting the numerous versions of a model, enabling a full audit in compliance situations, and it gives a CDO or CRO a single view for a comprehensive AI risk assessment. Companies at this level can clearly articulate their AI-related risks and have a comprehensive view of the success of their AI strategy.
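The following toy catalog, an in-memory stand-in rather than the IBM data cataloging product [12], shows why a single registry of assets and lineage edges makes end-to-end tracing straightforward:

```python
from collections import defaultdict

# A toy stand-in for an enterprise data and AI catalog: registered assets plus
# the lineage edges connecting them. A real catalog adds search, quality
# scores, access policies, and versioning.
catalog: dict[str, dict] = {}
lineage: dict[str, list[str]] = defaultdict(list)  # asset -> upstream assets

def register(asset_id: str, kind: str, upstream: list[str] | None = None) -> None:
    catalog[asset_id] = {"kind": kind}
    lineage[asset_id].extend(upstream or [])

def trace(asset_id: str) -> list[str]:
    """Walk upstream lineage so an auditor can see everything a model depends on."""
    seen, stack = [], list(lineage[asset_id])
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.append(node)
            stack.extend(lineage[node])
    return seen

register("raw_claims_2024", kind="dataset")
register("claims_features_v2", kind="dataset", upstream=["raw_claims_2024"])
register("claims-severity-model:1.1", kind="model", upstream=["claims_features_v2"])

print(trace("claims-severity-model:1.1"))  # ['claims_features_v2', 'raw_claims_2024']
```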

Level 4: Automated validation and monitoring

This level introduces automation to capture information from the AI lifecycle automatically. This significantly reduces the burden on data scientists (and other lifecycle participants) to manually document their actions, measurements, and decisions. The captured information also enables model validation teams to make decisions about an AI model and to leverage AI-based suggestions. With this capability, an enterprise can significantly reduce the operational effort of documenting data and model lifecycles, and it reduces the risk of mistakes along the lifecycle, such as metrics, metadata, or data and model versions being omitted. Companies at this level start to see a sharp increase in productivity because they can consistently and quickly put AI models into production.
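One common way to automate this capture is to hook metadata collection into the lifecycle steps themselves, for example with a decorator, so documentation happens as a side effect of normal work. The sketch below is a simplified illustration of that pattern, not the mechanism used by any particular product:

```python
import functools
import json
import time
from datetime import datetime, timezone

def captured(stage: str):
    """Decorator that records lifecycle metadata automatically.

    A stand-in for the hooks a governance platform would install around
    training, evaluation, and deployment steps, so documentation happens
    as a side effect of normal work rather than as a manual chore.
    """
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            started = time.perf_counter()
            result = fn(*args, **kwargs)
            record = {
                "stage": stage,
                "step": fn.__name__,
                "params": kwargs,
                "result_summary": result if isinstance(result, dict) else str(result),
                "duration_s": round(time.perf_counter() - started, 3),
                "captured_at": datetime.now(timezone.utc).isoformat(),
            }
            print(json.dumps(record))  # in practice: sent to the metadata store
            return result
        return inner
    return wrap

@captured(stage="training")
def train_model(learning_rate: float = 0.01, epochs: int = 5) -> dict:
    # Placeholder for a real training loop.
    return {"auc": 0.88, "epochs_run": epochs}

train_model(learning_rate=0.005, epochs=10)
```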

Level 5: Fully automated AI lifecycle governance

This level uses the automation from the previous level to enforce enterprise-wide policies on AI models automatically. The framework now ensures that enterprise policies are enforced consistently throughout every model's lifecycle. At this point, an organization's AI documentation is produced automatically, with the right level of transparency across the organization, for regulators and, more importantly, for customers. This enables teams to prioritize the riskiest areas for manual intervention. Companies at this level can be extremely efficient in their AI strategy while maintaining confidence in their risk levels.
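A minimal sketch of such automated enforcement is a deployment gate that checks a model's captured metadata against enterprise policies and blocks the release if any check fails. The policy values and factsheet fields below are illustrative assumptions, not an IBM-defined schema:

```python
# Hypothetical enterprise policies applied at deployment time.
POLICIES = {
    "min_auc": 0.80,
    "min_disparate_impact": 0.80,
    "required_approvals": {"model-validator", "business-owner"},
}

def deployment_gate(factsheet: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a candidate deployment."""
    violations = []
    metrics = factsheet.get("metrics", {})
    if metrics.get("auc", 0.0) < POLICIES["min_auc"]:
        violations.append("AUC below enterprise minimum")
    if metrics.get("disparate_impact", 0.0) < POLICIES["min_disparate_impact"]:
        violations.append("fairness metric below enterprise minimum")
    approvals = {a["role"] for a in factsheet.get("approvals", [])}
    missing = POLICIES["required_approvals"] - approvals
    if missing:
        violations.append(f"missing approvals: {sorted(missing)}")
    return (not violations, violations)

candidate = {
    "metrics": {"auc": 0.87, "disparate_impact": 0.75},
    "approvals": [{"role": "model-validator"}],
}
allowed, why = deployment_gate(candidate)
print("deploy" if allowed else f"blocked: {why}")
```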

Gartner [9] named Watson Studio on Cloud Pak for Data as a Leader among a group of 20 providers, crediting the solution’s thorough attention to responsible AI and governance.

IBM brings a comprehensive approach to this challenge [10], including IBM Research's scientific and open-source technologies [11], IBM Cloud Pak for Data platform offerings such as AI governance, and consulting and industry-driven solutions from IBM Services. IBM can help you raise your AI lifecycle governance maturity level and ensure your AI systems satisfy the requirements of your business.

IBM provides expertise and technology for both components of AI governance. IBM Services [4,5] can help organizations decide how they want their AI systems to perform responsibly, and the Cloud Pak for Data platform [6] provides the enhanced AI lifecycle that helps organizations implement their governance by leveraging the AI FactSheets technology [7] from IBM Research.

[1] https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm

[2] https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

[3] https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

[4] https://www.ibm.com/services/artificial-intelligence

[5] https://www.ibm.com/products/expertlabs

[6] ../2020/12/how-ibm-is-advancing-ai-governance-to-help-clients-build-trust-and-transparency/

[7] https://aifs360.mybluemix.net/

[8] https://www.ibm.com/account/reg/us-en/signup?formid=urx-46439

[9] https://www.ibm.com/blogs/journey-to-ai/2021/03/ibm-is-named-a-leader-2021-magic-quadrant-for-data-science-and-machine-learning-platforms/

[10] https://www.ibm.com/watson/trustworthy-ai

[11] https://research.ibm.com/artificial-intelligence/trusted-ai/

[12] https://www.ibm.com/analytics/data-cataloging

Join the IBM AI Governance early access program
