AI governance should be a best practice for every organization, and here's why.
Discover what proper AI governance delivers.
See what guidance different countries and regions recommend.
Learn why a centralized, comprehensive view of models is important.
Automation in AI governance is crucial to maintaining a competitive edge while meeting regulations.
Automating governance processes is just the beginning.
Different capabilities help you know, trust, and use your AI models.
2 min read
Leaders of enterprises creating AI services face an emerging challenge: how to effectively govern the creation, deployment and management of those services throughout the AI lifecycle. These officials want to understand and gain control over their processes to meet internal policies, external regulations or both. This is where AI governance makes a difference.
AI governance is the ability to direct, manage and monitor the AI activities of an organization. In particular, leaders of organizations in regulated industries, such as banking and financial services, are legally required to provide a certain level of transparency into their AI models to satisfy regulators. Failure to offer this transparency can lead to seven-figure fines and penalties. As a result, AI models can no longer operate as black boxes: enterprise leaders must provide greater visibility into their automated processes and clear documentation of the health and functionality of their models in order to meet regulations.
Read on to find out what AI governance is, why it is important, and how IBM can help your organization embrace it as a practice.
4 min read
AI governance is the ability to direct, manage and monitor the AI activities of an organization. This practice includes processes that trace and document the origin of data, models and associated metadata and pipelines for audits. The documentation should include the techniques that trained each model, the hyperparameters used, and the metrics from testing phases. The result of this documentation is increased transparency into the model’s behavior throughout the lifecycle, the data that was influential in its development, and its possible risks.
Before a model is put into production, it is validated to assess its risks to the business. Once the model goes live, it is continuously monitored for fairness, quality and drift. Regulators and auditors are given access to its documentation, which provides explanations of the model’s behavior and predictions. These explanations give visibility into how the model works and what processes and training it received.
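The documentation described above can be captured as structured metadata. As a minimal sketch (the field names here are illustrative, not a standard factsheet schema), a model record for an audit trail might look like this:

```python
from dataclasses import dataclass, field, asdict
from typing import Dict, List

@dataclass
class ModelFactSheet:
    """Illustrative record of the facts auditors typically need.
    Field names are hypothetical, not a standard schema."""
    model_name: str
    training_technique: str          # e.g., "gradient-boosted trees"
    training_data_source: str        # lineage of the training data
    hyperparameters: Dict[str, float]
    test_metrics: Dict[str, float]   # results from testing phases
    known_limitations: List[str] = field(default_factory=list)

    def to_audit_record(self) -> dict:
        """Flatten the factsheet for an audit log or model inventory."""
        return asdict(self)

# Hypothetical example entry
sheet = ModelFactSheet(
    model_name="loan_default_v3",
    training_technique="gradient-boosted trees",
    training_data_source="loans_2018_2021 (curated extract)",
    hyperparameters={"learning_rate": 0.1, "max_depth": 6},
    test_metrics={"auc": 0.87, "false_positive_rate": 0.04},
    known_limitations=["not validated for business loans"],
)
print(sheet.to_audit_record()["model_name"])  # loan_default_v3
```

Keeping this kind of record per model is what makes the behavior, influential data, and risks described above traceable across the lifecycle.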
For a deeper dive on AI governance and documentation research, read the AI FactSheets 360.
Proper AI governance gives enterprises the ability to achieve the following benefits:
By 2022, 65 percent of enterprises will task CIOs to transform and modernize governance policies to seize the opportunities and confront new risks posed by AI, machine learning (ML) and data privacy and ethics.1
Drivers behind this trend include the following demands for enterprises:
Watch this webinar to find out how AI governance can help organizations scale their AI.
7 min read
News stories from the last few years show that AI can be discriminatory—the most widely known examples have occurred in banking and financial services.2 However, other sectors are not immune. For example, an online retailer disbanded an internal team after a controversy over the algorithms it used to vet potential employees. The algorithms were said to be biased because they were trained mostly on male resumes, meaning they could identify more male candidates than female candidates and perpetuate a gender bias.3 Similarly, in the public sector, a study revealed that UK police officers questioned whether using algorithms to predict future crime could result in bias and discrimination.4
To reduce risk from factors such as AI bias, many countries and regions have adopted guidance on how to govern AI.
SR 11-7 is the US regulatory standard for effective model risk governance in banking. The regulation requires bank officials to apply company-wide model risk management initiatives and maintain an inventory of models implemented for use, under development for implementation, or recently retired. Leaders of these institutions must also prove their models are achieving the business purposes for which they were intended, and that they are up to date and have not drifted. Model development and validation must enable anyone unfamiliar with a model to understand its operations, limitations and key assumptions.5
Canada’s Directive on Automated Decision-Making describes how that country’s government uses AI to guide decisions in several departments. The directive uses a scoring system to assess the human intervention, peer review, monitoring and contingency planning needed for an AI tool built to serve citizens. Organizations creating AI solutions with a high score must conduct two independent peer reviews, offer public notice in plain language, develop a human intervention failsafe, and establish recurring training courses for the system.6 Because Canada’s Directive on Automated Decision-Making is guidance for the country’s own development of AI, it doesn’t directly affect companies the way SR 11-7 does in the US.
In 2019, the European Commission’s incoming president said she planned to introduce new legislation governing AI.7 The new legislation on AI would require high-risk AI systems to be transparent, traceable and under human control. Authorities would check AI systems to make sure data sets were unbiased. The commission also wanted to launch a debate throughout the European Union (EU) about when and whether to use facial recognition and other biometric identification.8
In the Asia-Pacific region, countries have released several principles and guidelines for governing AI. In 2019, Singapore’s federal government released a framework with guidelines for addressing issues of AI ethics in the private sector. India’s AI strategy framework recommends setting up a center for studying how to address issues related to AI ethics, privacy and more. China, Japan, South Korea, Australia and New Zealand are also exploring guidelines for AI governance.9
6 min read
Any company using AI models to automate its business processes needs governance. For instance, a leading telecommunications company may be developing multiple models, or its officials may have discovered significant redundancy in their efforts and want to adopt best practices.
Many companies have multiple data science teams using different tools to build models. These teams need the following insights:
Some companies use AI to detect fraudulent insurance claims, identity theft and illegal impersonation, money laundering, and other fraud. Insurers, for instance, use natural language processing to draw value from unstructured text, and image recognition and classification to work faster. In the US, anti-money laundering and fraud detection models used by insurers and others have been subject to review since 2011.10 Robust AI governance extends this model governance. Additionally, AI governance can help ensure that models remain accurate and effective by reviewing the design process and determining whether the models continue to be adequate for real-life situations.
Watch this webinar on automating AI model risk management at financial firms.
Proper AI governance includes checkpoints in the AI lifecycle with clear accountability at each checkpoint. For instance, retailers using AI for product recommendations or supply-and-demand forecasting need to ensure their models don’t drift. Healthcare organization leaders who use AI to look for patterns in medical research need to debias their models to ensure the data on which they’ve been trained fairly represents protected features such as gender, race and zip code.
The need for AI governance is similar to the need for software development governance a few decades ago. Enterprise executives determined too much of their software development was ad hoc, so they created the CIO office to help govern the processes. Now, the responsibility for AI governance should fall to a position such as the chief data officer (CDO) or chief risk officer (CRO).
There are many negative consequences for a company that does not adopt AI governance, one being a lack of efficiency. The machine learning process is iterative and requires collaboration. Without good governance and documentation, data scientists or validators can’t be sure of the lineage of a model’s data or how the model was built, making results challenging to reproduce. If a model is trained on wrong or incomplete data, months of work could be lost.
Lack of AI governance can also result in significant penalties. Bank operators have been issued seven-figure fines for using biased models when determining loan eligibility. The EU plans to add AI regulations to the General Data Protection Regulation (GDPR). GDPR infringements currently can “result in a fine of up to €20 million, or 4% of the firm's worldwide annual revenue from the preceding financial year, whichever amount is higher.”11
Brand reputation is also at risk. One experiment used AI software to learn the speech patterns of young people on social media. Administrative officials removed the software quickly after internet trolls “taught” the tool to create racist, sexist, and anti-Semitic posts.
Problems can arise when companies conduct AI governance processes manually. Data governance can include manual data validation, comparison, and other intervention, which requires familiarity with the data management and handling process. When work is done manually, model validators may need to develop expertise in each type of algorithm being used, which is slow and costly and can result in human error. These delays could leave the company falling behind its competitors or late in handing over information to auditors. With automation, the documentation and validation processes of AI governance become much more efficient.
The risk manager of one major bank said, “We're looking at automating all handovers. Once a model is developed, there should be no more need to describe the model. Today, this developer needs to document everything about the model manually.” According to a model validator, a model document can be hundreds of pages because the description contains everything about the model.
Therefore, manual AI governance with documentation is not enough. Automation in AI governance is crucial to maintaining a competitive edge while meeting regulations.
People are at the core of AI governance, deciding what data to use for building models. Many skill sets are needed across the AI lifecycle, including product owners, model developers, model validators and model deployment engineers. That’s why IBM® offers solutions that not only help automate AI governance processes but also provide the following features:
Watch this webinar for a deep dive on AI governance and which automation tools can benefit your organization.
3 min read
IBM solutions for AI governance are designed to help you achieve the following tasks:
At the core of IBM AI governance are capabilities to deliver model fairness, explainability, and standardized documentation. IBM Cloud Pak® for Data, for example, is a cloud-based, unified AI platform that tracks and measures outcomes from AI across its lifecycle. The solution adapts and governs AI to changing business situations—for models built and running anywhere.
Cloud Pak for Data consists of a full stack of components for every stage of the AI lifecycle, including built-in governance, purpose-built AI model risk management and collaboration tools. Examples of these components include Watson™ Knowledge Catalog, Watson OpenScale™ and Watson Studio. Watson Knowledge Catalog organizes data for governed use, Watson Studio provides a governance-enabled build platform, and Watson OpenScale delivers automation of governance processes and tests.
IBM also offers open-source AI governance toolkits. AI Fairness 360 helps examine, report and mitigate bias in models throughout the AI application lifecycle. AI Explainability 360 includes metrics for explaining a model's processes and decision-making. AI Adversarial Robustness 360 helps researchers and developers defend and verify AI models against adversarial attacks.
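The kind of fairness metric these toolkits report can be illustrated with a self-contained sketch. Below is the disparate impact ratio, one of the metrics AI Fairness 360 computes, implemented from scratch on hypothetical hiring data (the data, group labels, and the 0.8 threshold from the common "four-fifths rule" are illustrative, not from the toolkit itself):

```python
def disparate_impact(outcomes, groups, favorable=1, privileged="M"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A value below ~0.8 (the 'four-fifths rule') is a common red flag."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == favorable for o in priv) / len(priv)
    rate_unpriv = sum(o == favorable for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Hypothetical hiring decisions: 1 = hired, 0 = rejected
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["M", "M", "M", "M", "F", "F", "F", "F", "M", "F"]

ratio = disparate_impact(outcomes, groups)
print(round(ratio, 2))  # 0.25 — well below 0.8, a likely bias signal
```

A toolkit such as AI Fairness 360 computes this and many other metrics across the lifecycle, and also provides mitigation algorithms rather than just measurement.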