Automated governance and trustworthy AI

5-minute read | June 7, 2022

Artificial intelligence is being infused, slowly but surely, into all aspects of our lives, and it will be ubiquitous sooner than we might imagine. Governments and regulatory bodies around the world are working to establish safety standards.

In the U.S., the Consumer Financial Protection Bureau (CFPB) recently outlined options to prevent algorithmic bias in home valuations for mortgage lenders. The proposed rules aim to govern automated valuation models to protect borrowers. In October 2021, the White House Office of Science and Technology Policy announced, “Americans Need a Bill of Rights for an AI-Powered World.” The announcement highlighted the crucial role of training data, and the terrible consequences of using data that “fails to represent American society.”

As governments recognize and regulate the growing use of AI for crucial decisions, enterprises should prepare proactively. As part of their AI adoption, enterprises should define and adopt safety standards to manage their regulatory, financial, operational, technology and brand risks. Organizations must plan to govern and establish trustworthiness metrics for their use of AI. With this governance, AI can help enterprises fight discriminatory outcomes, even when AI was not involved in the original decision-making.

AI safety standards should include both governance automation and trustworthiness

Governance automation enables an enterprise to institutionalize the processes, policies and compliance of AI deployments and to continuously collect evidence, ensuring consistent, accurate, timely, efficient, cost-effective and scalable deployment of AI. This cannot be sustained by manual effort. Firms in regulated industries such as financial services, healthcare and telecom will see additional regulations enforced to ensure AI governance compliance, along with requirements to document evidence of that compliance.

As an example, a large financial services firm could easily deploy hundreds or thousands of AI models to assist decision makers in various tasks. The use of AI becomes necessary to take advantage of the massive amounts of transactional and client experience data. These models could include diverse use cases like client credit monitoring, fraud analytics, lending decisions, targeted micro marketing, managing chatbot interactions, call center analytics and others. For banks with multiple lines of business, including retail and corporate clients, managing diverse technology, operational, brand and regulatory risk exposure with appropriate process, policy and compliance automation becomes a daunting challenge.

Additionally, governance automation needs to be consistent enterprise-wide. It needs to be planned holistically to avoid new technical debt in the governance implementation and to future-proof the investments.

Trustworthiness in AI means that the results from AI models are continuously monitored and frequently validated based on model risk characteristics, so that decision makers, clients, regulators and other stakeholders can trust those results. Each stakeholder has their own perspective on trustworthiness. The stakeholders include:

  • The decision maker: the organization using the model outcomes to make a decision
  • The regulator: the internal auditor, validator, third-party organizations and government bodies performing oversight and reviewing the outcomes of the decision
  • The subject: the client or end user involved in the decision process

To understand the different perspectives of these stakeholders, consider a simple credit card or loan approval process. The loan officer makes the decision with the aid of an AI model. This decision maker needs to trust the model for the accuracy of its predictions, the quality of the training data used, and the fairness of the prediction, with explicit confirmation that various biases have been eliminated, so that they can explain the decision (based on the model prediction) as they would if it had been made without the assistance of an AI model.

The regulator monitoring the decisions needs all of the above across all decisions made by all decision makers. The regulator will look for evidence confirming that errors, omissions and biases have not occurred.
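One kind of evidence a regulator or validator might look for can be sketched in a few lines. The following is a minimal, illustrative check of the disparate impact ratio across logged loan decisions; the field names, groups and the 0.8 threshold are assumptions for the example, not the firm's actual policy or any product's API.

```python
# Hypothetical sketch: disparate impact ratio across logged decisions.
# Field names, groups and the 0.8 rule of thumb are illustrative assumptions.

def disparate_impact(decisions, protected_attr, privileged, favorable="approved"):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    def rate(group):
        outcomes = [d["outcome"] for d in decisions if d[protected_attr] in group]
        return sum(o == favorable for o in outcomes) / len(outcomes)

    groups = {d[protected_attr] for d in decisions}
    unprivileged = groups - {privileged}
    return rate(unprivileged) / rate({privileged})

# Toy decision log for illustration
decisions = [
    {"gender": "F", "outcome": "approved"},
    {"gender": "F", "outcome": "denied"},
    {"gender": "M", "outcome": "approved"},
    {"gender": "M", "outcome": "approved"},
]

ratio = disparate_impact(decisions, "gender", privileged="M")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50; values below 0.80 often flag potential bias
```

At enterprise scale the same computation would run continuously over every model's decision log, with the results retained as audit evidence.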

The client (the subject of the decision) will want to understand and trust the approval or denial of the loan decision. Explaining the decision to the client easily and at scale has always been as important as the decision. Now it can become a competitive advantage for brands that invest in automating explanations with easy-to-understand visuals.

Clients can and will continue to demand explanations of the decision.
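An automated explanation of that kind can be sketched simply. The example below turns a linear credit model's feature contributions into a plain-language summary for the client; the weights, baseline values and wording are illustrative assumptions, not the method described in the case study.

```python
# Hypothetical sketch: plain-language explanation from a linear model's
# feature contributions. Weights, baseline and features are assumptions.

WEIGHTS = {"credit_history_years": 1.5, "debt_to_income": -40.0, "late_payments": -8.0}
BASELINE = {"credit_history_years": 7, "debt_to_income": 0.30, "late_payments": 1}

def explain(applicant, top_n=2):
    # Contribution of each feature relative to a baseline applicant
    contributions = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    # Rank features by the size of their effect on the score
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for feature, c in ranked[:top_n]:
        direction = "helped" if c > 0 else "hurt"
        lines.append(f"{feature.replace('_', ' ')} {direction} your application")
    return "; ".join(lines)

summary = explain({"credit_history_years": 2, "debt_to_income": 0.55, "late_payments": 4})
print(summary)
```

The same ranked contributions could feed the easy-to-understand visuals mentioned above, so the client sees the two or three factors that mattered most rather than a raw score.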

Automate governance and trustworthiness of AI at scale

Organizations with multiple business segments in regional or global domiciles, governed by diverse regulatory regimes, need an open-architecture platform approach to successfully implement governance automation. An open platform should seamlessly automate the integration of AI compliance processes, operating policies and continuous monitoring of models for trustworthiness metrics such as accuracy, fairness, quality and explainability. Automation of compliance processes will require configurable workflows, and operating policies will vary across the enterprise based on business segment needs. IBM Cloud Pak® for Data provides a platform approach with scalable and configurable services for both governance automation and trustworthiness. It can be deployed on premises or in the cloud.

Case study: IBM Cloud Pak for Data at work in a large financial services firm

Recently IBM deployed AI governance automation and trustworthiness with IBM Cloud Pak for Data for a large financial services firm. The implementation was done in partnership with various teams on the client side to support the governance process holistically across the enterprise. The solution is configurable for business segment needs, with seamless integration to the technology platforms being used to develop and deploy AI models. This platform approach ensured existing AI investments in technology and skills were preserved while enabling compliance through governance automation.

IBM configured and customized IBM Cloud Pak for Data services — namely OpenPages®, IBM Watson® OpenScale, IBM Watson Machine Learning and IBM Watson Studio — to automate their AI governance enterprise-wide for the entire model development life cycle, from model idea inception to production.

IBM Cloud Pak for Data monitors and tracks the models in production and triggers alerts when models breach their predefined thresholds or when trends change across various metrics. Metrics include built-in types like quality, fairness, bias, accuracy, drift and explainability, as well as custom metrics defined by specific business needs.
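The shape of such a threshold check can be illustrated in a few lines. This is a generic sketch of the kind of rule a monitoring service evaluates, not Cloud Pak for Data's actual configuration or API; the metric names and limits are assumptions.

```python
# Hypothetical sketch of threshold-breach alerting for production models.
# Metric names and limits are illustrative assumptions.
# Each threshold is (direction, limit): "min" floors, "max" ceilings.

THRESHOLDS = {
    "accuracy": ("min", 0.85),  # alert if accuracy drops below 0.85
    "fairness": ("min", 0.80),  # e.g. a disparate impact ratio floor
    "drift":    ("max", 0.10),  # alert if drift score exceeds 0.10
}

def check_breaches(metrics):
    """Return alert messages for every metric outside its threshold."""
    alerts = []
    for name, value in metrics.items():
        if name not in THRESHOLDS:
            continue
        direction, limit = THRESHOLDS[name]
        if (direction == "min" and value < limit) or \
           (direction == "max" and value > limit):
            alerts.append(f"ALERT: {name}={value} breaches {direction} limit {limit}")
    return alerts

for alert in check_breaches({"accuracy": 0.91, "fairness": 0.72, "drift": 0.14}):
    print(alert)
```

In a real deployment these rules would be configured per model according to its risk characteristics, with alerts routed into the governance workflow for the model validator to act on.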

The solution automates the generation of facts about each model, collecting documents and metrics for the full life of the model. It enables easy tracking and handover of tasks and issues in a seamless workflow across the various actors and roles in the model lifecycle, such as the business owner, data scientist, model validator and data steward. By implementing this solution, the bank can confidently manage the operational, regulatory and technology risks of AI model deployments by different business segments across the enterprise with a uniform, integrated and automated platform.

Not using AI is not an option for any business. But using AI safely and responsibly requires C-suite commitment. Now is the moment for organizations to commit strategic investments towards holistically governing their use of AI in decision-making with an enterprise governance and trustworthiness platform approach.