Five best practices to improve compliance with AI regulations and standards
9 September 2021
3 min read

Over the past decade, the value of Artificial Intelligence (AI) has been demonstrated across many industries. This has increased many organizations' interest in AI technologies, not just to streamline business operations, but also to gain a competitive advantage.

However, the deployment and use of AI to support business operations may present significant risks to individuals, groups, and even society if it is not managed according to clear principles and practices, such as those set out in authoritative rules. Such management mitigates the possibility that AI infringes upon the fundamental rights of the individuals and groups subjected to it. For example, screening the resumes of prospective employment candidates with an AI system that is biased towards specific genders or ethnicities would clearly be unacceptable.

Authoritative rules come in different forms and have diverse application domains. They can be:

  • Government laws and regulations, such as Illinois’s new Artificial Intelligence Video Interview Act.
  • International standards, such as those under development by ISO AI working groups, or technology-focused standards, such as the IEEE standards for AI.
  • Internal organizational rules based on principles, policies, or procedures.

Therefore, to scale the deployment and use of AI, organizations should establish a compliance management program – one that addresses relevant requirements from applicable AI authoritative rules. Such a program constructs guardrails around the use of AI so it is consistent with the organization’s principles and values, as well as with its stakeholders’ expectations and demands.

Key practices to consider in establishing an AI compliance program

1. Develop a global view of AI compliance

The complexity and ever-changing nature of the authoritative rules an organization must follow can be overwhelming. In addition, introducing new AI rules may negatively impact the state of compliance with some pre-existing rules. It is therefore more effective for the organization to handle AI compliance in a systemic way, allowing a consistent compliance approach across the organization and leveraging appropriate controls to meet applicable requirements.

2. Get involved in AI compliance intelligence

Updates to existing AI authoritative rules and the emergence of new ones may require significant changes in the way an organization has set up its compliance program and controls. In addition, adapting to new AI requirements may introduce a level of complexity beyond what the organization is prepared to take on.

To efficiently adapt to these changes, organizations should proactively monitor the development and modification of the relevant AI authoritative rules.

3. Enable an AI compliance mapping capability

When an organization is subject to several AI authoritative rules, it is often difficult to identify the full set of AI requirements that must be addressed in a specific context. This is because mapping requirements from AI authoritative rules issued by different sources for different jurisdictions is a complex task that crosses several expertise areas (e.g., AI, privacy, security).

Using in-house or third-party resources, such as IBM Promontory Services, organizations can map out the common AI requirements that need to be fulfilled consistently and the supplemental ones that can be addressed as needed.
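
One way to picture such a mapping capability is sketched below: requirements from different authoritative rules are grouped by a normalized topic, so that topics covered by several rules become candidates for a single common control while the rest are handled as supplemental. The rule names, requirement identifiers, and descriptions are illustrative assumptions, not text from any actual regulation or standard.

    # Minimal sketch of a requirements-mapping structure. Rule names, requirement
    # identifiers, and descriptions are hypothetical examples, not actual legal text.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Requirement:
        rule: str      # authoritative rule the requirement comes from
        req_id: str    # identifier of the requirement within that rule
        topic: str     # normalized topic used to map requirements across rules
        text: str      # short description of the obligation

    requirements = [
        Requirement("AI Video Interview Act", "NOTICE-1", "transparency",
                    "Notify candidates that AI is used to evaluate video interviews."),
        Requirement("Internal AI policy", "POL-4", "transparency",
                    "Disclose automated decision-making to affected individuals."),
        Requirement("Draft ISO AI standard", "BIAS-2", "fairness",
                    "Assess and document bias in training data and model outputs."),
    ]

    # Group requirements by normalized topic: a topic covered by several rules is a
    # candidate for one common control; single-rule topics are supplemental.
    by_topic = defaultdict(list)
    for req in requirements:
        by_topic[req.topic].append(req)

    for topic, reqs in by_topic.items():
        kind = "common" if len(reqs) > 1 else "supplemental"
        sources = ", ".join(f"{r.rule} {r.req_id}" for r in reqs)
        print(f"{kind:>12} | {topic}: {sources}")

In practice, the normalized topics would come out of the cross-domain analysis described above (AI, privacy, security), and the resulting common controls would be reviewed by the relevant experts.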

4. Invest in AI compliance enablement

An effective AI compliance management approach includes a clear communication of the right practical steps to realize AI compliance objectives.

Organizations should develop appropriate process enablement and education activities to help employees understand their organization’s AI compliance objectives, their role in meeting those goals, as well as how to proceed in practice.

5. Enforce AI compliance positively

Enforcing AI compliance, using relevant technical and organizational measures, is critical.

A positive compliance enforcement approach, based on promoting trust and transparency rather than overemphasizing verification, is often more effective because it secures the full support of employees in meeting AI compliance objectives.

How technology can help

Technology plays an important role in supporting an effective AI compliance program. For example, it can help:

  • Manage the set of authoritative rules and their underlying requirements, allowing efficient mapping of requirements to appropriate compliance objectives and controls.
  • Enable employees to make the right decisions and take the appropriate actions to meet compliance objectives and controls, while managing associated risks efficiently.
  • Measure and monitor compliance progress and report on it transparently as needed (a simple illustration follows this list).
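
As a simple illustration of the measuring and reporting point above, the sketch below aggregates the results of periodic control tests into a per-control and overall status. The control names, objectives, and counts are hypothetical and stand in for whatever a real governance or GRC tool would provide.

    # Minimal sketch of compliance measuring and reporting. Control names,
    # objectives, and test counts are hypothetical placeholders.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ControlStatus:
        control: str        # compliance control being tracked
        objective: str      # compliance objective the control supports
        tests_passed: int   # periodic control tests that passed
        tests_total: int    # periodic control tests performed

    statuses = [
        ControlStatus("Candidate AI-use notice", "Transparency", 18, 20),
        ControlStatus("Bias assessment before model release", "Fairness", 9, 12),
        ControlStatus("Model change approval", "Accountability", 25, 25),
    ]

    def pass_rate(s: ControlStatus) -> float:
        return s.tests_passed / s.tests_total if s.tests_total else 0.0

    # One line per control, weakest first, plus an overall figure for transparent reporting.
    print(f"AI compliance status report, {date.today().isoformat()}")
    for s in sorted(statuses, key=pass_rate):
        print(f"  {s.control} ({s.objective}): {pass_rate(s):.0%} "
              f"of tests passed ({s.tests_passed}/{s.tests_total})")

    overall = sum(s.tests_passed for s in statuses) / sum(s.tests_total for s in statuses)
    print(f"  Overall: {overall:.0%} of control tests passed")

The same figures can feed dashboards or periodic reports so that progress against compliance objectives stays visible to leadership and employees alike.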

With an effective AI compliance program, sponsored by leadership and endorsed by employees, companies can achieve the compliance needed to infuse trustworthy AI throughout the enterprise.

Author: Abdel Hamou-Lhadj, Tech Ethics Offering and Compliance Leader at IBM