For businesses small and large, artificial intelligence (AI) is associated with various exciting words such as innovation, opportunity and competitive advantage. But there’s another word that needs to be on that list: compliance.
Some 73% of businesses are already using analytical and generative AI, and 72% of top-performing CEOs say that competitive advantage depends on who is using the most advanced AI.1
But this boom in AI use and its exciting potential comes with growing concerns about the ethics and safety of AI-powered technologies. If flawed development leads to biased algorithms that perpetuate discrimination (in recruitment, law enforcement or financial decisions, for example), the consequences might be dire and long-lasting.
As a result, companies, countries and policymakers are weighing AI governance and setting new rules for how AI can be used and developed. Take a look at what AI compliance is, why it matters for businesses and what steps companies can take to stay compliant in a fast-evolving regulatory landscape.
AI compliance refers to the decisions and practices that enable businesses to stay in line with the standards governing the use of AI systems. These standards include laws, regulations and internal policies designed to help ensure that organizations develop AI models and their algorithms responsibly.
But AI compliance processes go beyond meeting legal requirements. They are also about building trust with stakeholders and promoting transparency and fairness in decision-making. And they are essential to safety: because AI can be exploited by malicious actors, robust cybersecurity measures and risk management strategies are at the heart of AI compliance.
AI compliance processes help businesses avoid the financial, legal and reputational risks associated with the use of AI tools.
The more businesses use AI, the more they might encounter situations in which the technology takes unexpected or erroneous turns. For example, one company abandoned its AI recruiting tool after finding it perpetuated gender discrimination due to the materials used to train it.2 And investigations have found that some algorithm-driven loan applications can lead to discrimination against applicants of color.3
Concerns about these issues are prompting a wave of efforts to standardize how AI is developed and used by businesses. In 2024, the European Union became the first major market to impose rules around AI with the launch of the EU AI Act. Other jurisdictions, including the United States and China, are also developing their own AI regulations.
Noncompliance can be costly. Under the EU's General Data Protection Regulation (GDPR), companies can face fines of up to EUR 20 million or 4% of their global annual turnover, whichever is higher. In the United States, the Federal Trade Commission (FTC) can take enforcement actions against companies for AI-related violations, such as the use of biased machine learning algorithms.4
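The GDPR's "whichever is higher" rule can be sketched as a quick worked example in Python; the turnover figures below are hypothetical illustrations:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine for the most serious violations:
    EUR 20 million or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical company with EUR 1 billion turnover: 4% (EUR 40M) exceeds EUR 20M.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# Hypothetical company with EUR 100 million turnover: the EUR 20M floor applies.
print(max_gdpr_fine(100_000_000))    # 20000000.0
```

For large firms the percentage term dominates, which is why the same violation can cost one company orders of magnitude more than another.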
Compliance is also essential for protecting brand reputation. A 2024 survey by KPMG found that 78% of consumers believe that organizations that use AI have a responsibility to help ensure it is being developed ethically.5 Failure to do so can lead to a loss of business and consumer trust.
By helping ensure that AI systems are reliable, transparent and accountable, businesses can drive innovation, improve efficiency and gain a competitive edge in the market.
If regulatory compliance were about meeting one clear set of requirements, the path forward would be simple. But AI technologies evolve quickly, and the diverse guidelines aimed at governing them evolve nearly as fast.
The technology itself complicates compliance activities. Understanding and interpreting AI models and algorithms can be technically challenging, especially because many AI systems operate in real time. And the fast rate of AI advancement requires businesses to constantly adapt their compliance programs to keep pace with evolving regulations.
Countries are in the process of enacting AI standards that might reshape how the technology is governed globally. In addition to these AI-specific laws and regulations, businesses and AI providers also need to comply with a growing web of rules around data privacy, discrimination and cybersecurity. To complicate matters further, these requirements sometimes apply not just to companies and AI providers that operate in their specific region, but also to anyone doing business in the region.
Some key issues and regulations include:
Europe’s GDPR sets specific standards for data privacy, data analysis and personal data use. The EU AI Act, regarded as the world's first comprehensive regulatory framework for AI, prohibits certain AI uses and imposes risk management and transparency requirements on others. It follows a risk-based approach to AI regulation, with stricter mandates for high-risk systems.
The United States does not have a comprehensive regulation yet, but various compliance requirements exist at federal and state levels. For example, the executive order on Maintaining American Leadership in Artificial Intelligence sets guidelines for AI development and use. Industry-specific laws, such as the Health Insurance Portability and Accountability Act (HIPAA) or the Fair Credit Reporting Act (FCRA), might also apply to AI.
In August 2023, China introduced specific regulations for generative AI, called the Interim Measures for the Management of Generative Artificial Intelligence Services. These measures include content standards and rules for data privacy, labeling and generative AI licensing. China also has specific regulations targeting AI-driven recommendation algorithms and deep synthesis technologies, such as deepfakes.
While AI compliance is crucial across all sectors, it is especially important in industries such as:
Use cases for AI in healthcare include disease diagnosis, drug discovery and personalized medicine. Failure to comply with regulations such as the United States Health Insurance Portability and Accountability Act (HIPAA), which protects patient privacy, might lead to fines or legal repercussions. And biased or poorly trained algorithms can lead to misdiagnoses or inadequate treatment plans for patients.
AI has many financial applications, from fraud detection and risk assessment to anti-money laundering activities. However, these AI applications must comply with regulations such as the US Fair Credit Reporting Act (FCRA) and the EU's Markets in Financial Instruments Directive (MiFID II). AI compliance efforts aim to prevent algorithms from discriminating in loan applications and other key decision-making.
HR professionals increasingly use AI-powered tools for the automation of routine tasks and to streamline resume screening, candidate assessment and employee monitoring. But if the algorithms are trained on skewed or inadequate data, they can result in unfair and potentially illegal bias. Compliance with anti-discrimination laws and data protection regulations helps ensure transparency, fairness and privacy.
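One simple check that compliance teams apply to screening outcomes like these is the "four-fifths rule" US regulators use to flag potential adverse impact. Below is a minimal, self-contained sketch in Python; the selection data is hypothetical illustration, not output from any real screening tool:

```python
def selection_rate(decisions):
    """Fraction of candidates selected (1 = advanced, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one.
    Under the four-fifths rule, values below 0.8 are a red flag."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical resume-screening outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% advanced
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% advanced

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold: review the model for bias.")
```

A failing ratio does not prove illegal discrimination on its own, but it is a common trigger for a deeper audit of the training data and features.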
Businesses are increasingly aware of the need to comply with existing AI regulatory requirements and prepare for future rules. One survey of international compliance and risk experts found that more than half of respondents had concerns about data privacy, algorithmic transparency and misuse or misunderstanding around artificial intelligence.6
Another study of C-suite executives found that 80% plan to increase investment in a responsible approach to artificial intelligence to build trust and confidence in their models.7 As a result, many companies are taking proactive steps to help ensure AI compliance.
Some companies are establishing frameworks that outline internal policies, procedures and responsibilities for the ethical development and use of AI. For example, Microsoft released its Responsible AI Standard, which includes conducting regular risk assessments, implementing data protection measures and prioritizing transparency and accountability in decision-making.8 And Google's AI Principles, updated in 2023, emphasize the importance of fairness, transparency and privacy in AI development.9
Businesses are also actively engaging with regulators and industry stakeholders to stay informed about regulatory changes and compliance issues. An IBM survey of business leaders found that 74% are planning to join discussions with peers or collaborate with policymakers on artificial intelligence. These efforts help businesses prepare for new regulations and participate in the development of future guidelines.
To streamline compliance efforts, businesses are investing in various AI compliance tools and technologies. For example, explainable AI (XAI) tools can help businesses understand and interpret decisions made by AI models, while AI governance platforms can provide real-time monitoring and auditing capabilities. Governance products, such as IBM® watsonx.governance™, offer toolkits for staying aligned with regulations, evaluating risk and managing model evolution.
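As a rough illustration of what an XAI technique does, here is a minimal permutation-importance sketch in Python. The scoring model and applicant data are hypothetical, and the single deterministic cyclic-shift permutation stands in for the many random shuffles a real XAI library would average over:

```python
def model_score(row):
    # Hypothetical linear scoring model: income dominates, age barely matters.
    return 0.7 * row["income"] + 0.05 * row["age"]

def permutation_importance(rows, feature, score_fn):
    """Mean absolute change in model output when one feature's values are
    permuted across rows. Larger values mean the model leans more heavily
    on that feature."""
    values = [r[feature] for r in rows]
    permuted = values[1:] + values[:1]  # deterministic cyclic shift
    deltas = [abs(score_fn({**r, feature: v}) - score_fn(r))
              for r, v in zip(rows, permuted)]
    return sum(deltas) / len(deltas)

# Hypothetical loan applicants
applicants = [
    {"income": 30, "age": 25},
    {"income": 55, "age": 40},
    {"income": 80, "age": 33},
    {"income": 120, "age": 58},
]

# Income importance (~31.5) dwarfs age importance (~1.0), matching the
# model's weights -- the kind of evidence an auditor can act on.
print(permutation_importance(applicants, "income", model_score))
print(permutation_importance(applicants, "age", model_score))
```

Surfacing which inputs drive a model's decisions in this way is what lets a compliance team demonstrate, rather than merely assert, that a protected attribute is not influencing outcomes.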
As advancements in AI technology continue to emerge, so do the risks and challenges associated with its use. The key is to take a proactive approach: invest in the necessary resources, expertise and technologies to develop and implement robust AI governance frameworks, and foster a culture of transparency, accountability and trust in the development and use of AI systems. Prioritizing AI compliance helps businesses mitigate these risks and tap into the full potential of AI.
1 PwC’s 2024 US Responsible AI Survey, PricewaterhouseCoopers, April 2024
2 Amazon scraps secret AI recruiting tool that showed bias against women, Reuters, October 2018
3 The secret bias hidden in mortgage-approval algorithms, Associated Press, August 2021
4 California company settles FTC allegations it deceived consumers about use of facial recognition in photo storage app, Federal Trade Commission, January 2021
5 KPMG Generative AI Consumer Trust Survey, KPMG, January 2024
6 How can Artificial Intelligence transform risk and compliance?, Moody’s, February 2024
7 From AI compliance to competitive advantage: Becoming responsible by design, Accenture, June 2022
8 Microsoft’s Responsible AI Standard, Microsoft, June 2022
9 Google AI: Our Principles, Google, March 2023