Why Europe's AI Moment Is Now
Oct 09, 2023

AI will be among the issues topping the agenda when French President Emmanuel Macron and German Chancellor Olaf Scholz meet for bilateral talks in Hamburg this week. Franco-German alignment on advancing a responsible, pro-innovation AI regulatory framework is key as European policymakers enter the final stretch of negotiations on the EU AI Act.

The stakes are high. AI is projected to fuel economic growth, boost GDP, and offer a competitive edge to the individuals, governments, and organizations that effectively leverage its capabilities for greater productivity. And the EU AI Act, on course to become the most sweeping and sophisticated AI law in the world, has the potential to serve as a global blueprint for smart, responsible, and effective AI regulation. To meet this ambition, EU leaders should prioritize three policy areas.

First, European policymakers should preserve the EU AI Act’s risk-based, technology-neutral approach. Not all uses of AI carry the same level of risk. While the vast majority of today’s systems carry minimal risk, some can have far-reaching consequences, such as propagating disinformation or compromising election integrity. Because each application is unique, regulation must account for the context in which AI is deployed to ensure that only the uses with the potential to cause significant harm are regulated closely. Consistent with IBM’s belief that AI should complement human intelligence, we support the idea that if an AI system does not replace a human assessment, then it should not be classified as high risk.

Second, EU policymakers should clarify and differentiate the compliance responsibilities of developers and deployers. Legal certainty is critical to ensuring AI adoption. Just as EU privacy and security laws distinguish between different types of companies that handle consumers’ personal data, legislators should carefully consider the different roles of AI providers and deployers and hold them accountable in the context in which they develop or deploy AI. The allocation of responsibilities in the EU AI Act should recognize that, in most cases, the deployer determines a system’s intended purpose – and is therefore closest to the point of risk – so the deployer is best placed to comply with the Act.

Third, EU policymakers should resist calls for overly restrictive rules on foundation models and general-purpose AI systems, which would hinder innovation and AI uptake. What’s more, over-regulation will negatively impact all providers, at the risk of further entrenching the positions of a handful of large companies. At IBM, we believe a vibrant, open AI ecosystem is good for competition, innovation, skilling, and security – and helps guarantee that diverse and inclusive voices shape AI models. To that end, policymakers should strengthen the provider-deployer relationship by encouraging information-sharing and documentation so deployers can comply with the Act’s requirements when they deploy foundation models in high-risk uses.

Consistent with a risk-based approach to AI regulation, we believe that only those models that could potentially cause harm to a significant number of consumers should be subject to greater regulatory obligations. To designate such models, EU lawmakers could draw from the criteria used in the Digital Services Act (DSA) to classify very large online platforms and only capture those systems in direct contact with over 45 million users.

AI has the potential to help address some of our most pressing challenges, whether it’s pioneering drug discovery, improving vital infrastructure maintenance, or confronting climate change, as long as it is developed, deployed, and governed responsibly. As the EU AI Act takes its final shape, Europe has an opportunity to lay the regulatory foundations for AI safety worldwide. Our leaders cannot miss this moment.

By Jean-Marc Leclerc, Head of EU Policy at IBM and Co-director of the IBM Policy Lab

Read more from IBM CEO Arvind Krishna on how companies and governments should advance trusted AI.
