How Europe and the US Should Move Forward on AI

On 15 June 2021, the European Union and United States took a concrete first step towards stronger trans-Atlantic cooperation on digital, technology and trade issues with their joint launch of the Trade and Technology Council (TTC).

IBM strongly welcomed this announcement, citing the importance of having a structured framework for both sides to cooperate on digital policy principles, approaches, and standards, all of which are critical as democratic allies work together on challenges and opportunities linked to the digital economy. It is therefore exciting to see the TTC effort getting underway, with government representatives from both sides headed to Pittsburgh, Pennsylvania, for initial meetings on September 29.

With emerging technologies like artificial intelligence prompting societal and political interest in their long-term impacts, we commend this group for putting these critical topics at the forefront of their agenda. IBM has long held the view that powerful technologies such as AI need to be transparent and guided by targeted government regulation designed to promote trust without hindering innovation. We fully support the TTC’s focus on defining shared transatlantic principles for trustworthy AI based upon transparency, accountability, robustness, fairness and non-discrimination.

Drawing upon IBM’s global expertise in the development and deployment of AI systems and our extensive comments to governments worldwide on how best to approach the technology, we respectfully submit these further policy recommendations for the delegates bound for Pittsburgh:

Chart a common path for the ethical advancement of emerging technologies, especially AI, but also quantum computing, semiconductors, blockchain and 5G. These solutions will be critical to capability building on both sides and to help tackle urgent societal challenges such as climate change, global health crises and supply chain security. Establishing transatlantic leadership in emerging technologies will depend on concrete collaboration and agreements, such as:

  1. Establishing test sandboxes to build societal confidence in AI and blockchain.
  2. Providing resources to help all organizations, not just large corporations, adopt and deploy AI responsibly.
  3. Advancing research and development focused on detection and mitigation of bias in AI systems.
  4. Coordinating investment policies in semiconductor research and manufacturing.
  5. Expanding university and private sector research partnerships in quantum computing.
  6. Defining common trade protocols that encourage the safe and secure sharing of health data, such as vaccine and research data, to accelerate medical research and data-driven AI innovations.
  7. Seeking agreement on technology procurement, ensuring equal transatlantic access to public procurement markets for emerging tech, based on shared values and standards, such as open architectures and technology choice.

Work towards a common approach to rules of the road for “Good Tech” by tackling the entrenched and growing market power of online platforms and their impact on businesses and consumers, as well as the ethical development and deployment of AI. Specific actions could include:

  1. Collaborating on “reasonable care” standards for consumer platforms to ensure the safety of users online and stem the flood of harmful and illegal content that is damaging to society.
  2. Agreeing on common approaches to digital markets regulatory frameworks that establish a level playing field online, with rules preventing behaviors that could distort competition and impede the establishment of contestable digital markets.
  3. Advancing a “precision regulation” approach to AI, where the greatest regulatory control is focused on end uses with the greatest risk of societal harm. This approach should also require bias testing and mitigation for certain high-risk AI applications, such as law enforcement, and the TTC process could help accelerate consensus around clear and consistent standards for regular testing, transparency reporting and certifications.
  4. For facial recognition, we recommend that any shared principles encourage securing notice and consent for the use of the technology on social media platforms or public uses such as outdoor signage. With respect to law enforcement use cases, we urge strong safeguards to prevent misuse of facial recognition for mass surveillance, racial profiling and violations of basic human rights and freedoms. IBM no longer offers general purpose facial recognition and analysis software, and we have made clear that we would not condone any use of technology that conflicts with these values.
  5. Finally, we believe it makes great sense for the EU and US to work together to tighten export controls of facial recognition technology so that this powerful category of innovation is not used beyond either of our borders in ways that run counter to our shared values or commitment to upholding basic human rights and freedoms.

Establish a robust, privacy-centric data transfer mechanism between the two geographies. This has been sorely lacking since Privacy Shield was struck down, and there cannot be truly effective and competitive transatlantic cooperation on AI, innovation or the digital economy without smooth and trusted data flows.

IBM applauds officials on both sides of the Atlantic for tackling some of the most pressing digital economy issues first. Setting an ambitious agenda is key, and we are confident that if Europe and the US maintain their political commitment to this process, consult carefully with private sector and third-party experts, and allow each side freedom to legislate in their own ways based on shared principles, the TTC will be well on its way to achieving its full potential.


Christopher A. Padilla
Vice President, IBM Government and Regulatory Affairs
