
Reflecting on IBM’s AI Ethics Board: Insights from the past 5 years for the future

11 November 2024

When IBM’s AI Ethics Board received its mandate in 2019, the landscape of responsible artificial intelligence (AI) looked markedly different. AI was generally not covered by specific regulations; in fact, the EU AI Act, the world’s first comprehensive AI regulation, would not be proposed for another 2 years.

Many enterprises, including IBM, had only recently introduced guiding principles for responsible AI development and use. Many nonprofit coalitions advocating for responsible AI, such as the Partnership on AI, were in their infancy.

Although no one had a crystal ball to foresee AI’s future, it was clear that the industry was poised for great innovation. At IBM, we recognized that trust had to be central to this progress, starting with a multidisciplinary AI Ethics Board responsible for governance and decision-making in responsible AI.

As we reflect on the IBM AI Ethics Board’s 5th anniversary, 5 key changes stand out as especially meaningful to our business, the industry and the world.



5 key changes to AI

1. Generative AI is everywhere

Generative AI (gen AI) is unlocking capabilities and use cases that seemed impossible just 5 years ago. However, its potential for positive impact can only be realized if developed ethically and brought to market responsibly. As gen AI becomes more ubiquitous in our everyday lives, understanding and mitigating potential risks is more critical than ever. Gen AI introduces a host of new risks and amplifies many known risks, each with significant implications for responsible development and use.

In 2024, we developed the AI Risk Atlas. Embedded within IBM watsonx™, the AI Risk Atlas maps risks across the AI lifecycle, describing their potential impacts and providing practical examples. In keeping with our commitment to an open innovation ecosystem, we’ve made it freely available so that practitioners worldwide can access a comprehensive resource for mitigating risk.


2. AI governance is a must-have

As the power and potential of AI have multiplied, so too have its risks and potential for misuse. AI governance is no longer a nice-to-have; it’s a must-have.

Even AI developed with high-quality data and ethical guardrails needs ongoing monitoring and maintenance once in production. Every company developing, deploying or using AI must establish strong AI governance practices to be regulation-ready and to mitigate potential risks and harm. AI governance can also deliver both tangible and intangible returns on investment. Now more than ever, good governance is good business.

3. AI regulations and voluntary commitments are on the rise

Regulatory activity involving AI has accelerated at a sometimes-dizzying pace. From regional laws such as the EU AI Act to country-specific and even local regulations, such as New York City’s Automated Employment Decision Tool law, companies must stay current and help ensure that their governance frameworks support compliance with the growing patchwork of incoming regulatory requirements.

Since the inception of its AI Ethics Board, IBM has advocated for precision regulation of AI and has supported the risk-based approach of the EU AI Act. We are pleased that this approach aligns with the AI ethics framework that guides our practitioners in building responsible AI systems.

This increased regulatory activity, coupled with voluntary but non-binding commitments such as the European Commission AI Pact, the Seoul AI Business Pledge and the Rome Call for AI Ethics, has created a complex landscape for both deployers and users of AI. With all these regulations, initiatives and frameworks in play, interoperability will be key, an outcome IBM has long championed to foster transparency and build trust.

 

4. The future of AI is open

An open innovation ecosystem for AI is critical to help ensure that the benefits of AI are distributed broadly throughout society and that development coexists with safety. Open innovation doesn’t just mean open source software.

Open source and permissively licensed AI models are a key part of an open innovation ecosystem for AI, as are open source toolkits and resources, open data sets, open standards and open science. IBM has a long history of innovation in the open community. Some recent highlights include:

  • IBM® Granite™ models: In May 2024, IBM released a family of Granite models into open source, inviting clients, developers and global experts to push the boundaries of what AI can achieve in enterprise environments.
  • InstructLab: In May 2024, IBM and Red Hat launched InstructLab, an open source project aimed at enhancing large language models through constant incremental contributions, much like software development has functioned in open source for decades.
  • The AI Alliance: In December 2023, IBM and Meta co-founded the AI Alliance, which has grown from 50 founding members and collaborators to an active, international community of more than 100 leading organizations across industry, startups, academia, research and government, all working together to support open innovation and open science in AI.

5. It’s all about the data

Trustworthy data is the bedrock of trustworthy AI. Five years ago, considerations about data lineage and provenance weren’t as prominent as they are today, nor were there as many frameworks and tools available to help practitioners evaluate data quality. That is why IBM became a founding member of the Data & Trust Alliance in 2020 and led testing and adoption of its landmark Data Provenance Standards in 2024.

The broader the adoption of data quality measures such as the Data Provenance Standards across the data ecosystem, the simpler it becomes for data consumers to build responsible models and systems. We welcome more efforts like these that make responsible data use more accessible.

The way forward with AI

These past 5 years have brought significant change to the industry, but IBM’s commitment to trust has remained constant. This period has proven that the tremendous potential of AI and other emerging technologies can be achieved when they are built with ethics at the core and with diverse industry collaboration. Whatever new technological breakthroughs or digital transformations lie ahead, the way forward is rooted in responsibility.

Learn more about IBM’s approach to responsible AI

Trace the history and highlights of the IBM AI Ethics Board

Related solutions

IBM® watsonx.governance™

Govern generative AI models from anywhere and deploy on cloud or on premises with IBM watsonx.governance.

Discover watsonx.governance
AI governance solutions

See how AI governance can help increase your employees’ confidence in AI, accelerate adoption and innovation, and improve customer trust.

Discover AI governance solutions
AI governance consulting services

Prepare for the EU AI Act and establish a responsible AI governance approach with the help of IBM Consulting®.

Explore AI governance services
Take the next step

Direct, manage and monitor your AI with a single portfolio to speed responsible, transparent and explainable AI.

Explore watsonx.governance

Book a live demo