
AI ethics and governance in 2025: A Q&A with Phaedra Boinodiris

06 December 2024

Author

Phaedra Boinodiris

Global Leader for Trustworthy AI

IBM Consulting

As we approach the end of a seemingly nonstop year of innovation, scandal and wonderment in AI, the technology’s ethics and governance have never been more important or more uncertain. In this three-part Q&A, we turn to IBM Consulting’s Global Leader for Trustworthy AI, Phaedra Boinodiris, for a look ahead at her 2025 predictions.


Looking at 2025, what would you say is the single most important ethical issue for AI?

That’s simple: literacy. AI literacy refers to the ability to understand, use and evaluate artificial intelligence. AI has become ubiquitous in the news; every day brings a new science-fiction-worthy headline. Yet people all over the world, in all types of roles and industries, still don’t know that they’re using it. So while it’s critical to solve the more newsworthy (and very real) issues, such as biased algorithms, data privacy, environmental impact and job displacement, none of this can be accomplished without an AI-literate world, from the workforce to government to school systems and beyond.

A very close second would be accountability. We need people in funded positions of power who are held accountable for the outcomes of these models.


What’s the biggest hurdle that you foresee in the actual development of ethical AI? 

The biggest hurdle is for development teams to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem, and that the team designing the model should be multidisciplinary rather than siloed. For decades, we’ve been communicating that those who don’t have traditional domain expertise don’t belong in the room. That’s a huge misstep.

To build responsibly curated AI models (which, by the way, are also more accurate models), you need a team composed of more than just data scientists, one that can weigh in from the get-go on questions such as: Is this AI solving the problem we need it to? Is this even the right data, according to domain experts? What are its unintended effects? How can we mitigate those effects? Bring in linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds. The wider the variety, the better. It’s not about morality; it’s about math.

What direction is AI governance headed globally?

Around the world, we’re witnessing the age-old push and pull between innovation and compliance as it relates to AI. That said, I just returned from presenting at the Trilateral Commission in Europe, and I’m feeling very inspired by the EU’s bold vision for AI.

It is true that far less investment capital is being spent on AI in the EU right now compared to the US, and I truly hope this will change soon. I think the EU has an enormous opportunity to double down on its efforts to show the world how to enable domain experts to have more control over data and how it is used to train AI. The EU could show the world how to take holistic approaches to AI literacy that embrace multidisciplinary programs. It could show the world how to certify third-party auditors who could hold organizations accountable for rogue models.

To me, it feels like Europe is leading the charge in a lot of ways right now in terms of AI governance, and I hope to see other countries follow suit. Europe’s commitment to embedding ethical principles into AI development is unparalleled. There’s an emphasis on human rights: protecting privacy, promoting transparency and mitigating unwanted bias. And interdisciplinary collaboration is huge, with programs like Horizon Europe and “How to Change the World,” along with the EU’s efforts to forge alliances with like-minded countries and organizations.

Responsible AI isn’t just about what we can build; it’s about why and how we build it. Diversity, equity and inclusion are core to an AI innovation strategy, not only because that’s the ethical path but because diverse perspectives drive more creative problem-solving, equitable access ensures broader societal impact and inclusive design reduces unwanted bias, creating technology that works for everyone.

