There’s no such thing as trustworthy AI without human accountability

21 January 2025

Author

Phaedra Boinodiris

Global Leader for Trustworthy AI, IBM Consulting

We often consider an organization’s AI maturity in terms of its infrastructure, models and operations. But in reality, an organization cannot truly be mature in AI unless it has done the work to earn people’s trust. Earning trust is not strictly a technical challenge with a technical solution, but a sociotechnical one, which means it requires a holistic approach. You need to think about people, processes and tools.

Thinking about people means thinking about the organizational culture required to create or implement AI responsibly. Thinking about processes means establishing guidelines and practices for AI governance, and thinking about tools means adopting AI engineering frameworks.

Can you guess which one is the hardest? If you guessed people, you’re right.

Accountability in people

When I ask large audiences at AI summits, “Who in your organization is accountable for responsible outcomes from AI?”, the top 3 answers I get are worrying.

The most common answer that I get is “no one,” which is deeply concerning.

The second most common answer is, “We don’t use AI.” The fact is, their employees are likely using AI, whether or not the organization is formally tracking it. More and more of the software that organizations have already procured has AI embedded in it.

And rounding out the top 3 answers is “everyone.” But if everyone is accountable for AI governance, is anyone truly accountable?

Responsible AI governance is a lot of work, and that work is expanding. Those who are accountable must drive value alignment within the organization. Value alignment requires that AI designers and developers collaborate to ensure that AI systems embody the values of the organization, their users and those they impact. Accountable parties must also keep track of the AI model inventory and closely monitor existing and upcoming regulations.

The purview of accountable parties must also include ethics, which begins with cultivating AI literacy in the organization. AI literacy, including applied training, is essential to understand how models might be “lawful but awful.” Those who are building, governing or procuring models must ensure that their AI models reflect the ethics of the organization, not merely be compliant with regulatory requirements.

Accountability in process and tools

There are 2 critical components to establishing scalable, sustainable AI governance:

1. Organizational AI governance
This addresses organizational strategy and planning. It includes answering questions such as:

•    What are we communicating to our employees about the relationship we want to have with AI?
•    What are the principles that we want to see reflected in our AI models, and what are the functional and nonfunctional requirements needed to operationalize those principles?
•    Do we have AI literacy programs?
•    Do we have people in place to do the work of AI governance?
•    Do we have processes in place to manage intake and keep track of our AI model inventory? (A minimal sketch of an inventory record follows this list.)

Who’s involved: business leaders, AI ethics boards, data and AI leaders, policy and regulations leaders, chief product officers (CPOs) and chief information security officers (CISOs).
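
To make the inventory question concrete, here is a minimal sketch in Python of the kind of intake record such a process might capture. The field names and risk tiers are illustrative assumptions, not a prescribed schema or a product format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    """Hypothetical intake record for one AI model in the organization's inventory."""
    model_name: str
    business_owner: str      # person accountable for outcomes, not just the builder
    intended_use: str        # the use case the model was approved for
    data_sources: list[str]  # provenance of training and evaluation data
    risk_tier: str           # e.g., "low", "limited", "high" under the org's own taxonomy
    principles: list[str]    # organizational principles the model must reflect
    date_registered: date = field(default_factory=date.today)

# Example intake of a procured model that embeds AI
entry = ModelInventoryEntry(
    model_name="resume-screening-v2",
    business_owner="VP, Talent Acquisition",
    intended_use="Rank applications for recruiter review",
    data_sources=["internal HR records 2019-2024"],
    risk_tier="high",
    principles=["fairness", "explainability", "privacy"],
)
```

Even a record this simple forces the intake conversation that matters: who is accountable, what the model was approved for and which organizational principles it must reflect.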

2. Automated AI model governance
This component addresses the development, deployment, operations, monitoring and portfolio management of the AI models themselves. It includes using tools such as watsonx.governance™ to answer where your data comes from, what models are in use, what your risks are and whether models are behaving as planned (a minimal sketch of such a check follows below).

Who’s involved: development teams, IT leaders, chief data and analytics officers (CDAOs), software and data science leaders and MLOps teams.
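
What “behaving as planned” means in practice is tool-specific, but the underlying check is simple to state: compare a model’s live metrics against the thresholds agreed when it was approved. The sketch below assumes a generic monitoring setup with hypothetical metric names; it is not a watsonx.governance API.

```python
# Minimal sketch of a "behaving as planned" check: compare live metrics
# against the thresholds agreed at approval time. Metric names and
# thresholds are illustrative, not a product API.

APPROVED_THRESHOLDS = {
    "accuracy": 0.90,          # minimum acceptable accuracy
    "disparate_impact": 0.80,  # minimum acceptable fairness ratio between groups
}

def check_model_health(live_metrics: dict[str, float]) -> list[str]:
    """Return the violated thresholds; an empty list means the model behaves as planned."""
    violations = []
    for metric, minimum in APPROVED_THRESHOLDS.items():
        observed = live_metrics.get(metric)
        if observed is None or observed < minimum:
            violations.append(f"{metric}: observed {observed}, required >= {minimum}")
    return violations

alerts = check_model_health({"accuracy": 0.87, "disparate_impact": 0.83})
if alerts:
    print("Escalate to the governance council:", alerts)
```

The value of automating this kind of check lies less in the arithmetic than in the escalation path: when a threshold is breached, a named, accountable person is notified.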

IBM helps clients with both challenges.

How IBM Consulting® can help

Clients can sometimes feel paralyzed by the scope and responsibility of governing AI. They often ask me, “What can we do to get unstuck?” In my opinion, the best way to get moving is to practice, to learn how to do the work of responsible AI in a safe environment.

The work that IBM Consulting does in a 3- to 6-month period begins with AI literacy programs aimed at those who will govern, build and procure AI models on behalf of the organization. This is not done only through standardized learning, videos and playbooks but also through applied training on use cases relevant to their organization, giving diverse, multidisciplinary teams practical insight into the work of AI governance.

These programs help all kinds of personnel answer important questions: Why is this AI investment aligned to business strategy? What are its risks? How can we mitigate risks by detailing functional and nonfunctional requirements for models and the systems they support?

We introduce AI Factsheets, explaining what they are and how to ensure that they are interpretable, and we provide an introduction to audits and how to interpret the results of a model audit. By the end of the period, these teams have hands-on practice and a set of artifacts that they can show to their nascent governing councils. This practice is valuable to those who will govern models, those who build them and even those who buy models on behalf of the organization. We teach teams how to use design-thinking frameworks created by IBM’s AI design guild.
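
For readers who have not encountered factsheets, the sketch below illustrates the kind of lifecycle facts such a document might collect and why serializing them makes a convenient audit artifact. The fields are illustrative assumptions, not IBM’s factsheet schema.

```python
import json

# Illustrative sketch of an AI Factsheet: a plain, human-readable record of
# facts gathered across the model lifecycle. Field names are assumptions,
# not IBM's factsheet schema.
factsheet = {
    "model": "resume-screening-v2",
    "purpose": "Rank applications for recruiter review",
    "training_data": {
        "sources": ["internal HR records 2019-2024"],
        "known_gaps": ["few applicants over 60"],
    },
    "evaluation": {"accuracy": 0.91, "disparate_impact": 0.83},
    "limitations": ["not validated for roles outside North America"],
    "approved_by": "AI ethics board",
}

# A serialized factsheet is easy to hand to a governing council or an auditor.
print(json.dumps(factsheet, indent=2))
```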

In parallel with this effort, we conduct current-state blueprints and journey maps, which yield targeted recommendations about the work an organization must do to stand up its AI governance framework. Many of our clients have been creating or procuring models with nonexistent or inadequate governance frameworks in place. We help them set up a watsonx.governance pilot to show them how to audit and manage their models. We provide tactical recommendations on personnel, skill sets, processes, communication plans, tools and engineering frameworks: everything an organization needs to move forward.

Contact IBM AI Consulting for applied governance training
