Who is accountable for responsible AI? The answer might surprise you

28 April 2025

Author

Phaedra Boinodiris

Global Leader for Trustworthy AI

IBM Consulting

Sourcing, Procurement and Vendor Management (SPVM) leaders have substantial work to do when it comes to incorporating responsible AI governance into their software and vendor contracts. A recent Gartner report warns that SPVM leaders who overlook the necessity of incorporating AI governance are exposing their organizations to significant risk.

“Most software and cloud vendor contracts lack explicit commitments that make them accountable for providing responsible AI, while other contracts include disclaimers that remove liability for irresponsible AI systems and output,” said Gartner.

When I speak to a large audience and pose the question “Who in an organization should be accountable for responsible outcomes from AI?”, the answers I hear are quite concerning.

The three most common are “no one,” “we don’t use AI,” and “everyone.” None of these answers is right, and all are worrisome.

The first and most common answer, that no one is accountable for AI model outcomes, is the most alarming. There is no excuse here: it should not, and cannot, be acceptable to settle for zero accountability around AI.

The second answer, we don’t use AI, is laughable considering that AI is already embedded in many of the enterprise software applications that organizations license. Furthermore, it shows that an organization is not tracking AI in its inventory or, even worse, not communicating appropriate and inappropriate uses of AI to its employees. That is a significant problem.

And the final answer, that it’s everyone’s responsibility, is at least a noble one. Everybody who touches an AI model across its lifecycle is indeed accountable. However, if everyone is accountable for governance, is anyone actually accountable for governance? There must be more to AI responsibility than the idea that if everyone handles their own governance, we’ll be fine.


What does it take to be accountable or hold vendors contractually accountable? 

Clearly, there’s a disconnect between how an organization thinks it’s handling its responsibility for AI and the reality. Let me explain what it takes for an organization to actually be accountable.

Value alignment

Those who are accountable for AI governance and ethics have to manage a lot of moving parts, the most pivotal being value alignment within their organizations. This means getting all of their peers to recognize how critically important this work is for the individual and the organization, which is rarely possible without consistent communication and support from the CEO or board of directors. These AI governance leaders also need to help ensure that peers like the CISO are invited to meetings about AI investments.

These AI governance leaders may initially be viewed as a hurdle that slows progress. But funded and empowered AI governance leaders can establish comprehensive AI governance platforms that enable trusted models to be deployed in a fraction of the time. With pre-blessed datasets and vetted processes, AI solution builders know in advance which use cases, data and methods the organization has already approved, as the sketch below illustrates.
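To make that concrete, here is a minimal sketch of what such a pre-approval lookup could look like. The registry structure, names and entries are hypothetical illustrations, not any particular governance product’s API.

```python
# Hypothetical pre-approval registry: use cases mapped to the datasets
# and methods the organization has already vetted.
APPROVED = {
    "invoice-classification": {
        "datasets": {"vendor_invoices_v3"},
        "methods": {"gradient_boosting", "logistic_regression"},
    },
}

def is_preapproved(use_case: str, dataset: str, method: str) -> bool:
    """Return True only if the use case, dataset and method are all pre-blessed."""
    entry = APPROVED.get(use_case)
    return (
        entry is not None
        and dataset in entry["datasets"]
        and method in entry["methods"]
    )

# A solution builder can check before starting work instead of waiting on review.
print(is_preapproved("invoice-classification", "vendor_invoices_v3", "gradient_boosting"))
```

The point of the registry is not the lookup itself but the workflow it enables: builders self-serve against decisions the governance team has already made, rather than queuing for case-by-case review.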

AI model inventory

Aside from value alignment, the person or team in charge of responsible AI needs to keep track of the organization’s AI model inventory. You can’t govern what you can’t see. Therefore, everything that the organization has purchased or built that may have AI or machine learning (ML) in it needs to be tracked, along with the metadata associated with each of those models. 

This metadata is traditionally stored in a factsheet or, as it is sometimes called, an AI model card. It should tell you what the model’s purpose is, who is accountable for it, where the data came from, what it is audited for, how regularly it is audited, the results of those audits, and so on.
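As a rough illustration, a factsheet entry for one model might capture fields like the following. The schema below is a sketch, not IBM’s FactSheets format or any standard model card specification, and every value in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModelFactsheet:
    """Illustrative metadata for one entry in an AI model inventory."""
    model_name: str
    purpose: str               # what the model is for
    accountable_owner: str     # who answers for its outcomes
    data_sources: list[str]    # where the training data came from
    audited_for: list[str]     # e.g. fairness, drift, robustness
    audit_frequency: str       # how regularly it is audited
    last_audit_results: str = "not yet audited"

card = ModelFactsheet(
    model_name="invoice-classifier-v2",
    purpose="Route incoming vendor invoices to the right approval queue",
    accountable_owner="procurement-ai-governance@example.com",
    data_sources=["vendor_invoices_v3"],
    audited_for=["fairness", "drift"],
    audit_frequency="quarterly",
)
print(card.model_name, "is owned by", card.accountable_owner)
```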

Audit

Then, they need to know how to audit these AI models and solutions to monitor whether the technology is behaving the way the organization intended. This is a critical consideration for holding your vendors contractually accountable for their models.
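What an audit actually checks varies by model and risk level, but as one illustrative example, a recurring fairness audit might compare a model’s positive-outcome rates across groups. The metric and threshold below are a sketch under an assumed governance policy, not a universal standard.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups.

    predictions: 0/1 model outputs; groups: group labels aligned with them.
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical threshold agreed with the governance team for this use case.
GAP_THRESHOLD = 0.10

gap = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["A", "A", "A", "B", "B", "B"])
if gap > GAP_THRESHOLD:
    print(f"Audit flag: parity gap {gap:.2f} exceeds threshold {GAP_THRESHOLD}")
```

No single metric tells the whole story; the value of a recurring audit is that it produces concrete, repeatable evidence, which is exactly what you need when writing accountability into vendor contracts.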

Regulations

And then they need to track regulations. The regulatory landscape is constantly changing, especially as it relates to AI. Many countries, states and cities have laws specific to the use and risks of AI, and last year the EU AI Act came into force. Many companies and government agencies have already been successfully sued over their errant models.

AI literacy

There's a recognition that you can have models that are “lawful but awful,” which means you must push into the ethics surrounding AI. And anytime you push into the subject of ethics, you must be a great teacher.

You must be able to teach the people who are building models on your behalf or even buying models on your behalf to do so in a way that reflects your organization's ethics.

Just look at the human value of ‘fairness.’ We all expect AI models to be fair, but whose worldview of fairness is being represented in the AI model? As a society, we cannot agree on a singular worldview of fairness.

In practice, this means that the people responsible for AI governance and ethics programs within their organizations are also finding themselves in charge of AI literacy programs.

These AI literacy programs are not limited to teaching employees how to use AI in their jobs to be more productive. AI literacy starts with learning what relationship the organization wants to have with AI. These programs introduce the core principles that the organization expects to see reflected in any AI solution that is built or bought, and they teach how to operationalize each of those principles across varying levels of risk.

These AI literacy programs are necessarily multi-disciplinary, because earning trust in AI is not strictly a technical problem but a socio-technical one. Building models that reflect human values may be one of the hardest things we as human beings have set out to do. It requires people with backgrounds in all kinds of disciplines.

Being accountable for AI is a significant job, which is why these individuals need a tremendous amount of power and a funded mandate to do this work. It can’t just be a side gig. It takes a lot of work to get these problems right. All organizations should be working to delegate AI accountability in a responsible way.

Establish incentive structures and design

Simply put, you get more of the human behaviors that you measure. What employee behaviors are you incentivizing and measuring as they pertain to the responsible curation of models? Consider not only how you can incentivize and measure the right behaviors, but also how to better embed teaching and learning in your AI governance processes. Many employees are not incentivized to be thoughtful when filling out AI model inventory forms or, worse, they may be totally unaware of the risks of the model they are championing.

In summary, I want you to remember four key takeaways:

  1. Ready or not, AI is being used in your organization right now. Whether or not you are keeping an inventory of the AI models you have procured, by 2026, 80% of software vendors will have embedded AI in their enterprise applications. Be proactive and establish a rigorous AI strategy that enables you to govern AI and operate it efficiently.
  2. AI governance leaders need a funded mandate because AI governance requires a tremendous amount of work, and the nature of that work is growing. Fund your people and give them the power to do this work.
  3. Implement your AI ethically and be intentional about human outcomes. We all expect that the AI models we use, or that are used on us, reflect our own human values: that they will not ‘lie’ to us, that they will be fair and that they are safe to use. Recognize that this is not strictly a technical challenge but a socio-technical one. It will require a holistic approach and expertise in human behavior.
  4. It takes a lot to de-risk AI. Scaled AI requires the right strategy, data, architecture, security and governance. It also requires the right understanding with your vendors. Choose a pragmatic partner to get this right and to make sure that mistakes don’t happen.