What an organization needs to curate AI responsibly


Authors

Teaganne Finn

Staff Writer

IBM Think

I’m just going to say it: many organizations use artificial intelligence (AI) whether they realize it or not. The question is how: how are they using AI, and how do they know they are doing it responsibly?

Phaedra Boinodiris, the global leader for Trustworthy AI at IBM Consulting®, says that earning trust in AI is a socio-technical challenge, and the human side is the hardest part. The comprehensive approach it requires can be broken down into three components: people, processes and tools, along with some key tenets for approaching each within an organization.

When thinking about people, the focus is on building the organizational culture required to curate AI responsibly. AI governance processes are needed to inventory models, gather appropriate metadata, conduct risk assessments and more. Organizations must also have the right tools in their AI engineering frameworks, used throughout the AI lifecycle, to help ensure that models reflect the intended behavior.


Three key tenets

Boinodiris points out that of the three components, people are the most difficult part of getting responsible AI right in an organization.

“Getting the right organizational culture is challenging. And when thinking about, what is that right organizational culture required to curate AI responsibly, there are some key tenets,” said Boinodiris.

Tenet 1: Humility 

The first is approaching the space with “tremendous humility,” because members of an organization come into it with plenty of learning and unlearning to do. By unlearning, Boinodiris means rethinking who gets a seat at the table in these conversations. The discussion around AI must include individuals from multiple disciplines with varying backgrounds in order to craft AI holistically.

That means that having a growth mindset is going to be key to fostering responsible AI. Boinodiris adds that organizations need to give their people psychological safety and a safe space to conduct what can be difficult conversations surrounding the subject of AI.

Tenet 2: Varying world views

The second tenet is recognizing that people come from different world experiences and that all perspectives matter. Organizations must recognize the diversity of the workforce building and governing their AI models: not just gender, race, ethnicity or sexual orientation, but perspective and lived experience.

“What I earnestly mean is that people who have lived different world experiences must be at the table when thinking about things like: Is this appropriate? Is this going to solve the problem? Is this the right data? What could potentially go wrong? What could harm look like?” said Boinodiris.

Tenet 3: Multidisciplinary teams

Finally, the people and teams building and governing an organization’s AI models need to be multidisciplinary. In practice, that means a team of individuals with varying backgrounds, such as sociologists, anthropologists and legal experts, is key to creating AI responsibly.

A recognition of bias

According to Boinodiris, one common myth about AI is that 100% of the effort is coding. She says this is not true.

“We know well that over 70% of the effort is just figuring out, is this the right data to use? What's fascinating about data, my favorite definition, is that it’s an artifact of the human experience,” she said.

Humans generate the data and build the machines that generate the data, but there must be a recognition that everyone has biases; 188 of them, to be exact, according to Boinodiris. Humans have had these biases since the dawn of time, and for good reason.

Boinodiris compares AI to a mirror in the way that it reflects a person’s biases back to them. And what it comes down to is being brave enough to look introspectively into that so-called mirror and deciding whether the reflection aligns with the organization’s values.

Drive transparency

Organizations in charge of these AI models need to be transparent about why they made a particular decision regarding data, chose a certain approach, or picked one methodology over another. Teams should create a fact sheet for each AI solution that details important information about the model.

For instance: the intended use, where the data came from, the methodology, who is accountable, when the model is audited and at what frequency, what it is audited for, and the results of those audits.
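A fact sheet like the one described above can be captured as a simple structured record. The sketch below is illustrative only: the field names, the example model, and the audit-cadence check are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    """Hypothetical fact-sheet record for an AI model (field names are illustrative)."""
    model_name: str
    intended_use: str
    data_sources: list          # where the data came from
    methodology: str            # modeling approach used
    accountable_owner: str      # who is accountable for the model
    audit_frequency_days: int   # declared audit cadence
    last_audit_findings: dict = field(default_factory=dict)

    def is_audit_overdue(self, days_since_last_audit: int) -> bool:
        # Flag the model if it has gone longer than its declared cadence without review
        return days_since_last_audit > self.audit_frequency_days

# Hypothetical example entry
sheet = ModelFactSheet(
    model_name="loan-prescreening-v2",
    intended_use="Pre-screening consumer loan applications; not for final decisions",
    data_sources=["2019-2023 internal application records"],
    methodology="Gradient-boosted decision trees",
    accountable_owner="risk-governance@example.com",
    audit_frequency_days=90,
)
print(sheet.is_audit_overdue(120))  # a 120-day gap exceeds the 90-day cadence
```

Keeping these records machine-readable makes it straightforward to answer the audit questions above on demand rather than reconstructing them after the fact.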

Separately, people need to be self-aware enough to recognize when that reflection does not align with their values, and to change their approach when it doesn’t.

“So, when you hear people say, ‘my AI model has no bias in it whatsoever’, just remember this: All data is biased. The key is to be transparent about why you felt this data was the most important thing to have in your model,” said Boinodiris. And remember to stay introspective: our points of view, our ethos, will change over time.

Trust in AI is earned, not given. Have difficult conversations with team members about where their biases might stem from and recognize that creating a responsible and dependable AI model isn’t linear and requires hard work.
