I’m just going to say it: many organizations use artificial intelligence (AI), whether they realize it or not. The real question is how: how are organizations using AI, and how do they know they are doing it responsibly?
Phaedra Boinodiris, the global leader for Trustworthy AI for IBM Consulting®, says that earning trust in AI is a socio-technical challenge, with the most challenging part being the human side. The comprehensive approach necessary can be broken down into three components: people, processes and tools, along with some key tenets for approaching them within an organization.
When thinking about people, the focus is on building the organizational culture required to curate AI responsibly. AI governance processes are needed to inventory models, gather appropriate metadata, conduct risk assessments and more. Organizations must also have the right tools in their AI engineering frameworks, used throughout the AI lifecycle, to help ensure that these models reflect the correct intent.
Boinodiris points out that of the three components, people are the hardest to get right when building responsible AI in an organization.
“Getting the right organizational culture is challenging. And when thinking about, what is that right organizational culture required to curate AI responsibly, there are some key tenets,” said Boinodiris.
The first is approaching the space with “tremendous humility,” because members of an organization come into it with plenty of learning and unlearning to do. By unlearning, Boinodiris means rethinking who gets a seat at the table during these conversations. The discussion around AI must include individuals from multiple disciplines with varying backgrounds in order to craft AI in a holistic manner.
That means that having a growth mindset is going to be key to fostering responsible AI. Boinodiris adds that organizations need to give their people psychological safety and a safe space to conduct what can be difficult conversations surrounding the subject of AI.
The second tenet is to recognize that people come from different world experiences and all perspectives matter. Organizations must recognize the diversity of their workforce and the people who are building the AI models and governing them. It is not just their gender, race, ethnicity or sexual orientation. It's about perspective and lived world experience.
“What I earnestly mean is that people who have lived different world experiences must be at the table when thinking about things like: Is this appropriate? Is this going to solve the problem? Is this the right data? What could potentially go wrong? What could harm look like?” said Boinodiris.
Finally, the people or teams building and governing these AI models for an organization need to be multidisciplinary. In practice, that means assembling individuals with varying backgrounds, such as sociologists, anthropologists and legal experts, which is key to creating AI responsibly.
According to Boinodiris, one of the common myths around the subject of AI is that 100% of the effort is coding. She says that this is not true.
“We know well that over 70% of the effort is just figuring out, is this the right data to use? What's fascinating about data, my favorite definition, is that it’s an artifact of the human experience,” she said.
Humans generate the data and build the machines that generate the data, but there must be a recognition that everyone has biases; 188 of them, to be exact, according to Boinodiris. Humans have had these biases since the dawn of time, and for good reason.
Boinodiris compares AI to a mirror in the way that it reflects a person’s biases back to them. And what it comes down to is being brave enough to look introspectively into that so-called mirror and deciding whether the reflection aligns with the organization’s values.
Organizations in charge of these AI models need to be transparent about why they made a particular decision regarding data, chose a certain approach, or picked one methodology over another. Teams should create a fact sheet for each AI solution that details important information about the model: its intended use, where the data came from, the methodology, who is accountable, when and how often the model is audited, what it is audited for, the results of those audits, and so on.
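To make the idea concrete, a fact sheet can be captured as structured data and checked for completeness before a model ships. The sketch below is a minimal illustration only: the field names, model name, and checking helper are assumptions for demonstration, not an official schema (IBM's AI FactSheets define their own fields).

```python
# Illustrative fact sheet for a hypothetical AI model.
# All field names and values here are assumptions for demonstration.
fact_sheet = {
    "model_name": "loan-review-classifier",            # hypothetical model
    "intended_use": "Rank loan applications for human review",
    "out_of_scope_use": "Automated final approval decisions",
    "data_sources": ["internal-applications-archive"],  # data provenance
    "methodology": "Gradient-boosted decision trees",
    "accountable_owner": "risk-analytics-team",
    "audit_frequency": "quarterly",
    "audited_for": ["demographic parity", "accuracy drift"],
}

def missing_fields(sheet, required):
    """Return the required fields that are absent or empty."""
    return [field for field in required if not sheet.get(field)]

REQUIRED = ["intended_use", "data_sources", "methodology",
            "accountable_owner", "audit_frequency"]

print(missing_fields(fact_sheet, REQUIRED))  # → []
```

A check like this could gate deployment pipelines, so a model without a complete fact sheet never reaches production.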
Separately, people need to be self-aware enough to recognize when that reflection does not align with their values, and to change their approach when it does not.
“So, when you hear people say, ‘my AI model has no bias in it whatsoever’, just remember this: All data is biased. The key is to be transparent about why you felt this data was the most important thing to have in your model,” said Boinodiris. And remember to be constantly introspective as our points of view, our ethos, will change over time.
Trust in AI is earned, not given. Have difficult conversations with team members about where their biases might stem from and recognize that creating a responsible and dependable AI model isn’t linear and requires hard work.