04/05/2021 | Written by: Wouter Denayer and Wouter Oosterbosch
Artificial Intelligence (AI) has the potential to solve many of the most difficult problems of today and tomorrow, impacting every aspect of our lives. Most people are likely to agree that AI should be used to solve problems and make all humans more prosperous and healthier. However, achieving this is easier said than done, especially when you take human nature into consideration. How do we want to use artificial intelligence in our society? How do we use new technologies in an ethically and democratically responsible way?
AI and human nature
To put it another way: AI experts believe that building an AI has the potential to teach us more about human nature than about AI. Why? It all comes down to the inherent biases that allow us to simplify the world around us so that we are able to model it. When programming an AI, those inherent biases are unintentionally replicated, which would be fine if all humans were logical and rational. Instead, our current AI models show us a reflection of our imperfect selves. For example, we suffer from anchoring bias, making decisions that rely mostly on the first piece of information we receive on a subject, and from confirmation bias, the tendency to focus on information that confirms our preconceptions about a topic. The ultimate goal, of course, is not to advance AI per se, but to advance human beings and our values through the use of technologies, including AI.
“The appearance of artificial intelligence in our daily lives constitutes a unique opportunity to focus on the essentials of women and men, and of our society. What skills are we going to develop? What society do we want? These questions are underpinned by a reflection encompassing each citizen on the common values that we want to carry.”
– Nathanaël Ackerman, AI4Belgium Lead and AI expert at SPF BOSA
We need to find a way to design an aspirational AI that helps humanity to move forward. We believe that this will become a beneficial relationship, with machines augmenting humans.
Ethical AI is the way forward. It achieves results that benefit both humans and organisations, reducing risks and adverse outcomes for all stakeholders while prioritising human agency and wellbeing.
When it comes to AI, at IBM we look at each project from five different perspectives: accountability, explainability, fairness, value alignment, and user data rights, as addressed in the next questions.
Who is responsible for the AI’s actions?
AI developers, data scientists, designers, and leaders are all responsible for considering the AI’s design, development, decision process, and outcomes. This means they need to consider not only the primary impact of the AI, but also the secondary impact, which might not be visible until the AI goes to market, and the tertiary impact, which is often unintentionally negative. For example, clinical prediction models trained on potentially biased data could produce unfair outcomes for patients, which is why IBM Research is investigating ways to help reduce bias in healthcare AI, as you can read in this blog >
Is the outcome of the AI system explainable?
AI should be designed in such a way that humans can easily perceive, detect, and understand its decision-making process. This means moving away from black box algorithms, where even its designers cannot clearly explain how an AI came to a specific decision, to an AI that gives transparent outcomes.
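To make the idea of a transparent outcome concrete, here is a minimal, hypothetical sketch in Python: a simple linear scoring model whose per-feature contributions can be listed alongside the decision, so a human can see exactly why it was reached. The feature names, weights, and threshold are invented for illustration and do not come from any real system.

```python
# Hypothetical glass-box decision model: every decision can be broken
# down into per-feature contributions that a human can inspect.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def explain_decision(applicant):
    """Return the decision together with each feature's contribution."""
    contributions = {name: WEIGHTS[name] * applicant[name]
                     for name in WEIGHTS}
    score = sum(contributions.values())
    return {"approved": score >= THRESHOLD,
            "score": score,
            "contributions": contributions}

result = explain_decision({"income": 3.0, "debt": 1.0, "years_employed": 2.0})
# score = 1.5 - 0.8 + 0.6 = 1.3, so the application is approved,
# and the contributions dict shows why.
```

A black-box model might make the same decision, but it could not produce the `contributions` breakdown; that breakdown is what makes the outcome reviewable by the people affected by it.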
Is the AI system fair?
AI must be designed to minimise negative bias and promote inclusive representation leading to fairer outcomes. However, this raises another ethical query: what is fair? Should we focus on what people deserve, what people need, or provide equal benefit to everyone? Instead of defining fairness, one solution is to reduce the sources of unfairness. For example, avoiding subjective measurement of data, self-fulfilling predictions, and reinforcement in feedback loops. By doing so, not only can we avoid creating AI that replicates or amplifies our own biases, but we can also use AI to help humans themselves be more fair.
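As a small illustration of checking for one source of unfairness in practice, the hypothetical sketch below computes the demographic parity gap: the difference in positive-outcome rates between groups. The group names and outcomes are invented; a real audit would use larger samples and several complementary fairness metrics.

```python
# Hypothetical fairness check: compare positive-outcome rates across groups.
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Invented example data: 1 = favourable decision, 0 = unfavourable.
outcomes = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
gap = demographic_parity_gap(outcomes)  # 0.75 - 0.25 = 0.5
```

A gap this large would be a signal to investigate the training data and decision process, not proof of unfairness by itself; which metric matters depends on the definition of fairness chosen for the context.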
Does the AI follow our values, or has it been programmed with biases?
AI should be designed to align with the norms and values of your user group and context. This requires all AI developers, data scientists, designers, and leaders to consider the context in which the AI will be used, including how the context could evolve over time, to avoid reinforcing negative impacts.
Should we collect data in this way?
AI designers should consider the data an AI collects, analyses, and stores to ensure that these processes do not violate user privacy in any way. This includes obtaining clear consent from users for the use of their data in a particular way. One connected debate asks whether it is acceptable to reuse the same data for another purpose. Most would say no: any new purpose should be validated with the users.
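One way to operationalise purpose-bound consent is to record, per user, the purposes they have agreed to and check that record before any new use of their data. The sketch below is a hypothetical minimal example; real consent management involves far more (revocation, audit trails, legal review).

```python
# Hypothetical consent registry: each user maps to the set of purposes
# they have explicitly agreed to.
CONSENT = {
    "user123": {"model_training"},
}

def may_use(user_id, purpose):
    """Return True only if the user consented to this specific purpose."""
    return purpose in CONSENT.get(user_id, set())

may_use("user123", "model_training")  # consented: allowed
may_use("user123", "marketing")       # new purpose: blocked until re-consented
```

The key design choice is that consent is checked per purpose, so data collected for one use cannot silently flow into another.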
“AI is just one way to deliver transformative change with a human touch.”
Petra De Sutter, Deputy Prime Minister of Belgium, recently spoke to Hanne Decoutere about how to deal with change as a constant, keeping humanity at the core of change, and how to accelerate digital transformations by using new technologies for a more sustainable future for us all.
Watch the full interview here >
The next steps for human-centred AI
The tools and techniques that we need to ensure AI is human centred already exist and are evolving all the time. At IBM, for example, we focus on:
- Neurosymbolic AI – we integrate neural and symbolic techniques to build AI that can perform complex tasks by understanding and reasoning more like we do.
- AI hardware – our digital and analogue accelerators drive massive improvements in computational power while remaining energy-efficient.
- Secure, trusted AI – we build tools to help you ensure that trust and security are at the core of any AI you put out into the world.
- AI engineering – our tools help AI creators reduce the time they spend training, maintaining, and updating their models.
Want to learn more about AI?
IBM has a range of interesting materials available:
Advancing AI ethics beyond compliance – a report on AI ethics and how organisations can proactively address the topic while addressing their competitive future.
Proven concepts for scaling AI – expert insights from the IBM Institute for Business Value on how to launch and scale new AI projects.
The Call for Code Global Challenge encourages developers and problem solvers to build open-source solutions for projects that address social and humanitarian issues.