
Why do we need to talk about ethics and bias in AI?


I wonder if it is a form of guilt that has started to hit the tech industry. Everywhere, the talk turns to ethics, and terms like surveillance capitalism have started to creep into the language.

Overall, we – or at least large parts of the industry – have a feeling that something may have become too much, too pervasive, or just not quite what we had dreamed of.

It is probably because digitization has begun to grow up. When I attended a number of debates at the 2018 “Folkemøde” (the Danish Democracy Festival), all of which were about digitization, it came as a surprise that they somehow all ended up in a discussion about ethics.

Typical questions: whether smart cities are going too far, whether we want to share our personal health data with our employer, and where to set the limit for digital surveillance. This humanistic-philosophical angle was never the intended topic, but the debates ended up there anyway. And that was something new compared with the previous year, when everything was about disruption and how we could become digital as quickly as possible.


How far can we extend digitization?

This year was completely different: the debates were explicitly about ethics and how far we can extend digitization. Some of them were even staged as real dilemma “games” where you could test the limits of how far digitization should be allowed to go. Suddenly we also found ourselves discussing whether it has all become too much and whether we should say no to more digitization.

So, what happened there? Perhaps a combination of growing concern about digitalization and a realization that digital surveillance is on the rise, combined with the unveiling of the Cambridge Analytica scandal, which exploded in early 2018.

Artificial intelligence

This new agenda has to a great extent been catalyzed by various technologies and methods from the AI field. Artificial intelligence, along with blockchain, is probably one of the IT industry’s favorite buzzwords these days, and there seems to be no limit to what these technologies can be used for and what value they can bring to individuals, businesses, and communities. We already use AI every day – on our phones, on social media, and in lots of other contexts – often without thinking about it.


But there is also a downside: decision-support systems in the traditional technology world often carry a remarkable authority.

We accept that it is perfectly natural for a human being to err, but when a computer performs a calculation, we tend to assume that the result is correct.

But artificial intelligence, machine learning, neural networks, or whatever other terms we use for this category of systems, are trained on real-world data and are just advanced statistical models.


Human prejudice is encoded into the algorithms

The problem is that when we train the machines on real-world observations – often data based on human decisions – we often inadvertently encode human prejudice, bias, and bad decisions into our algorithms.
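To see how easily this happens, here is a minimal sketch in Python – on purely synthetic data, with a hypothetical sensitive attribute called group, not any real dataset or system – of a model faithfully reproducing the bias baked into its training labels:

```python
# Minimal sketch on synthetic data (not a real dataset or system).
# "score" is a legitimate merit-like signal; "group" is a hypothetical
# sensitive attribute. In the fabricated history, group 1 was approved
# less often than group 0 at exactly the same score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
score = rng.normal(0.0, 1.0, n)        # merit-like signal
group = rng.integers(0, 2, n)          # sensitive attribute: 0 or 1

# Biased historical decisions: same score, lower approval odds for group 1.
p_approve = 1.0 / (1.0 + np.exp(-(score - 1.5 * group)))
approved = (rng.random(n) < p_approve).astype(int)

# Train naively on the historical outcomes, sensitive attribute included.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical scores, differing only in group:
same_score = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_score)[:, 1])
# -> roughly [0.5, 0.18]: the model has learned the human bias, not merit.
```

Nothing in the code is malicious; the model simply learns the pattern it is shown, bias included – and merely dropping the group column would not be enough if other features correlate with it.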

After all, we probably already know that humans are biased and often driven by illogical, perhaps even unfair, attitudes toward other people.

“That is just the way it is” – it is only human. However, when the machine acquires this algorithmic authority, we tend to trust it more than we trust actual humans. We must therefore learn to be skeptical, and we need to know where to draw the line for the algorithms.

The bias problem can have quite unpleasant consequences. For example, the US legal system uses several algorithms to calculate risk profiles for people charged with crimes. We know that these algorithms are biased – they assign far greater risk to people of color than to white defendants – and there are numerous examples of algorithms interfering in the judicial process.

The primary reason, of course, is that these algorithms are trained on historical data: past convictions, behavioral data, and more. Since judges – though they will probably not admit it directly – are likely biased and sentence certain groups to harsher penalties, their decisions and behaviors acquire algorithmic authority the moment they are turned into training data. However, it should be mentioned that it can go the other way too: artificial intelligence is not in itself guilty of these imbalances, and it can also be used to detect and eliminate the weaknesses of human decision-making.

Why is it important to talk about ethics and bias in AI?

That is why we need to start a discussion about how best to use the algorithms, how to tell whether they contain bias, whether we are applying them correctly, and whether we are setting the boundaries in the right places. The challenge is that handing decisions over to algorithms can have quite large consequences – not just in the justice system, but in society as a whole.
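One way to ground that discussion is to measure. As a minimal sketch – with made-up numbers and a hypothetical group attribute, not any particular vendor’s system – one of the simplest checks is to compare error rates across groups, in the spirit of the published audits of criminal risk scores:

```python
# Minimal sketch on made-up numbers: comparing false positive rates
# across groups. All arrays here are hypothetical illustrations.
import numpy as np

def false_positive_rate(y_true, y_pred, mask):
    """Share of actual negatives in the subgroup that were flagged positive."""
    negatives = mask & (y_true == 0)
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])   # e.g. re-offended or not
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0])   # e.g. flagged "high risk"
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical sensitive attribute

for g in (0, 1):
    fpr = false_positive_rate(y_true, y_pred, group == g)
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A large gap between the groups is exactly the kind of imbalance
# that should trigger a closer look at the model and its training data.
```

Equal false positive rates is only one of several possible fairness criteria, and in general they cannot all be satisfied at once – which is precisely why the boundaries have to be set through discussion rather than by the mathematics alone.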

That is why we have started talking about ethics. The Danish Council of Ethics defines ethics as follows:

“Ethical questions are about how you treat other people and other living beings. Ethics is about what the good life is, and about the importance of considering others rather than looking only at oneself and one’s own needs.”

One could fittingly play with the wording here and replace the first “you” with “artificial intelligence” or simply “machines”.


The more digitization creeps into our world, the more decisions it will make – or help us make – and the greater the ethical issues we may face if we are not very careful.

If I were to name the areas where technology confronts us with ethical challenges or dilemmas, the first questions that come to mind are:

  • Can we understand how it changes our behavior?
  • Can we tell it is there when it is there?
  • Can we tell whether what we see is the truth?
  • Can we trust those who deliver it?
  • Can we be sure that it is not prejudiced?
  • Can we tell when it has been taken too far?
  • Can we tell if it works?
  • Can we live with the removal of the human element?
  • Can we be sure that it makes the right decisions?
  • Do we know how to navigate the world of algorithms as individuals?

These are complex questions, and often there are no simple answers. After all, an ethical dilemma by definition does not have a clearly defined answer (otherwise it would be a question of morality), so often we have to discuss our way to a consensus.

And if that fails, we must settle for the discussion itself. That is the most important thing.

If you have any further questions regarding ethics and bias in AI, please do not hesitate to contact me at escherich@dk.ibm.com or read more here.

Executive Innovation Architect
