
Why do we need to talk about ethics and bias in AI?


I wonder if it is a form of guilt that has started to hit the tech industry. Everywhere, people are talking about ethics, and terms from economics like “surveillance capitalism” have started to creep into the language.

Overall, we, or at least large parts of the industry, have a feeling that something may have become too much, too pervasive, or just not quite what we had dreamed of.

It is probably because digitization has begun to grow up. When I attended a number of debates at the 2018 “Folkemøde” (the Danish Democratic Festival), all of which were about digitization, it came as a surprise that they somehow all ended up in a discussion about ethics.

Typical questions: Have smart cities gone too far? Do we want to share our personal health data with the workplace? Where should we set the limit for digital monitoring?

This humanistic-philosophical point of view was never the intention, but that is how the debates ended up. And it was something quite different from the previous year, when everything was about disruption and how we could become digital as quickly as possible.


How far can we extend digitization?

This year was completely different: the debates were about ethics and how far we can extend digitization.

Some of them were even staged as dilemma “games” where you could test the limits of how far digitization should be allowed to go.

And suddenly we also got into discussions about whether it has all become too much and whether we should say no to more digitization.

So what happened right there? Perhaps a combination of growing concern over digitalization and a rising awareness of digital surveillance, combined with the unveiling of the Cambridge Analytica scandal, which exploded in early 2018.

The wonderful activist vision from the Internet’s childhood of democratization and free communication was suddenly replaced by deep skepticism towards the biggest players on that same network, and we began to think a little more.

Furthermore, we probably also began to understand that the activist vision of freedom disappeared in parallel with the emergence of new economic models.

Namely, models that rely on the economic value of being able to predict your behavior and mine.

In other words, it is you and me who are the product of this new economic agenda. And it is artificial intelligence that monitors, analyzes, and predicts the patterns.

Artificial intelligence

This new agenda has to a great extent been catalyzed by various technologies and methods in the AI field.

Artificial intelligence, along with blockchain, is probably one of the IT industry’s favorite buzzwords of recent years, and there seems to be no limit to what these technologies can be used for or what value they can bring to individuals, businesses, and communities.

And we already use it every day: on our phones, on social media, and in lots of other contexts, often without thinking about it.

But there is also a downside: decision-support systems in the traditional technology world often carry a remarkable authority.

It seems perfectly natural for a human being to make mistakes, but when a computer performs a calculation, we always assume the result is correct.

But artificial intelligence, machine learning, neural networks, or whatever other terms we use for this category of systems, are trained on real-world data and are in reality just advanced statistical models.

They are not logical, transparent “if-then rules” that explain how the algorithm reaches a given conclusion; they are only models of the data.

But in return, there is plenty of data: real-world data used to train the models to learn patterns, which they can then use to evaluate new data.

No data, no artificial intelligence. In other words, we are in the midst of a paradigm shift from procedural programming to data-driven algorithms.
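To make the shift concrete, here is a minimal sketch of the contrast, using a hypothetical loan-approval scenario and assuming scikit-learn is available:

```python
# A hypothetical loan-approval example, assuming scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Procedural programming: an explicit, transparent if-then rule.
def approve_loan_rule(income, debt):
    return income > 30_000 and debt < 10_000

# Data-driven algorithm: the "rule" is whatever pattern the model finds
# in historical human decisions -- including any bias they contain.
X = [[45_000, 5_000], [20_000, 12_000], [60_000, 2_000], [25_000, 9_000]]
y = [1, 0, 1, 0]  # past human decisions: 1 = approved, 0 = rejected

model = LogisticRegression().fit(X, y)

print(approve_loan_rule(35_000, 4_000))  # True, and we can read exactly why
print(model.predict([[35_000, 4_000]]))  # learned from data; the "why" is opaque
```

The rule’s logic can be read and audited directly; the model’s logic exists only as fitted coefficients, which is exactly why transparency becomes a concern.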


Human prejudice is encoded into the algorithms

The problem is that when we train the machines on real-world observations (often data based on human decisions), we often inadvertently encode human prejudice, bias, and wrong decisions into our algorithms.

After all, we probably already know that humans are biased and driven by often illogical, and perhaps even unfair, attitudes towards other individuals.

“That is just the way it is.” It is only human.

However, when the machine gains this algorithmic authority, we tend to believe more in it than in us mere humans.

Thus, we must learn to be skeptical, and we need to know where to draw the line for the algorithms.

The bias problem can have quite unpleasant consequences. For example, the US legal system uses several algorithms to calculate risk profiles for people charged with criminal activity.

We know that these algorithms are biased, that they assign far greater risk to people of color than to white people, and there are numerous examples of algorithms interfering in the judicial process.

The primary reason, of course, is that these algorithms are trained on historical data: past convictions, behavioral data, and more.

And since judges, though they probably won’t admit it directly, are likely biased and will sentence certain groups to harsher penalties, their decisions and behaviors turn into algorithmic authority as soon as the data becomes an algorithm.
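To illustrate the mechanism, here is a minimal sketch with purely hypothetical data, showing how a model trained on biased historical labels reproduces that bias for otherwise identical individuals (again assuming scikit-learn):

```python
# Hypothetical data: a model trained on biased historical "high risk" labels.
from sklearn.tree import DecisionTreeClassifier

# Features: [group, prior_offenses]; labels are past human risk calls.
# Group 1 was systematically labeled high risk more often than group 0.
X = [[0, 1], [0, 2], [0, 1], [1, 1], [1, 2], [1, 1]]
y = [0, 1, 0, 1, 1, 1]

model = DecisionTreeClassifier().fit(X, y)

# Two individuals identical in every respect except group membership:
print(model.predict([[0, 1], [1, 1]]))  # [0 1]: the historical bias is learned
```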

But… it should just be mentioned that it can go the other way too:

Artificial intelligence is not in itself the cause of these imbalances; it can also be used to detect and eliminate the weaknesses of human decision-making.
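As a sketch of that other direction, the simplest possible bias check is to compare how often a model flags each group, sometimes called the demographic parity difference. The data below is hypothetical:

```python
# A simple bias check on hypothetical predictions: compare positive-prediction
# rates across groups (the "demographic parity" difference).
def positive_rate(predictions, groups, group):
    flagged = [p for p, g in zip(predictions, groups) if g == group]
    return sum(flagged) / len(flagged)

predictions = [1, 0, 0, 0, 1, 1, 1, 0]  # model's "high risk" flags
groups      = [0, 0, 0, 0, 1, 1, 1, 1]  # group membership per person

gap = positive_rate(predictions, groups, 1) - positive_rate(predictions, groups, 0)
print(f"Demographic parity gap: {gap:.2f}")  # 0.50; far from 0 signals skew
```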

Why is it important to talk about ethics and bias in AI?

That is why we need to start a discussion about how best to use these algorithms, how to tell whether they contain bias, whether we are using them correctly, and whether we are setting boundaries in the right places.

The challenge is that handing control to the algorithms can have quite large consequences. Not just in the justice system, but in society as a whole.

That is why we have started talking about ethics. The Danish Council of Ethics’ definition of ethics is:

“Ethical questions are about how you treat other people and other living beings. Ethics is about what the good life is and the importance of taking others into account and not just looking at oneself and one’s own needs.”

… and you can fittingly play with the formulation here and replace the first “you” with “artificial intelligence” or just “machines”.

The more digitization creeps into our world, the more decisions it will make, or help us make, and the greater the ethical issues we may face if we are not very careful.

If I were to mention the areas where technology confronts us with ethical challenges or dilemmas, the first questions that come to mind are:

  • Can we understand how it changes our behavior?
  • Can we tell it is there when it is there?
  • Can we tell whether we are seeing the truth?
  • Can we trust those who deliver it?
  • Can we be sure that it is not prejudiced?
  • Can we tell when it has been taken too far?
  • Can we tell if it works?
  • Can we live with the removal of the human element?
  • Can we be sure that it is making the right decisions?
  • Do we know how to navigate the world of algorithms as individuals?

These are actually some pretty complex questions, and often there are no simple answers.

After all, an ethical dilemma by definition does not have a clearly defined answer (otherwise it would be a matter of morality), so we often have to discuss our way to a consensus.

And if that fails, we must settle for the discussion itself. That is the most important thing.

If you have any further questions regarding ethics and bias in AI, please do not hesitate to contact me at escherich@dk.ibm.com or read more here.

 
