05: The ethics of AI and automation
Decisions made today will have ramifications for years to come
A question of ethics

Gen AI, though a relatively new technology, has brought AI to a critical moment in its history, one expected to redefine the very nature of business. In this climate of rapid and expanding AI investment, particularly given the possibilities presented by gen AI and foundation models, AI ethics is essential to ensuring the longevity, compliance, and validity of those investments. We believe trust starts with a culture of ethics within the company, but scalable AI also requires governance that operates end to end. The decisions that companies make today about where and how to bring AI into the mix will have ramifications for years to come.

Report: The CEO’s guide to generative AI

Fact: While 79% of executives surveyed say AI ethics is important to their enterprise-wide AI approach, less than 25% have operationalized common principles of AI ethics.
Instituting guardrails

The ethics and governance of AI must dominate organizational discussions from the board level down, because there is an important distinction between what you can do and what you should do. This applies both to how AI is used internally, weighing the potential impact on workers, and externally, weighing the impact on customers and the wider world.

“One of the most critical things that a company can do before they start rolling out AI is to have guardrails in place to know where and when that AI should be used, including the impact of AI on job roles from the lowest to the highest level.” —Melissa Long Dolson

Foundational model health, employee and customer privacy protections, maintenance of intellectual property, risk management, and regulatory compliance must all be part of those guardrails. There should also be a clear vision of what protections and layers of accountability will be put in place. For example, IBM puts forth three principles when it comes to responsible AI:


  • The purpose of AI is to augment human intelligence
  • Data and insights belong to their creator
  • New technology, including AI systems, must be transparent and explainable

 



AI governance helps build responsible AI workflows
New roles and conversations

You must also consider new roles to govern AI and IT automation effectively. For example, should you hire a chief AI ethics officer? Will you also need an AI ethicist or an AI ethics council? Or perhaps you need all three. Beyond these oversight roles, new positions are quickly emerging, including AI prompt engineers, deep learning engineers, AI chatbot developers, AI designers, and AI auditors, to name just a few. These roles will all need clear guidelines on how to operate responsibly.

These complex conversations don’t have simple answers, but they do need to happen now. Seismic changes are arising from AI: the World Economic Forum predicts that by 2025, these new technologies will have disrupted 85 million jobs globally. However, the Forum also estimates the creation of 97 million new roles.1 This emerging workforce is one in which humans, augmented by AI, will need to be guided by rules based on a strict sense of ethics.

For a deeper dive into this topic, we invite you to explore Responsible AI & ethics, part of IBM’s guide to gen AI.

Ethics can’t be delegated
 
Chapter 06 →
How to get started on IT automation with IBM