
The code of ethics for AI and chatbots that every brand should follow


Key Points:
– Businesses often overlook important issues related to morals and ethics of chatbots and AI.
– Customers need to know when they are communicating with a machine and not an actual human.
– Ownership of information shared with a bot is another key ethical consideration and can create intellectual property issues.
– The privacy and protection of user data is paramount in today’s interconnected world.



(Read the full article, “Ethics And Artificial Intelligence With IBM Watson’s Rob High.” You can also listen to The Modern Customer Podcast with Rob High.)

Businesses are rapidly waking up to the need for bots. From automating basic communications and customer service, to reducing costs and providing a platform for conversational commerce, bots offer many new opportunities to delight and better serve customers.

Chatbots can offer 24/7 customer service, engaging millennials and answering their queries whenever they arrive. Millennials in particular are impatient when engaging with brands and expect real-time responses. More than 22% of millennials expect a response within 10 minutes of reaching out to a brand via social media, according to a recent study, and 52% will abandon an online purchase if they can’t find a quick answer. The need for speed in customer service has never been higher, and leading brands like Staples are increasingly turning to chatbots to meet it.

While chatbots are the most viable solution to this emerging need, businesses often overlook important moral and ethical issues around bots and AI. Bot ethics is a complex topic spanning a wide area, including privacy, data ownership, abuse and transparency.

Rob High, CTO of IBM Watson, was recently featured in an article titled “Ethics And Artificial Intelligence With IBM Watson’s Rob High.” In it, Rob argues that to keep AI ethical, it needs to be transparent. When customers interact with a brand’s chatbot, for example, they need to know they are communicating with a machine and not an actual human.

AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.
— Rob High, CTO IBM Watson

Ethics form the foundation of how a bot is built and, more importantly, dictate how a bot interacts with users. How a bot behaves shapes how an organization is perceived: unethical behavior can lead to consumer mistrust and litigation, while ethical bots can promote brand loyalty and help boost profit margins.

1. Who should a bot serve?

When building a bot, an organization must decide whom the bot primarily serves: the needs of the business or the needs of the customer? Amir Shevat, Director of Developer Relations at Slack, discusses this topic in his blog post “Hard questions about bot ethics.”

Here, you must determine the exact purpose and business value of the bot. A bot built mainly to provide recommendations to customers can only be ethical if it meets the needs of those customers, whereas a bot built for internal business improvement should be made to suit the company’s needs.

In general, whether or not a bot is customer-facing, an ethical organization should always put the needs of the customer before the needs of the business. This means providing the product best suited to those customers, rather than the one with the best profit margin or the speediest implementation. An option for users to provide feedback on the service will help detect issues, improve customer satisfaction and maintain ethical behavior. Bots that use machine learning and algorithms to display product offerings or recommendations should also have regular health checks built in for this exact purpose.
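As a rough illustration of the feedback loop and health check described above, the sketch below collects post-chat ratings and flags the bot for review when average satisfaction falls below an assumed threshold. The class name, rating scale and threshold are all hypothetical choices, not part of any real framework.

```python
from statistics import mean

HEALTH_THRESHOLD = 3.5  # assumed minimum acceptable average rating (scale 1-5)

class FeedbackMonitor:
    """Hypothetical monitor: collects user ratings, flags drops in satisfaction."""

    def __init__(self, threshold=HEALTH_THRESHOLD):
        self.threshold = threshold
        self.ratings = []

    def record(self, rating):
        """Store a 1-5 rating left by a user after a conversation."""
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings.append(rating)

    def health_check(self):
        """Return True while the bot is meeting users' needs; False means
        it should be reviewed (e.g. recommendations drifting off target)."""
        if not self.ratings:
            return True  # no data yet; nothing to flag
        return mean(self.ratings) >= self.threshold

monitor = FeedbackMonitor()
for r in [5, 4, 2, 1, 2]:
    monitor.record(r)
print(monitor.health_check())  # average is 2.8, so the check fails: False
```

In practice the "health check" would be richer (click-through on recommendations, escalation rates), but the principle is the same: measure whether the bot serves the customer, on a schedule, and act when it does not.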

2. Am I talking to a bot or a human?

Building trust between humans and machines is just like building trust between humans. Brands can build trust by being transparent, aligning expectations to reality, learning from mistakes and continually correcting them, and listening to customer feedback.

When building a bot, transparency is a critical consideration. This boils down to the question – is it clear whether the user is talking to a bot or a human? Customers are savvy enough to be able to tell the difference and expect brands to be honest with them. Customers don’t expect bots to be perfect, but they want to know what your bot can and cannot do, and that your bot is reliable — within reason. Transparency about both failure and success can build trust faster than virtually any other approach.

To work on transparency and reliability, start by asking yourself some basic questions like:

  • Who is the bot interacting with?
  • Where is the bot being used?
  • What type of information is being discussed? Is any of it sensitive?
  • What are the implications of the interaction?

Where sensitive information (like bank details) is being communicated, or where interactions are life-altering (health and finance), you need to build in additional checks for transparency and security. This means providing the user with clarity. Be upfront: state in the introduction that the user is talking to a bot, and explain what personal information is being accessed, analyzed, saved or shared, and with whom. Also, always provide an option for the user to be immediately connected to a human if they have concerns the bot cannot address.
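A minimal sketch of this "upfront and escalatable" pattern might look like the following. The disclosure text, keyword list and return values are illustrative assumptions, not a real bot API.

```python
# Assumed disclosure shown at the start of every session: the user learns
# they are talking to a machine and what data the bot uses.
DISCLOSURE = (
    "Hi! I'm an automated assistant, not a human. "
    "I use your order history to answer questions, and it is not shared "
    "with third parties. Type 'agent' at any time to reach a person."
)

# Illustrative trigger words for escalating to a human agent.
HANDOFF_KEYWORDS = {"agent", "human", "representative"}

def start_session():
    """Every conversation opens with the transparency disclosure."""
    return DISCLOSURE

def route(message):
    """Escalate to a human whenever the user asks for one."""
    if any(word in message.lower() for word in HANDOFF_KEYWORDS):
        return "handoff_to_human"
    return "handled_by_bot"

print(route("I'd like to speak to a human, please"))  # handoff_to_human
print(route("What's my order status?"))               # handled_by_bot
```

The key design choice is that the handoff check runs on every message, so a concerned user is never more than one utterance away from a person.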

3. Who owns the data shared with a bot?

Ownership of information shared with a bot is another key ethical consideration and can create intellectual property issues if not handled correctly.

Does the bot’s service provider or the user own their favorite custom pizza creation? If a bot builds a playlist based on a user’s preferences, who owns it? These are the kinds of ethical questions that need to be considered, and the answer can vary based on the intent of the bot. A personal-assistant bot would lean towards user ownership, while a representative bot leans towards service-provider ownership.

Whatever the type of bot, this is another question of transparency. Businesses building bots should provide clarity about who owns what and should include language asking users to agree with their terms of service first.

4. Preventing bot abuse

When building a bot, it is important to consider how it handles abuse, both given and received. Here, the ethical stance is to follow the first of Isaac Asimov’s Three Laws of Robotics: “a robot may not injure a human being or, through inaction, allow a human being to come to harm.”

A bot should be built with profanity recognition. Upon receiving abuse, the developer has two options: ignore it by building in a non-response for messages that abuse the bot, or return a default neutral response such as “I’m sorry, I don’t understand your request.” Depending on the severity of the abuse — for example, death threats or racism — it is also important to build in a report function that sends the transcript to a relevant party.
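These two options, plus the report function, can be sketched as a single routing step. The word lists below are deliberately tiny placeholders (a real system would use a proper classifier), and the "report" sink is a stand-in for forwarding to a human reviewer.

```python
# Illustrative word lists only -- a production bot would use a trained
# abuse/toxicity classifier, not hard-coded sets.
PROFANITY = {"idiot", "stupid"}
SEVERE = {"kill"}  # e.g. threats: always escalated, never ignored

reported_transcripts = []  # stand-in for sending to a relevant party

def respond(message, transcript):
    """Neutral response to ordinary abuse; report function for severe abuse."""
    words = set(message.lower().split())
    if words & SEVERE:
        reported_transcripts.append(transcript)
        return "This conversation has been flagged for review."
    if words & PROFANITY:
        # Option 2 from the text: a default neutral response.
        return "I'm sorry, I don't understand your request."
    return "How can I help you today?"

print(respond("you stupid bot", transcript=["you stupid bot"]))
```

Whichever option is chosen, the severe path should never be silent: the transcript goes to a human even when the bot's reply stays neutral.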

It is critically important that bots do not abuse humans, even when the abuse is learned behavior resulting from what humans have fed the bot. Requests from users to end communication should trigger a built-in protocol that ends the chat, preventing the bot from harassing or spamming the user. Language filters should be applied to any bot using machine learning algorithms: there have been instances over the last year where bots went rogue after being subverted by online trolls and began tweeting racist propaganda.
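The outbound side of this can be sketched as a filter that screens every model-generated reply before it reaches the user, plus a check that honors end-of-communication requests. Again, the blocklist and stop phrases are hypothetical placeholders.

```python
# Placeholder lists; real systems would use maintained lexicons or a
# toxicity model to screen generated text.
BLOCKLIST = {"hate", "slur"}
STOP_REQUESTS = {"stop", "unsubscribe", "leave me alone"}

def filter_reply(generated_reply):
    """Never let a learned reply through if it contains blocked terms.
    This guards against the bot repeating abuse it was taught by trolls."""
    if any(term in generated_reply.lower() for term in BLOCKLIST):
        return "I'm sorry, I can't respond to that."
    return generated_reply

def should_end_session(user_message):
    """Honor a user's request to end communication -- no follow-ups, no spam."""
    return user_message.lower().strip() in STOP_REQUESTS

print(filter_reply("Thanks for your order!"))  # passes through unchanged
```

The crucial point is that the filter sits between the learning component and the user, so even a successfully "subverted" model cannot speak abuse aloud.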

5. How should bots handle privacy?

The privacy and protection of user data is paramount in today’s interconnected world. The launch of the General Data Protection Regulation protecting citizens of the European Union is a reflection of this.

When building a bot, developers should consider the ethics of user privacy. This will help answer questions like:

  • Does your bot share user information with other bots your company owns? Are you letting the user know about this sharing?
  • Should companies give users the right to be forgotten?
  • Can user-bot conversations be studied for optimization and UX improvement?

In this situation, businesses can take direction from existing online interactions. Transparency is the best course of action, and a publicly available privacy policy is a must-have for any organization. Developers should also build in mechanisms to ensure the privacy of user information in any interaction — an unspoken user-bot confidentiality agreement. This means encryption of all communications and, depending on the sensitivity of the data, transcript deletion after completion of the interaction.
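The deletion half of that agreement can be sketched as a transcript store with two rules: sensitive transcripts are purged the moment the chat ends, and everything else expires after an assumed retention window. The class, the 30-day window and the sensitivity flag are illustrative choices, not a prescribed policy.

```python
import time

RETENTION_SECONDS = 30 * 24 * 3600  # assumed 30-day retention policy

class TranscriptStore:
    """Hypothetical store enforcing deletion after completion."""

    def __init__(self):
        self._store = {}  # session_id -> (saved_at, sensitive, text)

    def save(self, session_id, text, sensitive=False):
        self._store[session_id] = (time.time(), sensitive, text)

    def end_session(self, session_id):
        """Sensitive transcripts (bank details, health) are deleted
        immediately when the interaction completes."""
        _, sensitive, _ = self._store[session_id]
        if sensitive:
            del self._store[session_id]

    def purge_expired(self, now=None):
        """Remove any remaining transcript older than the retention window."""
        now = now if now is not None else time.time()
        expired = [sid for sid, (ts, _, _) in self._store.items()
                   if now - ts > RETENTION_SECONDS]
        for sid in expired:
            del self._store[sid]

    def has(self, session_id):
        return session_id in self._store
```

Encryption in transit and at rest would sit underneath this (TLS plus an encrypted data store); the sketch only covers the retention logic, which is the part most often forgotten.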

Ethics should be a core consideration of any action taken by a business. With bots still in a stage of relative infancy, the discovery of new ethical issues is likely to continue. Businesses should continue to learn from these emerging cases and build their guiding principles and ethical standards. If in doubt, side with the customer, and always provide transparency.


How can you get started with building a chatbot for your business? Learn more about getting started with Watson Conversation Service.

Learn to build a chatbot with our free 30-day Bluemix trial.

