
The code of ethics for AI and chatbots that every brand should follow


Key Points:
– Businesses often overlook important issues related to morals and ethics of chatbots and AI
– Customers need to know when they are communicating with a machine and not an actual human
– Ownership of information shared with a bot is another key ethical consideration and can create intellectual property issues
– The privacy and protection of user data is paramount in today’s interconnected world

(Read the full article “Ethics And Artificial Intelligence With IBM Watson’s Rob High” on Forbes.com. You can also listen to The Modern Customer Podcast with Rob High here.)

Businesses are rapidly waking up to the need for chatbots and other self-service technology. From automating basic communications and customer service to reducing call center costs and providing a platform for conversational commerce, chatbots offer many new opportunities to delight and better serve consumers.

Chatbots can offer 24/7 customer service, rapidly engaging users and answering their queries whenever they arrive. Millennials in particular are impatient when engaging with brands and expect real-time responses. More than 22% of millennials expect a response within 10 minutes of reaching out to a brand via social media, according to a recent Desk.com study, and 52% of them will abandon online purchases if they can’t find a quick answer.

The need for speed in customer service has never been greater, and leading brands like Staples are increasingly turning to chatbots to meet it.

While chatbots are a natural solution for this emerging need, businesses often overlook important issues related to the ethics of chatbots and AI.

The topic of chatbot ethics is complex and spans a wide area including privacy, data ownership, abuse and transparency.

Rob High, CTO of IBM Watson, was recently featured in an article on Forbes.com titled “Ethics And Artificial Intelligence With IBM Watson’s Rob High.” In the article, Rob explains that in order to keep AI ethical, it needs to be transparent. He advises that when customers interact with a brand’s chatbot, for example, they need to know they are communicating with a machine and not an actual human.

AI, like most other technology tools, is most effective when it is used to extend the natural capabilities of humans instead of replacing them. That means that AI and humans are best when they work together and can trust each other.
— Rob High, CTO IBM Watson

Ethics form the foundation of how a bot is built and, more importantly, they dictate how a bot interacts with users. How a bot behaves influences how an organization is perceived: unethical behavior can lead to consumer mistrust and litigation, while ethical bots can promote brand loyalty and help boost profit margins.

1. Who should a chatbot serve?

When building a chatbot, an organization must decide who it primarily serves: the needs of the business or the needs of the customer. Amir Shevat, Director of Developer Relations at Slack, discusses this topic in his blog post “Hard questions about bot ethics.”

Here, you must determine the exact purpose and business value of the chatbot. One built mainly to provide recommendations to customers can only be ethical if it meets the needs of the customer, whereas a bot built for internal business improvement should be made to suit the company’s needs.

In general, whether or not a bot is customer-facing, an ethical organization should always put the needs of the customer before the needs of the business. This means providing the product best suited to those customers, rather than the one with the best profit margin or the speediest implementation. An option for users to provide feedback on the service will help detect issues, improve customer satisfaction and maintain ethical behavior. Bots that use machine learning algorithms to display product offerings or recommendations should also have regular health checks built in for this exact purpose.
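As a simple illustration of that last point, here is a minimal, framework-agnostic sketch of what such a health check could look like: it flags a recommendation bot for review when relevance to the customer drops, which can happen when margin quietly starts driving the ranking. The data model, field names and the 0.6 threshold are illustrative assumptions, not part of any particular product.

```python
# A minimal sketch of a recommendation "health check" (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Recommendation:
    product_id: str
    relevance_score: float   # how well the product matches the user's request (0-1)
    margin_score: float      # how profitable the product is for the business (0-1)

def health_check(recent_recommendations: list[Recommendation],
                 relevance_floor: float = 0.6) -> bool:
    """Return True if the bot still appears to put customer needs first."""
    if not recent_recommendations:
        return True
    avg_relevance = sum(r.relevance_score for r in recent_recommendations) / len(recent_recommendations)
    # Flag the bot for review if average relevance drops below the floor.
    return avg_relevance >= relevance_floor

# Example: two margin-heavy, low-relevance picks should trip the check.
recs = [Recommendation("sku-1", 0.4, 0.9), Recommendation("sku-2", 0.5, 0.95)]
if not health_check(recs):
    print("Recommendation quality below threshold - escalate for human review")
```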

2. Am I talking to a chatbot or a human?

Building trust between humans and machines is just like building trust between humans. Brands can build trust by being transparent, aligning expectations to reality, learning from mistakes and continually correcting them, and listening to customer feedback.

When building a chatbot, transparency is a critical consideration. This boils down to one question: is it clear whether the user is talking to a bot or a human? Customers are savvy enough to tell the difference and expect brands to be honest with them. Customers don’t expect chatbots to be perfect, but they want to know what a bot can and cannot do, and that it is reliable within reason. Transparency about both failure and success can build trust faster than virtually any other approach.

To work on transparency and reliability, start by asking yourself some basic questions like:

  • Who is the chatbot interacting with?
  • Where is the chatbot being used?
  • What type of information is being discussed? Is any of it sensitive?
  • What are the implications of the interaction?

Where sensitive information (like bank details) is being communicated or the interaction can be life-altering (health and finance), you need to build in additional checks for transparency and security. This means providing the user with clarity: be upfront, and build into the introduction that the user is talking to a bot, as well as what personal information is being accessed, analyzed, saved or shared, and with whom. Always provide an option for the user to be immediately connected to a human if they have concerns that a bot cannot address.
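To make the idea concrete, here is a minimal sketch of how a bot might introduce itself, disclose what data it uses and hand off to a human on request. The wording, keyword list and the escalate_to_human() hook are illustrative assumptions rather than any specific platform’s API.

```python
# A minimal, framework-agnostic sketch of transparency and human hand-off.
INTRO = (
    "Hi, I'm a virtual assistant (not a human). "
    "I use your account email and order history to answer questions; "
    "conversations may be reviewed to improve the service. "
    "Type 'agent' at any time to reach a person."
)

HUMAN_KEYWORDS = {"agent", "human", "representative", "person"}

def wants_human(message: str) -> bool:
    return any(word in message.lower() for word in HUMAN_KEYWORDS)

def escalate_to_human(message: str) -> str:
    # Hypothetical hand-off hook; a real system would open a live-agent session.
    return "Connecting you with a human agent now."

def answer_with_bot(message: str) -> str:
    # Hypothetical stand-in for the bot's normal NLU/dialog logic.
    return "I can help with orders, billing and returns. What do you need?"

def handle_message(message: str) -> str:
    if wants_human(message):
        return escalate_to_human(message)
    return answer_with_bot(message)

print(INTRO)
print(handle_message("I'd rather speak to a human, please"))
```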

3. Who owns the data shared with a chatbot?

Ownership of information shared with a bot is another key ethical consideration and can create intellectual property issues if not handled correctly.

Does the bot service-provider or the user own their favorite custom pizza creation? If a bot builds a playlist based on the user’s preferences, who owns it? These are the kinds of ethical questions that need to be considered, and the answer can vary based on the intent of the bot. A personal-assistant bot would lean towards user ownership, while a representative bot leans towards service-provider ownership.

Whatever the type of bot, this is another question of transparency. Businesses building bots should provide clarity about who owns what and should include language asking users to agree to their terms of service first.

4. Preventing chatbot abuse

When building a chatbot, it is important to consider how a bot handles abuse, both given and received. Here, the ethical stance is to follow the first of Isaac Asimov’s Three Laws of Robotics: “a robot may not injure a human being or, through inaction, allow a human being to come to harm.”

A chatbot should be built with profanity recognition. Upon receiving abuse, the developer has two options. The first is to ignore the abuse by building in a non-response when the user abuses the bot. The second is to return a default neutral response such as “I’m sorry, I don’t understand your request.” For severe abuse, such as death threats or racist language, it is important to build in a report function that sends the transcript to a relevant party.
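Here is a minimal sketch of that flow: recognize profanity, fall back to a neutral response, and report severe abuse along with the transcript. The word lists, severity rules and the report_abuse() hook are placeholder assumptions; a production system would use a proper content-moderation service.

```python
# A minimal sketch of abuse handling (placeholder word lists and hooks).
PROFANITY = {"damn", "hell"}                 # placeholder mild terms
SEVERE = {"kill you", "death threat"}        # placeholder severe phrases

NEUTRAL_RESPONSE = "I'm sorry, I don't understand your request."

def classify_abuse(message: str) -> str:
    text = message.lower()
    words = set(text.split())
    if any(phrase in text for phrase in SEVERE):
        return "severe"
    if words & PROFANITY:
        return "mild"
    return "none"

def report_abuse(transcript: list[str]) -> None:
    # In a real system this would notify a moderation queue or support team.
    print("Reported transcript to moderation:", transcript)

def respond(message: str, transcript: list[str]) -> str:
    transcript.append(message)
    severity = classify_abuse(message)
    if severity == "severe":
        report_abuse(transcript)
        return NEUTRAL_RESPONSE
    if severity == "mild":
        return NEUTRAL_RESPONSE
    return "How can I help you today?"

print(respond("This damn bot is useless", []))
```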

It is also critically important that chatbots do not abuse humans, even if that abuse is learned behavior resulting from what humans have been feeding the bot. Requests from users to end communication should trigger a built-in protocol to end the chat, preventing the bot from harassing or spamming a user. Language filters should be applied to any bot that uses machine learning algorithms. There have been a few instances over the last year where bots went rogue after being subverted by online trolls and began tweeting racist propaganda.

5. How should chatbots handle privacy?

The privacy and protection of user data is paramount in today’s interconnected world. The launch of the General Data Protection Regulation (GDPR) protecting citizens of the European Union is a reflection of this.

When building a chatbot, developers should consider the ethics of user privacy. This will help answer questions like:

  • Does your chatbot share user information with other chatbots your company owns? Are you letting the user know about this sharing?
  • Should companies give users the right to be forgotten?
  • Can user-bot conversations be studied for optimization and UX improvement?

In this situation, businesses can take direction from existing online interactions. Transparency is the best course of action, and a publicly available privacy policy is a must-have for any organization. Developers should also build in mechanisms to ensure the privacy of user information in any interaction, a kind of unspoken user-bot confidentiality agreement. This means encryption of all communications and, depending on the sensitivity of the data, deletion of transcripts after the interaction is complete.
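As one way to put that into practice, here is a minimal sketch of a retention rule that deletes transcripts of sensitive conversations as soon as the session ends. The sensitivity labels, retention periods and in-memory store are illustrative assumptions; encryption in transit would be handled separately, for example by TLS at the channel level.

```python
# A minimal sketch of transcript retention and deletion (illustrative assumptions).
from datetime import datetime, timedelta

RETENTION = {
    "sensitive": timedelta(0),        # delete immediately after the session ends
    "standard": timedelta(days=30),   # keep briefly for UX/optimization review
}

transcripts = {}  # session_id -> (sensitivity, ended_at, messages)

def end_session(session_id: str, sensitivity: str, messages: list[str]) -> None:
    transcripts[session_id] = (sensitivity, datetime.utcnow(), messages)

def purge_expired(now: datetime | None = None) -> None:
    now = now or datetime.utcnow()
    for session_id, (sensitivity, ended_at, _) in list(transcripts.items()):
        if now - ended_at >= RETENTION[sensitivity]:
            del transcripts[session_id]

end_session("abc123", "sensitive", ["What is my account balance?"])
purge_expired()
print(transcripts)   # {} - the sensitive transcript was removed right away
```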

Ethics should be a core consideration of any action taken by a business. With chatbots still in a stage of relative infancy, the discovery of new ethical issues is likely to continue. Businesses should continue to learn from these emerging cases and build their guiding principles and ethical standards. If in doubt, side with the customer, and always provide transparency.


How can you get started with integrating AI and chatbots into your customer support channels? Learn more about how Watson can help.

AI is shaping the future of call centers. Discover how Watson can help you deliver exceptional customer experiences.

