August 9, 2017 | Written by: David Ryan Polgar
Categorized: New Thinking
Inspired by the conversational system from Star Trek’s Starship Enterprise, Amazon announced the release of Alexa, alongside its Echo device, in November 2014. What at first seemed unusual and futuristic has since become accepted and commonplace. Consumers are becoming more comfortable talking to their gadgets. Instead of going to a website or downloading an app, we talk to a program as if it’s a friend or an assistant. In the process, we are transitioning our experience with technology from the push/pull dynamic of websites and apps to that of a fluid conversation.
As many commentators have pointed out, the movement towards conversational interfaces―where computers communicate in our language as opposed to users adapting to a computer’s language―is likely to accelerate. A study commissioned by Google and conducted by Northstar Research found that 55% of teens use voice search on their smartphones daily, and pointed to a growing acceptance of conversational interfaces as etiquette evolves and capabilities grow. For businesses, there is tremendous potential to engage both differently and more deeply with one’s audience, opening up new opportunities for revenue streams and brand loyalty. It is crucial, however, that these new tools are implemented properly.
One of the most contentious issues facing technologists today is how to properly develop these conversational interfaces. There are issues of bias, gender stereotyping, and setting an appropriate tone that is on-brand. In addition, there are the more pragmatic concerns about properly setting user expectations and keeping users within a certain domain of conversation. As businesses seek to incorporate conversational interfaces as part of their larger customer experience, it behooves us to think deeply about designing and deploying these chatbots and virtual assistants.
“Implementing bots requires the right technology, the right data, the right use case, the right design, and the right cultural mindset,” writes Susan Etlinger in the recent Altimeter report, The Conversational Business: How Chatbots Will Reshape Digital Experiences. Etlinger is an analyst at the Silicon Valley-based strategy firm Altimeter, along with being an expert on AI, IoT, and the ethical use of consumer data. For the recent report, Altimeter interviewed a wide range of innovators and companies to gauge both the risks and opportunities of conversational interfaces. I spoke with Etlinger to get her general thoughts on the future of conversational interfaces, and what that means for both society at large and businesses wanting to integrate the technology as part of the customer experience.
“If you’re a retailer, you may want to implement a chatbot to handle customer interactions,” says Etlinger. She is quick to point out, however, that the likely chatbot for your business will be far more limited than Alexa, Cortana, Siri, and other popular virtual assistants. A major point of confusion she foresees is that consumer expectations are often too high when a business sets out to implement a conversational interface. Chances are that Amazon, Microsoft, and Apple spent considerably more time and money than most businesses are prepared to spend. The key, according to Etlinger, is setting consumer expectations appropriately for the interaction.
“A chatbot should have a very clear sense of domain―what you want it to do and what it cannot do,” she says, mentioning the importance of setting both boundaries and suggested conversations in order to have a more seamless interaction. Leaving the conversational interface as a relatively blank slate is inviting off-topic queries that will only lead to frustration. “The consumer,” Etlinger continues, “needs a verbal and visual heuristic about what you can do.” As Sajid Saiyed, the User Experience Design Expert at SAP Labs, writes in Chatbots Magazine, the machine should expose its limitations and not pretend to be a know-it-all. Saiyed recommends that this can be done in a manner that creates empathy for the conversational interface.
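Etlinger's "clear sense of domain" advice can be sketched in code. The following is a minimal, hypothetical illustration (the intents, keywords, and wording are my own placeholders, not from any real product): the bot announces what it can do up front, and when a query falls outside its domain it exposes its limits rather than pretending to be a know-it-all.

```python
# Hypothetical retail chatbot that states its domain up front and falls
# back gracefully on out-of-scope queries, per Etlinger's advice.

SUPPORTED_INTENTS = {
    "order_status": ["order", "shipping", "delivery", "track"],
    "returns": ["return", "refund", "exchange"],
    "store_hours": ["hours", "open", "close"],
}

# The greeting doubles as the "verbal heuristic" about what the bot can do.
GREETING = (
    "Hi! I can help with order status, returns, and store hours. "
    "What would you like to do?"
)

FALLBACK = (
    "Sorry, that's outside what I can do. I can help with: "
    + ", ".join(i.replace("_", " ") for i in SUPPORTED_INTENTS) + "."
)

def classify(message: str):
    """Return the first intent whose keywords appear in the message, else None."""
    text = message.lower()
    for intent, keywords in SUPPORTED_INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return None

def respond(message: str) -> str:
    intent = classify(message)
    if intent is None:
        return FALLBACK  # expose limits instead of guessing
    return f"Sure, let's look at your {intent.replace('_', ' ')}."
```

A real system would use a trained intent classifier rather than keyword matching; the point here is the structure: a bounded set of intents plus an honest fallback.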
“You have to provide a lot of guideposts at first and then people get it after that,” says Etlinger. The chatbot’s job is to steer the conversation back on course and keep it moving in a productive direction. Even with all of the recommended guideposts and heuristics, consumers are still bound to ask questions and make comments that are completely out-of-bounds and sometimes disturbing.
As Leah Fessler wrote in a widely-discussed piece for Quartz, companies now have a moral responsibility to properly program their respective conversational interfaces to respond appropriately to sexual harassment and other negative user behaviors. When I bring this prospect up to Etlinger, she mentions the need to create guardrails and potential ramifications for when user behavior violates the company’s terms of service and values. She recommends a more holistic development of a company chatbot or virtual assistant, where the concerns and expertise of various departments (such as Human Resources) are utilized. “This is such a different product development process. It’s constantly changing with environmental changes,” says Etlinger.
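The "guardrails and ramifications" Etlinger describes might look something like the sketch below. Everything here is an illustrative assumption: a real system would use a moderation classifier rather than a word list, and the escalation policy would be shaped with input from departments like HR and legal, as she recommends.

```python
# Hypothetical guardrail layer: before normal intent handling, screen the
# message for abusive content and escalate on repeat offenses.
# The blocklist and escalation wording are illustrative placeholders.

ABUSIVE_TERMS = {"idiot", "stupid"}  # placeholder; use a real moderation model

WARNINGS = [
    "That language isn't appropriate. Let's keep this respectful.",
    "I won't be able to continue if that language persists.",
    "This conversation has been ended per our terms of service.",
]

def guardrail(message: str, strikes: int):
    """Return (warning_or_None, updated_strike_count).

    Each abusive message increments the strike count and triggers the
    next, firmer warning; the final warning ends the conversation.
    """
    if any(term in message.lower() for term in ABUSIVE_TERMS):
        strikes += 1
        warning = WARNINGS[min(strikes - 1, len(WARNINGS) - 1)]
        return warning, strikes
    return None, strikes
```

The escalation ladder is the key design choice: the bot responds to abuse rather than ignoring it, and the consequences are explicit and graduated.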
Another contentious issue when deploying a chatbot or virtual assistant is that of transparency. The underlying algorithmic conversation may be humanized, but users still want to know whether they are conversing with a human or a machine. “Robots posing as people have become a menace,” writes Tim Wu in a recent Op-Ed for the New York Times. Wu, an influential professor at Columbia Law School who coined the phrase “Net Neutrality,” recommends that all automated systems announce to the user that they are not a human. Etlinger also believes in bot transparency, noting that there is no social convention for asking someone online if they are a human or a bot. The burden, then, should be on the conversational interface to be honest about its underlying artificiality.
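Wu's recommendation, and Etlinger's point that the burden of honesty falls on the interface, suggest a simple pattern: disclose at the start of every session, and disclose again whenever the user asks. A minimal sketch, with hypothetical wording and trigger phrases of my own:

```python
# Hypothetical transparency check: the bot announces its artificiality at
# session start and answers honestly whenever the user asks if it is human.

BOT_DISCLOSURE = "Just so you know: I'm an automated assistant, not a person."

HUMAN_QUESTIONS = ("are you human", "are you a real person", "are you a bot")

def maybe_disclose(message: str, session_started: bool):
    """Return the disclosure text when it should be shown, else None."""
    if not session_started:
        return BOT_DISCLOSURE  # announce up front, per Wu's recommendation
    if any(q in message.lower() for q in HUMAN_QUESTIONS):
        return BOT_DISCLOSURE  # answer honestly when asked
    return None
```

Because, as Etlinger notes, there is no social convention for asking, the up-front announcement does most of the work; the question check is a backstop.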
“The real fundamental shift is that as chatbots adopt humanness, what kind of humanity are we showing?” asks Etlinger.
That is a question that Mark Stephen Meadows, founder of Botanic Technologies, is constantly thinking about as his company develops conversational interfaces. Botanic’s tagline is, We Build Humane Machines. “How we design bots today will have implications like how we designed cities decades ago,” states Meadows. He mentions the city of Los Angeles, where drivers are regularly ensnarled in congested traffic because of poor design from almost a century ago.
Meadows believes the future of conversational interfaces, which include chatbots, assistants, and avatars, is one in which the underlying personalities are multi-modal and trusted. “They need to behave more like people for people to know how to behave towards them,” he says, pointing out that digital technology currently filters and separates words, sounds, and images. This filtering and separation into discrete modes may cut against our evolutionary bias towards interaction. At Botanic, Meadows and his team deploy their technology on systems that allow a person to see and talk with an avatar. “It’s just how we humans evolved,” he says.
So much of our current interaction with chatbots, though, is text-only. When I ask Meadows about this, he points out that our current interactions may be based more on technological deficiencies than on human desire. “People definitely prefer to talk by voice when possible,” states Meadows, “but there’s also a regression against this, an inverted technology trend, because voice recognition systems introduce fail points.” The future of conversational interfaces, then, may incorporate more speech as those fail points are eliminated.
“We’re concerned about a future in which everyone has to talk with a robotic robot,” says Meadows, who asserts that the human should always be the highest value in the equation.
That may be the easiest way to think about the future of conversational interfaces―it is one in which the human is placed as the highest value in the equation. That includes not only eliminating annoying fail points that make a conversation less fluid, but also setting the expectations and boundaries of the conversation, responding appropriately to negative user behavior, and being transparent that the user is having a conversation with an algorithm.