
How Human Do We Want Our Virtual Assistants?

Should you refer to Amazon’s Alexa as “it” or “her”?

As our daily lives become more integrated with virtual assistants and conversational bots, we must ask: how “human” should we design the underlying technology with which we are chatting? Conversation as a platform is positioned to disrupt how we relate to our technology, so it is crucial for designers to determine whether users are looking for an employee, a friend, or an all-knowing extension of self. The relationship between human and machine is shifting dramatically, and businesses are trying to read the tea leaves.

The research firm Gartner has predicted that 20% of our smartphone interactions will be with a “Virtual Personal Assistant” (VPA) by 2020, indicating a shift away from apps and towards assistants and social bots. Microsoft CEO Satya Nadella has stated that these types of bots are the next major platform. Given that conversation may replace apps as the way consumers find information and relate to a brand, deciding how to design a virtual assistant is a major business decision that most companies will face. The potential pitfalls include falling into an “uncanny valley of the mind” with conversation that oversteps emotional boundaries, perpetuating gender stereotypes with the chosen name, and creating uncomfortable feelings of a master/servant relationship.

There is also the glaring issue of transparency: consumers may want virtual agents to be clearly labeled (or to identify themselves) as non-human. While it is a win for technologists to imitate humanness beyond distinction, it may be disconcerting for consumers, who could feel confused or misled. From a business perspective, it is essential to have a thoughtful discussion and a thorough understanding before designing and releasing a conversational agent.

The current offerings of virtual assistants run the gamut from Siri, who has endeared “herself” to millions of users through personification, to Google Assistant, which was intentionally released without the trappings of a human name and gender. Siri can come across as playful and snarky, while Google Assistant remains omnipresent and omniscient.

The Three Categories of Conversational Tech
In the movie Her, a lonely man in the not-so-distant future falls in love with his operating system. The virtual assistant, named Samantha (and voiced by Scarlett Johansson), is able to slide “her” way into the heart of Theodore. The humanization through voice, name, and a seeming ability to understand Theodore was sufficient to form an emotional bond. This is the first category: virtual assistant as friend. As Her makes clear, it is entirely possible to feel a sense of intimacy with a virtual creation. Research has backed up this idea, often pointing to how we anthropomorphize these experiences, filtering the interactions through a human lens.

“The reason is that the human brain is not specialized for 21st century media,” wrote Stanford University researcher Byron Reeves in a study regarding realistic interactive media. “People are not able to discount social presentations as unreal just because they appear on a screen.”

What is unclear, however, is how deep these social-bot-as-friend interactions should go. There may be a distinction between making a conversational bot social and making one that appears emotional. Last year, researchers at Chemnitz University of Technology in Germany tested the reactions of participants viewing a virtual conversation between a man and a woman in a plaza. The participants were comfortable when the conversation was either human-derived or script-derived, but experienced unease when a virtual actor appeared to convey the human traits of sympathy or frustration.

From a business standpoint, this means humanizing a virtual assistant like a friendly neighbor (banter and transactional talk) rather than an inquisitive and empathic best friend. Her is a great movie, but probably not the best course of action for development, at least not for a long time, given our squeamishness about AI.

The second category is the virtual assistant as an all-knowing extension of self. You may have noticed that Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri all carry friendly, accessible human names, while Google’s Assistant has remained without a human moniker. This, of course, wasn’t an act of oversight or indecisiveness, but a deliberate decision to align its functionality with a broader concept of helpfulness as opposed to friendliness. Microsoft prompts you to ask Cortana to tell a joke, while Google Assistant is billed as “your own personal Google, always ready to help.”

“We always wanted to make it feel like you were the agent, and it was more like a superpower that you had and a tool that you used,” said Jonathan Jarvis, a former creative director on Google’s Labs team, speaking to Business Insider last year. “If you create this personified assistant, that feels like a different relationship.”

While selecting a name for a virtual assistant seems, on the surface, like a playful way to humanize the underlying AI, it also alters the expectations of the relationship and opens the door to perpetuating gender stereotypes. As a society, we have been conditioned to expect female voices for helpful tasks, such as telephone operators and GPS directions, while reserving male voices for narration requiring authority. The difficulty from a business perspective is reconciling internal market research, which will likely reaffirm our gender biases, with the larger goal of gender neutrality. Choosing a name, then, may carry a considerable amount of baggage and invite accusations of reinforcing gender roles.

“For those building bots for the general public, and who care about the future of human-machine interaction,” states futurist Amy Webb, “I’d recommend selecting names that are not gender-specific and are easy to pronounce.” Webb is the founder and CEO of the Future Today Institute and author of The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream. “I’ve always found it fascinating how we humans feel the need to anthropomorphize machines. Why assign a traditional name at all? Why not a number? Or something else?”

By skipping a name and gender entirely, Google Assistant avoids the dilemma of choosing between market research and the values of society at large. “One of the things that’s vexing me currently is the proliferation of subservient female bots,” states Rob McCargow, Program Leader for AI at PwC. “I’m concerned that we’re teaching our kids to be rude and demanding to women.” As McCargow points out, speaking to Alexa doesn’t require a “please” or “thank you.” The very act of personifying the virtual agent complicates the dynamic of how we interact with it.

The third category of conversational tech is an employer/employee dynamic between the user and the virtual assistant. This is the direction that Dennis Mortensen, founder and CEO of x.ai, has taken with the company’s virtual scheduling assistants. The assistants, Amy and Andrew Ingram, are laser-focused on tasks. They act akin to employees, as opposed to imitating the warmth of friendship like Siri or the omnipresence of Google Assistant. Speaking to Slate, Mortensen calls these “extremely specialized, verticalized AIs that understand perhaps only one job, but do that job very well.”

That doesn’t mean, however, that Amy and Andrew Ingram are designed without any emotional texture.

“When scripting Amy and Andrew’s end of the dialog,” states Mortensen, “we went so far as to build in concepts like empathy. For example, if you have to reschedule a meeting once, that’s no big deal. But if you are on the third reschedule, Amy needs to signal that she realizes that this is not an ideal situation just as a human assistant would, and then communicate with that in mind.”
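
To make the idea concrete, here is a minimal sketch of how a scheduling agent might escalate its tone as reschedules pile up, as Mortensen describes. This is purely illustrative and not x.ai’s actual implementation; the template text, the `EMPATHY_BY_RESCHEDULE_COUNT` table, and the `reschedule_reply` helper are our own inventions.

```python
# Illustrative sketch only: pick a reply whose tone acknowledges how many
# times a meeting has already slipped. Not x.ai's actual code.

EMPATHY_BY_RESCHEDULE_COUNT = {
    1: "No problem at all. What time works better for you?",
    2: "Of course. Let's find a time that sticks. When are you free?",
    3: ("I'm sorry this meeting has been so hard to pin down. "
        "Let me suggest a few times and hold them for you."),
}

def reschedule_reply(reschedule_count: int) -> str:
    """Return a reply template matched to the number of reschedules."""
    # Clamp to the most empathetic template once the count exceeds our scripts.
    key = min(max(reschedule_count, 1), max(EMPATHY_BY_RESCHEDULE_COUNT))
    return EMPATHY_BY_RESCHEDULE_COUNT[key]

print(reschedule_reply(1))  # casual acknowledgment
print(reschedule_reply(3))  # explicitly empathetic, as Mortensen describes
```

The design point is that empathy here is scripted state, not felt emotion: a simple counter decides when the agent should acknowledge the inconvenience.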

When designing Amy and Andrew, Mortensen stated that he wanted the assistants to “feel as human-like as possible.” The human-seeming interface [textual voice and personality] allows the host and guest to interact more naturally, according to Mortensen, and better achieve the end goal of getting a meeting on the calendar. “We don’t believe people should have to use machine syntax to operate an intelligent agent.”

Despite having the initials of AI, Amy and Andrew Ingram’s artificiality doesn’t stop the agents from getting invited to meetings and the occasional date. The name “Amy Ingram,” which arrived before Andrew, is a nod to AI and a play on n-gram, a natural language processing concept. In contrast to Webb’s suggestion of gender ambiguity, Mortensen believes that people will inadvertently assign a gender if one isn’t readily apparent. “We specifically picked two clearly identifiable gender specific names for our agents and let customers choose and let them use the two interchangeably.”
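
For readers unfamiliar with the term, an n-gram is simply a contiguous run of n words (or characters) in a text. A minimal sketch (the `ngrams` helper and sample sentence are ours, for illustration only):

```python
def ngrams(tokens, n):
    """Return every contiguous run of n tokens: the n-grams of the sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams("amy please schedule a meeting".split(), 2))
# [('amy', 'please'), ('please', 'schedule'), ('schedule', 'a'), ('a', 'meeting')]
```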

Although Amy and Andrew have clearly identifiable genders, the underlying output, generated by the same natural language generation (NLG) system, is identical. “We’ve managed to develop a voice that defies gender stereotypes, since both Amy and Andrew are often mistaken for humans. The goal is to offer people a choice of genders for agent name but to make sure that all of our phrasings are gender-neutral.”
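
A minimal sketch of that idea: one gender-neutral set of phrasings shared by both agents, with only the sign-off name varying. Again, this is our own illustration under stated assumptions, not x.ai’s NLG system; the `PHRASINGS` table and `render` function are hypothetical.

```python
# Illustrative only: identical, gender-neutral phrasings for both agents.
PHRASINGS = {
    "confirm": "Great news: I've put the meeting on your calendar for {time}.",
    "propose": "Would {time} work for everyone?",
}

def render(intent: str, agent_name: str, time: str) -> str:
    """Render the same prose regardless of agent; only the signature differs."""
    body = PHRASINGS[intent].format(time=time)
    return f"{body}\n\nBest,\n{agent_name}"

# Amy and Andrew produce the same phrasing; only the sign-off name changes.
print(render("confirm", "Amy Ingram", "Tuesday at 2pm"))
print(render("propose", "Andrew Ingram", "Wednesday at 10am"))
```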

Even though the seamless manner in which Amy or Andrew Ingram can set up a meeting may make them appear human, x.ai has always been upfront about the distinction. While Alan Turing’s massively influential imitation game set the goal of machine-masquerading-as-human, consumer desire may lean towards clear labeling. “The measure of a great AI has always been fooling humans, whether that’s a spoken assistant or a virtual game of Go,” states Amy Webb. “But we must proceed with everyday people in mind; we must offer transparency at all times, and explicitly tell users whether they’re interacting with a human or an AI agent.”

The end goal from a business perspective is to be mindful of the complexities and sensitivities that surround the creation of virtual assistants. We must tread carefully in order to find the Goldilocks Zone of Social Bots: human, but not too human. At the same time, we must be cognizant of the pitfalls that come with our evolution from human-machine interaction to interaction between human and almost-human.
