Has human communication become botified?

How often do you receive a message and wonder, “Did a person really write this?”

When you receive a message online, it can be difficult to discern whether it came from a human or a chatbot. Given the rapid rise of automated communication, it would be foolish to assume that words attached to a name and face are in fact human. By the same token, our emails and messages on social platforms are filled with generic platitudes, formulaic personalizations, and responses reminiscent of an “if this, then that” (IFTTT) formula. Is it a bot or not?

We typically chalk this up as a win for technologists, who have been busy adding linguistic nuances and flourishes to how chatbots interact with humans. Passing the Turing Test, which Alan Turing originally framed as an “imitation game,” is often viewed as the holy grail of artificial intelligence. If communication from a machine fools me into believing it is human, then, by a loose interpretation of the Turing Test, the machine has demonstrated intelligence. It has imitated human communication, creating the illusion of thought.

But what if, instead, we humans are imitating machines? The line between human and chatbot communication is blurry not just because AI is advancing, but because we have diluted our own thoughtful communication. It may not be that machines are displaying an ability to think, but that humans, as communicators online, are displaying a lack of thought.

This argument flips the Turing Test on its head and asserts that human communication is not a fixed end goal but is constantly fluctuating. While the Turing Test frames machines as evolving towards a facsimile of humanness, we might also consider that human communication is devolving towards the algorithmic simplicity of chatbots. This botified form of communication lacks the volition, thoughtfulness, and originality we have long viewed as hallmarks of human intelligence. In the process, our botified messages have sown confusion and annoyance among those who want clear distinctions between authentic and automated messages. Awareness of this, however, may help us improve the quality of our own communication.

Let’s look at a common example where the line between human and chatbot is difficult to distinguish. The following is a message I received on LinkedIn recently, prompted by a work anniversary that automatically sent push notifications to my connections:

[Image: a one-line work-anniversary congratulations from a connection]

How do I know that this person wrote the message above, or that it came from a real person at all? It lacks any uniquely human trait that couldn’t be approximated through automation. Instead of displaying the hallmarks of what makes us human, such as our ability to emote, empathize, and understand a joke, it follows the simplistic pattern of IFTTT: if it is a LinkedIn contact’s work anniversary, then send a boilerplate message. The message may be attached to a face online, but it has been stripped of any semblance of humanity. The loop is so mechanical that a few lines of code could reproduce it, as the sketch below shows.
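To make that concrete, here is a minimal sketch of the loop in Python. Everything in it is hypothetical and invented for illustration: the event shape, the send_message helper, and the contact name. LinkedIn exposes no such API; the point is only how little logic the message actually requires.

```python
# A hypothetical "if this, then that" rule: when a contact has a work
# anniversary, fire off a boilerplate congratulations. No judgment,
# no memory of the relationship, no thought. Just a trigger and a template.

TEMPLATE = "Congrats on your work anniversary!"  # LinkedIn's pre-filled text

def send_message(contact: str, text: str) -> None:
    """Stand-in for a real delivery mechanism (illustration only)."""
    print(f"To {contact}: {text}")

def on_notification(event: dict) -> None:
    # IF it is a contact's work anniversary...
    if event.get("type") == "work_anniversary":
        # ...THEN send the boilerplate message.
        send_message(event["contact"], TEMPLATE)

# Example trigger, shaped like a push-notification payload:
on_notification({"type": "work_anniversary", "contact": "Alex"})
```

A human clicking LinkedIn’s pre-filled reply executes the same loop by hand, which is why the result is indistinguishable from automation.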

It is botified communication. Even the exclamation point, intended to evoke enthusiasm, is derived directly from LinkedIn’s pre-filled “Congrats on your work anniversary!” that a user can send with a single click. This push for efficiency and more frequent contact nudges us towards messages bereft of thought and feeling. It may look like communication, but it lacks depth and value. The goal of online communication, of course, shouldn’t be to find efficient ways to spam our friends and acquaintances. So why is this happening?

According to web psychologist Liraz Margalit, our changing online environment, with its regular interactions with chatbots, may be rubbing off on human communication. Dr. Margalit is the Head of Behavioral Research at Clicktale, where she examines consumer behavior from a cognitive-behavioral psychological perspective.

“Interacting with chatbots creates in our brains a new model which results in a new state of mind,” says Margalit. “People think they are communicating with a real person when in actuality it is a piece of software. When these same users then interact with fellow human beings, things go awry. They bring into the real-world human-to-human interaction a mental model partially based on how they felt and behaved while interacting with a bot.”

Contributing to this new mental model, the structure and design of social media platforms also prompt a certain type of communication. This is often referred to as choice architecture: the way choices are presented to consumers shapes how they act. Much of our online communication now involves prompts towards cut-and-paste conversations, which may be diluting the originality and thoughtfulness of our interactions.

Outside of this possible environmental influence, there is also the issue of time. Communication takes up our valuable time, so it is in our interest to be more efficient with online conversations. This problem is compounded by the sheer number of people with whom we are connected online, which greatly outstrips the famed Dunbar Number. The Dunbar Number, derived from the research of British anthropologist Robin Dunbar, refers to the average number of meaningful relationships a human can successfully maintain: roughly 150.

The idea behind the Dunbar Number is that relationships take an investment of time, emotional energy, and thoughtfulness. Given our human limitations, it would be impossible to truly maintain hundreds of relationships. Humans can be efficient, but we don’t scale. But that is exactly what we’re trying to do online. We may lack the time to give the actual intimacy required for relationships, so we offer an imitation of intimacy through botified communication. To maintain these endless “weak ties,” we put forth a weaker form of conversation.

“As information communication technology strengthens weak ties and many of the interfaces of for-profit ICT platforms nudge succinct communication,” states Evan Selinger, “we’re collectively experiencing the pull—at least in some contexts—to adopt commodified communication styles.” Selinger is a Professor of Philosophy at the Rochester Institute of Technology and co-author of the forthcoming book, Being Human in the 21st Century (Cambridge University Press). He is also the Head of Research Communications, Community & Ethics at the Center for Media, Arts, Games, Interaction, and Creativity (MAGIC).

As a Professor of Philosophy, Selinger is well attuned to the existential crisis that a blurred line between humans and chatbots can create. “When our communicative behavior is engineered to become more automatic than deliberative, it can feel like our very humanity is coming apart at the seams.”

Selinger and his co-author, Brett Frischmann, are researching how our changing forms of communication fit within the human-versus-machine framework laid out by Alan Turing. Frischmann’s working research paper, Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings, lays out the idea of a reverse Turing Test. If a machine that can successfully imitate thinking passes the Turing Test, what about a human who appears to be non-thinking?

“Since we’re embracing these styles and having them imposed upon us—and so are our friends, family, and colleagues—norm-shifting is occurring,” continues Selinger. “Our collective robotic performances add to the confusion that occurs when trying to discern whether an interlocutor is human or software.”

Many of the messages we receive today are composed and sent without thought. Given that thinking is central to our conception of what separates humans from machines, it stands to reason that communication without thought fails to qualify as human. Instead of an either/or dichotomy distinguishing humans from chatbots, though, we may need a sliding scale of communication that correlates with levels of thoughtfulness.

In other words, the online world is populated with actual humans and actual bots. But it is also filled with humanized bots and botified humans. It is easy for our own communication to become botified if we mindlessly craft messages in an if-this-then-that fashion. By being more aware and deliberate about how we communicate, we can ensure that the person on the other end of the message knows they are hearing from a living, breathing human being.
