Employees are increasingly finding themselves working alongside not just human coworkers, but conversational AI assistants and AI agents. The psychological implications of this shift might be even more complex than the technological and financial implications: when millions of employees are interacting with AI coworkers every day, history suggests that some will become emotionally attached.
The associated risks extend well beyond lost productivity from time spent chit-chatting with chatbots. The true organizational dangers of emotional involvement with AI include HR risks (such as employees oversharing sensitive personal information), cybersecurity risks (such as increased susceptibility to social engineering), PR debacles, and even physical harm.
Companies and institutions are investing heavily in generative AI-driven solutions in pursuit of their tremendous upside. Businesses seeking to ensure a return on their investment are, naturally, incentivized to make sure their employees are actually using these AI tools. But in an era of media CEOs flirting with AI-generated underlings and otherwise reasonable adults outright falling in love with AI models, indiscriminately maximizing employees’ engagement with AI may carry its own cost.
The emotional risks of AI are increasing not simply because of how powerful and convincing large language models (LLMs) are becoming, but because of the scale at which people will interact with them all day at work. Yes, it’s intuitively true that as AI systems grow more sophisticated, people grow increasingly prone to anthropomorphizing them—but humans have been developing attachments to chatbots since the earliest days of natural language processing (NLP).
In 1966, Joseph Weizenbaum created a chatbot program called ELIZA that applied simple rules to transform the language of a person’s input into a response from the chatbot. One of the program's scripts, nicknamed “DOCTOR,” aimed to imitate a Rogerian psychotherapist by either responding generically or rephrasing user input in the form of a question:
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
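The rule-based transformation behind an exchange like this can be sketched in a few lines of Python. This is a toy illustration of the general pattern-match-and-reflect approach, not Weizenbaum's actual DOCTOR script; the patterns and templates are invented for the example.

```python
import re

# Each rule pairs a pattern against the user's input with a template that
# reflects the captured phrase back, often as a question.
RULES = [
    (re.compile(r"my (.+) made me (.+)", re.IGNORECASE), "Your {0} made you {1}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i'm (.+)", re.IGNORECASE), "Do you believe you are {0}?"),
]
GENERIC = "Please tell me more."

def respond(text: str) -> str:
    # Strip trailing punctuation so captured groups read cleanly.
    text = text.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    # When no rule matches, fall back to a generic prompt, much as DOCTOR did.
    return GENERIC
```

Feeding in the first line of the dialogue above, `respond("Well, my boyfriend made me come here.")` returns "Your boyfriend made you come here?" No understanding is involved; the program merely rearranges the user's own words.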
As Weizenbaum explained a decade later, he was “startled to see how quickly and very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it.” Not even his secretary, who had watched him work on the program for months and knew it was designed to essentially regurgitate the user’s own words, was immune to the urge to get personal with the chatbot. “After only a few interchanges with it,” Weizenbaum recounted, “she asked me to leave the room."1
Mankind’s perennial predisposition to getting emotionally invested in AI has since come to be known as the ELIZA effect. Its cause lies not in the architecture of advanced LLMs, but in our own emotional programming.
Across many millennia, evolution wired our brains to operate on an assumption that, until very recently, was essentially foolproof: if something seems human and communicates like a human, it’s human. Proceed accordingly.
With that reasonable assumption, we evolved an intricate biological system of social interactions and expectations that governs everything from individual encounters to tribal societies to the modern workplace. But conversational language models undermine that assumption, and therefore disrupt our social biology.
In 1996, O’Connor and Rosenblood proposed the “social affiliation model” to describe the instinctive regulatory process through which social interactions, automatically and subconsciously, instigate a search for certain verbal and non-verbal signals. These signals provide information about the quality of those interactions and their implications, such as whether the person we’re interacting with accepts and values us. Their absence, in turn, triggers brain activity that drives behavior intended to address the situation.2
In a 2023 paper in the Journal of Applied Psychology, Tang et al. studied the social affiliation model in the context of people interacting with AI systems in the workplace. They intuited that because AI systems can convincingly mimic human interactions but can’t truly replicate the types of rich complementary social feedback we’ve evolved to detect—a smile, chuckle, shrug, furrowed brow, dilated pupil—the brain’s regulatory processes go searching for signals that aren’t there. In other words, an employee’s conversation with AI engenders instinctive emotional needs that AI can’t satiate.
The paper focused on two types of reactions to this AI-driven social deprivation: passive, maladaptive behavior (like increased withdrawal and loneliness) and active, adaptive behavior (like increased drive to seek out positive social connection). Across diverse industries and countries, the authors indeed found that increased interaction with “AI coworkers” correlated with increased loneliness—as well as with insomnia, after-work alcohol consumption, or both. More productively, the authors also found that for some participants, increased frequency of AI interaction correlated with increased prosocial behavior (such as helping coworkers).
But for employees of a certain disposition with few opportunities for person-to-person interaction—such as a remote worker, an individual contributor in a siloed role or someone with social anxiety—that increased drive for social connection might sometimes have only one available outlet: the always-on AI “coworker.” And LLMs are, in a fairly literal sense, trained to tell us what we want to hear. The prospect has an obvious appeal.
Anthropomorphizing an AI colleague might simply be a way to avoid the cognitive dissonance of turning to a computer program for human interaction.
To be clear, AI models—even the most cutting-edge LLMs—don’t have emotions or empathy, despite their ability to say empathetic things. Technically speaking, it’s a stretch to even say that a chatbot “responds” to your prompt: it’s more accurate (albeit less fun) to say that the chatbot probabilistically appends text to it. Autoregressive LLMs are simply trained to iteratively predict the next word in a sequence of text that begins with your input, applying linguistic patterns it learned from processing many millions of text samples, until it deems the sequence complete.
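That iterative predict-and-append loop can be illustrated with a toy model. The probability table below is invented for the example; a real LLM conditions on the entire preceding sequence with a neural network rather than just the previous token, but the control flow—predict, append, repeat until an end token—is the same.

```python
import random

# Toy "language model": a lookup table mapping the previous token to a
# probability distribution over possible next tokens.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 1.0},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"<end>": 1.0},
}

def generate(prompt_tokens, rng=random.Random(0)):
    tokens = list(prompt_tokens)
    # The autoregressive loop: sample a next token and append it,
    # until the model deems the sequence complete.
    while tokens[-1] != "<end>":
        dist = NEXT_TOKEN_PROBS[tokens[-1]]
        choices, weights = zip(*dist.items())
        tokens.append(rng.choices(choices, weights=weights)[0])
    return tokens
```

Nothing in this loop "responds" to anything: the program just extends a sequence according to learned statistics, which is the point of the distinction above.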
It would be reasonable to think that just increasing employees’ AI literacy will remove the risk of emotional involvement with AI. It would also be wrong.
As Harvard research has shown, a placebo can work even when you know it’s a placebo. For example, New York Times reporting from late last year explored how Silicon Valley insiders, including many who work in frontier AI research, have been increasingly turning to Anthropic’s Claude for “everything from legal advice to health coaching to makeshift therapy sessions.” Blake Lemoine, the Google engineer who famously claimed that Google’s LaMDA model was sentient in 2022, studied cognitive and computer science and worked in machine learning for years.
How is this possible? One broad explanation is that emotional reactions are processed intuitively, not logically, and when something is happening at the intuitive level it can bypass rational evaluation altogether. Technical expertise provides little immunity to this inherent bug in our code, because when we’re processing something intuitively—what the late Nobel laureate Daniel Kahneman called “System 1” or "fast" thinking—we often fail to enlist our technical knowledge at all. For example, as Kahneman describes in his seminal book Thinking, Fast and Slow, his research repeatedly demonstrated how “even statisticians [are] not good intuitive statisticians.”
With regard to chatbots, our attitudes toward AI are often shaped more by our “mental models” of it than by its actual performance. A 2023 MIT study found that “non-rational factors, such as superstitious thinking, significantly influence how individuals engage with AI systems.” For instance, the authors discovered a strong correlation between paranormal beliefs (like astrology) and likelihood to perceive even fake AI outputs as “valid, reliable, useful, and personalized."3
The paper's authors also allude to the techno-optimism of Silicon Valley as both a cause and result of this phenomenon. Likewise, Vox reporting on Blake Lemoine noted that Silicon Valley is fertile ground for obscure religious beliefs. The increasingly rapid pace of modern technological development might play a role here: in the famous words of Arthur C. Clarke, “any sufficiently advanced technology is indistinguishable from magic.”
Further complicating things, AI literacy might have an adverse effect on AI adoption: research from earlier this year suggests that knowing less about AI makes people more open to having it in their lives. The paper’s authors posit that people with lower AI literacy are more likely to see AI as magical or awe-inspiring, and that “efforts to demystify AI may inadvertently reduce its appeal.” Organizations might therefore face a tension between maximizing return on their investment in generative AI tools and minimizing emotional fallout from constant use of those tools.
Pointedly, the study found this link between low AI literacy and high AI enthusiasm to be strongest for “using AI tools in areas people associate with human traits, like providing emotional support or counseling.” When dealing with tasks without emotional connotations, such as analyzing test results, the pattern flipped.
Armed with an understanding of how and why the ELIZA effect occurs, organizations can proactively mitigate these risks without undermining employees’ enthusiasm to engage with their generative AI tools.
As Murray Shanahan, Principal Scientist at Google DeepMind, articulated in a widely cited 2022 essay, the way we talk about LLMs matters—not only in scientific papers, but in discussions with policy makers, media and employees. “The careless use of philosophically loaded terms like 'believes' and 'thinks' is especially problematic,” he says, “because such terms obfuscate mechanism and actively encourage anthropomorphism."
As Shanahan notes, it’s normal and natural to use anthropomorphic language to talk about technology. GPS thinks we’re on the highway overpass above us. The email server isn’t talking to the network. My phone wants me to update its OS. These are examples of what philosopher Daniel Dennett calls the intentional stance, and in most cases they’re simply useful (and harmless) figures of speech. But when it comes to LLMs, Shanahan warns, “things can get a little blurry.” For AI systems that so convincingly mimic the most uniquely human of behaviors—language—the temptation to take these figures of speech literally is “almost overwhelming.”
Tutorials, onboarding materials and company communications should therefore be very deliberate in the language they use to describe the features, function and purpose of AI tools to employees. Enterprises should avoid unnecessary anthropomorphizing at every turn. As research into the placebo effect of AI has shown, users’ perception of AI is often shaped more by how it’s described rather than by its true capabilities.4
Making AI models look, sound and feel more human can increase trust5 and engagement,6 but it can also increase risk. In the system card for GPT-4o—which can generate realistic humanlike “speech”—OpenAI noted that “generation of content through a humanlike, high-fidelity voice may exacerbate [anthropomorphization] issues, leading to increasingly miscalibrated trust.” During red teaming and internal testing, OpenAI “observed users using language that might indicate forming connections with the model."7
Even without the elevated risk of emotional attachment, enterprises should be aware that anthropomorphization is a double-edged sword. A 2022 study published in the Journal of Marketing found that anthropomorphic chatbots reduced customer satisfaction and opinion of the company: essentially, customers had higher expectations for humanlike chatbots and greater disappointment when they didn’t deliver human-tier service.8 A series of 2024 studies found that feedback from an anthropomorphized “AI coach” was perceived as less helpful than identical feedback from a non-anthropomorphized AI coach that simply highlighted the role of human researchers in its creation.
People might fall in love with a realistic avatar. They (generally) won’t fall in love with a talking paperclip.
The full-blown ELIZA effect does not happen instantaneously. As with most emotional matters, the phenomenon takes hold progressively. Implementing a means to detect and act upon warning signs can give enterprises the ability to intercept and shut down issues before they blossom into true problems.
Guardrail models are one obvious avenue for such a detection system: they monitor inputs and outputs for language indicative of predetermined risks and trigger the model to act accordingly. A guardrail model trained to detect and prevent exchanges from veering into emotional territory can help avoid things going too far. But conventional guardrail models alone might be an incomplete solution, because not every problematic interaction entails overt emotion and romance.
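The monitor-and-redirect control flow of such a system can be sketched as follows. Production guardrails are typically trained classifier models; here a few regex patterns stand in for the detector purely to illustrate the mechanism, and the patterns and redirect message are invented for the example.

```python
import re

# Stand-in "detector": in practice this would be a trained guardrail model,
# not a handful of regular expressions.
EMOTIONAL_PATTERNS = [
    re.compile(r"\bi (love|miss|need) you\b", re.IGNORECASE),
    re.compile(r"\bare you my (friend|soulmate)\b", re.IGNORECASE),
]
REDIRECT = ("I'm an AI assistant, so I can't form personal relationships. "
            "Is there a work task I can help with?")

def guard(user_input: str, model_reply: str) -> str:
    # Screen both sides of the exchange; either one can veer
    # into emotional territory.
    for text in (user_input, model_reply):
        if any(p.search(text) for p in EMOTIONAL_PATTERNS):
            return REDIRECT
    return model_reply
```

The limitation noted above shows up directly in this sketch: a detector, whether regex or model, can only flag the categories of risk it was built to recognize.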
Even employees with a fully realistic understanding of AI can sometimes get a bit too personal in AI conversations. That's a problem too, because many enterprises store and analyze interactions with AI systems to understand and optimize how the tools are being used by employees or customers. This can put organizations in the uncomfortable position of being provided sensitive personal information that, for legal or moral reasons, they’d rather not handle—information that is otherwise too specific and seemingly innocuous to train a guardrail model to detect.
Understanding this, IBM is working on a “large language model privacy preservation system” designed to prevent users from oversharing with AI models. The system would scan inputs for personally identifying information, classify the offending prompt (to understand its intent), and then substitute the sensitive info with generic placeholders. Only an anonymized version of the user’s input would be stored for future training.
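The placeholder-substitution step could look something like the sketch below. This is an illustration of the general redaction technique, not IBM's implementation; the patterns and placeholder names are assumptions for the example, and a real system would also handle the classification step described above.

```python
import re

# Each pattern maps a category of personally identifying information to a
# generic placeholder, so only an anonymized prompt is retained.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "<PHONE>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def anonymize(prompt: str) -> str:
    # Substitute each detected span with its category placeholder.
    for pattern, placeholder in PII_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

For instance, `anonymize("Email jane.doe@example.com or call 555-123-4567")` yields "Email &lt;EMAIL&gt; or call &lt;PHONE&gt;", which is the form that would be stored for future training.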
The 2023 Journal of Applied Psychology study mentioned above is one of many indicating a link between frequency or length of chatbot interactions and loneliness or problematic use. The implications are relatively straightforward: strategically limiting use can limit emotional risks. Executed correctly, it can do so without curtailing productivity, and can even potentially lower inference costs.
A more indirect method would be to periodically disrupt usage patterns, preventing users from settling into too deep a groove. For example, MIT research notes that interventions such as an imposed “cool-off” period can help “slow down quick judgments and encourage more thoughtful engagement.”6 In other words, such interventions might gently nudge users away from impulsive System 1 thinking and toward more deliberate System 2 thinking.
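One possible shape for an imposed cool-off is sketched below. The thresholds are arbitrary illustrations, not recommendations, and a real deployment would surface a friendly message rather than silently blocking requests.

```python
import time

COOL_OFF_AFTER = 20      # exchanges allowed before a pause is imposed
COOL_OFF_SECONDS = 300   # length of the imposed pause

class SessionLimiter:
    """Tracks exchanges per session and imposes a periodic cool-off."""

    def __init__(self):
        self.count = 0
        self.cooling_until = 0.0

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if now < self.cooling_until:
            return False  # still in the cool-off window
        self.count += 1
        if self.count >= COOL_OFF_AFTER:
            # Trigger the pause and reset the exchange counter.
            self.cooling_until = now + COOL_OFF_SECONDS
            self.count = 0
        return True
```

The design choice worth noting is that the interruption is periodic rather than punitive: usage resumes automatically after the window, with the pattern-break itself doing the nudging.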
Periodically disrupting the patterns of the AI system itself, such as by altering its persona, might also help discourage problematic usage patterns. A New York Times article about a woman in love with ChatGPT, spending many hours each day on the platform, notes that whenever she maxes out the model’s context window, the “personality” and memory of her AI “boyfriend” is partially reset. Whenever this happens, she grieves—but then she “abstains from ChatGPT for a few days.”
In a 2024 paper exploring the fallout from a major app update at Replika AI, a chatbot companionship service, the authors argued that “identity continuity is crucial for developing and maintaining a relationship with an AI companion."9 The contrapositive implication of that finding would be that disrupting a chatbot’s identity continuity might be crucial to avoiding emotional attachment to an AI companion.
Perhaps the best way to avoid employees using AI to fill an emotional void is to reduce the potential for that void to exist at all. Generative AI can replace tedious everyday work, but it’s no replacement for the everyday camaraderie of human coworkers.
For instance, a study of companion chatbot usage patterns and their relationship to loneliness found a significant correlation between frequency of chatbot usage and increased loneliness or social withdrawal—but not for users with strong real-world social networks. Not only did users with strong social networks generally interact less with chatbots, but they also experienced far fewer issues than lighter users without similar social support. They typically leveraged chatbots for practical purposes and recreation, rather than as relationship substitutes.10 Such findings are consistent with the parasocial compensation hypothesis, which states that lonely, isolated and socially anxious individuals are more likely to engage in parasocial “relationships” with celebrities or influencers.11
Fortunately, this is an instance where AI can be the solution to its own problems. If your company’s generative AI solutions are delivering the productivity gains they have the potential to provide, there should be no shortage of time or money for some pizza parties.
1. Computer Power and Human Reason, Weizenbaum, 1976
2. "Affiliation Motivation in Everyday Experience: A Theoretical Comparison," Journal of Personality and Social Psychology 70(3):513-522, 1996
3. "Super-intelligence or Superstition? Exploring Psychological Factors Influencing Belief in AI Predictions about Personal Behavior," arXiv, 19 December 2024
4. "The Placebo Effect of Artificial Intelligence in Human-Computer Interaction," ACM Transactions on Computer-Human Interaction Volume 29 (Issue 6), 11 January 2023
5. "The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle," Journal of Experimental Social Psychology Volume 52, May 2014
6. "Anthropomorphism in artificial intelligence: a game-changer for brand marketing," Future Business Journal Volume 11, 2025
7. "GPT-4o System Card," OpenAI, 8 August 2024
8. "Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions," Journal of Marketing Volume 86, 2022
9. "Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships," Harvard Business School Working Paper Series, 2024
10. "Chatbot Companionship: A Mixed-Methods Study of Companion Chatbot Usage Patterns and Their Relationship to Loneliness in Active Users," arXiv, 18 December 2024
11. "Parasocial Compensation Hypothesis: Predictors of Using Parasocial Relationships to Compensate for Real-Life Interaction," Imagination, Cognition and Personality Volume 35, August 2015