This article was featured in the Think newsletter.
AI is learning to read the room. When OpenAI swapped GPT-4o for a more advanced but colder model, users pushed back. They missed the warmth. Some even lamented that it felt like they had lost a friend. The response was loud enough that OpenAI partially reversed course. It was a reminder: people are starting to expect their AI to feel human.
That expectation is moving into the workplace. Emotionally responsive systems are showing up in meetings, chat apps and performance tools. Companies are testing AI that flags stress, frustration and disengagement before people say a word. Proponents say that emotionally responsive AI could make teams more connected and communication more intuitive. Critics warn that it could cross lines, fake empathy and misfire in high-stakes matters like hiring, health or mental well-being.
At Purdue University, computer science professor Aniket Bera and his lab are developing AI agents that can infer emotion through body language, proximity and tone. “We have developed computational models that perceive human behaviors, gestures and proxemics to infer emotional and psychological states in real time,” Bera said in an interview with IBM Think. “Emotionally responsive AI has a growing role to play in augmenting, not replacing, human empathy.”
According to Bera, these models are not intended to simulate emotion. “Emotion-aware AI should act more like a mirror than a mask,” he said. “It is about reflecting human emotion, not impersonating it.”
Bera’s team originally built emotion-sensing systems to flag burnout in high-pressure environments. Now they are testing versions that can join office meetings. In one case, a virtual assistant tracks signals like slumped posture or drifting eye contact. If it senses a team is tuning out, it might suggest slowing down the pace or switching tactics to keep people engaged.
In another, an AI tool could flag signs of emotional exhaustion across multiple teams and alert a manager before burnout spreads. “In enterprise settings, these systems can help decode invisible signals such as stress, disengagement or confusion and offer adaptive support,” Bera said.
The systems are particularly relevant in hybrid or remote workplaces, where emotional signals are harder to pick up. “People may not realize when they are losing connection,” Bera said. “The AI can pick that up faster than a human facilitator might.”
The challenge, he said, is ensuring that the system reflects rather than manipulates. “The workplace of the future will not just be efficient,” Bera said. “It will be emotionally intelligent. But only if we build these systems with transparency, not performance.”
Nick Haber, an assistant professor at the Stanford Graduate School of Education, is applying similar principles to everyday team communication. In his lab, emotionally aware AI agents are being inserted into workplace Slack conversations. The goal is to help surface neglected viewpoints, nudge inclusion and bridge conversational breakdowns.
“Let’s say you wanted to make conversations go more smoothly, make sure that important opinions do not get left out,” Haber told IBM Think in an interview. “I think the possibilities are endless.”
The systems are not meant to police tone or emotions, Haber said. Instead, they monitor the rhythm of dialogue and highlight moments where someone’s message may have been ignored, or where a participant may have disengaged. “We are not talking about automating interaction,” Haber said. “We are trying to support it.”
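One signal this kind of monitoring could use is whether a message ever got a response. The sketch below is hypothetical: the `Message` shape and the "no reply within the next few messages" heuristic are invented for illustration, not taken from Haber's system.

```python
# Hypothetical sketch: flag messages that may have been ignored. A message
# counts as "possibly ignored" if, within the next `window` messages, no one
# else replies in its thread or mentions its author. Data shape and heuristic
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Message:
    author: str
    text: str
    thread_id: int  # messages sharing a thread_id are replies to each other

def possibly_ignored(messages: list[Message], window: int = 3) -> list[Message]:
    flagged = []
    for i, msg in enumerate(messages):
        followers = messages[i + 1 : i + 1 + window]
        answered = any(
            f.thread_id == msg.thread_id and f.author != msg.author
            for f in followers
        ) or any(f"@{msg.author}" in f.text for f in followers)
        if followers and not answered:
            flagged.append(msg)
    return flagged
```

A nudging agent could run this over a channel's recent history and privately suggest that someone circle back to the flagged messages, supporting the conversation rather than policing it.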
Haber came to this work through earlier projects focused on autism and emotion detection. He was part of the team that developed the Autism Glass Project, a wearable tool that uses machine learning to help children recognize emotions in real time.
“There is a basic philosophy behind this project,” he said. “You use AI to help connect people. It is not a solo learning experience, but rather, it intervenes to build connection.”
Haber said the same idea guides his current work on emotionally aware AI tools for the workplace. In office settings, this could involve helping teams communicate more clearly, particularly when individuals from diverse cultural backgrounds or emotional processing styles are involved. “We need to engage our affective selves in so many ways in organizations,” Haber said. “This is perhaps harder to get right than making a coding agent, but likewise, I think AI can serve to build human connection, collaboration and cooperation in ways it is uniquely suited to.”
Haber sees potential for affective AI in onboarding, brainstorming and even conflict resolution.
“Right now, a lot of workplace tools are designed around productivity,” he said. “But communication, reflection and emotional intelligence are just as critical. We are not building that into most systems yet.”
At the University of California, Santa Cruz, Magy Seif El-Nasr, professor and chair of the Computational Media department, has been exploring the intersection of emotional regulation and team success. Her lab studies how AI systems can help groups become more aware of their emotional states and develop strategies for responding to them constructively.
“We have been measuring the effect of emotion regulation on a team’s performance and success through teamwork,” she said in an interview with IBM Think. “We found many links to positive emotion regulation and successful performance.”
She said her research has looked at emotionally responsive tools in settings ranging from classrooms to corporate teams. When used well, she said, the systems can boost communication and lead to better outcomes.
“Given our research and results within multiple projects, there can be a positive role for AI to modulate emotions through empathy and affective modeling,” she said. “But more work is needed, given the changing perceptions and trust in AI and the many challenges AI faces [around] privacy, ethical use and limitations of data and hallucinations.”
She pointed out that today’s workforce, especially younger employees, is navigating emotional environments shaped by digital life. Referencing Jonathan Haidt’s The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness, she said anxiety and emotional dysregulation are now major workplace factors.
“We see such anxiety in our students within our classes every day,” she said. “Emotionally responsive AI, given the accessibility of AI and its use in everyday applications, is going to be very important to modulate such anxiety.”
In her view, emotional intelligence will become not only a soft skill, but a technical requirement. “We are not going back to a world without emotionally responsive AI,” Seif El-Nasr said. “So we need to get it right.”
The promise of emotionally aware systems comes with risks. Bera warned that many emotionally responsive interfaces are essentially simulations, and users may not always realize what is real and what is not.
“One major risk is empathy theater,” he said. “That is when an AI system mimics emotion convincingly without actually understanding or caring. Just because a machine sounds empathetic does not mean it is.”
The gap between performance and intention can lead to harm, especially when people place trust in systems that are not designed to offer care. Bera is particularly concerned about affective systems being used in hiring, feedback or mental health contexts without clear guardrails. “We must ask: are we amplifying human understanding or automating misunderstanding?” he said.
Signal misinterpretation is another danger, Bera said, especially because emotional expressions vary so widely across cultures and individuals. “There is the concern of misread signals,” he said, “especially across cultural or neurodiverse expressions of emotion.”
Haber raised a separate concern: what happens when AI systems are too agreeable? “It is important to have a conversation partner that can engage in sometimes seemingly adversarial ways,” he said. “If I am talking with a chatbot about how I treated someone horribly and am trying to justify my actions, it can be really harmful if it tells me I was totally justified in doing that.”
Many large language models (LLMs) are trained to be affirming, not challenging, Haber said. The result is what he described as “emotional echo chambers,” where harmful ideas get reinforced through AI feedback loops.
“We can become uncritical, unreflective and more isolated,” he said. “There has been a lot of talk about how large language model-based chatbots are sycophantic. That is just a small part of a much larger and more complex issue.”
Seif El-Nasr added that emotionally responsive systems can create unintended psychological dependencies. “These include psychological issues such as overreliance, loneliness and anxiety,” she said. “They can also lead to isolation from society.”
Privacy is another concern. Seif El-Nasr cited applications like Replika, which create intimate emotional experiences but operate with unclear data protections. In her view, models that deal with human emotion must be developed with interdisciplinary rigor. “Such systems need to be developed with extra care,” she said. “That means grounding them in user research and involving social scientists, not just technologists.”
Bera said his lab is focused on designing frameworks that can be audited and explained. “We address these concerns in our lab by grounding emotional inference in multimodal explainable models and coupling them with rigorous ethical frameworks,” he said. “It is not just about building better algorithms. It is about building trustworthy ones.”