How to stop AI from seeming conscious


By Sascha Brodsky, Staff Writer, IBM


Artificial intelligence does not think or feel, but it is starting to look like it does.

That blur has set off a fierce debate about how to keep people from mistaking software for a mind. Microsoft’s AI Chief, Mustafa Suleyman, recently warned that “seemingly conscious” systems could surface within a few years and pull users into unhealthy attachments.

The concern is not just hypothetical. When OpenAI retired its GPT-4 model earlier this month, users lamented the loss in deeply personal terms. Their sense of attachment to a piece of software showed how easily natural-sounding conversation can blur into something more. Suleyman said systems that appear conscious could soon lead people to believe they have feelings, intentions or even a sense of self. He argues this illusion could distort social norms, trigger emotional attachments and even spark calls for AI rights.

Researchers say the danger is not real awareness on the part of models, but the illusion of it. The remedy lies in design, training and reminding people that the voice on the other side is still a machine.

“It does not really matter if the system is conscious or not,” Francesca Rossi, IBM Global Leader for Responsible AI and AI Governance, told IBM Think in an interview. “It is enough that it is perceived as being conscious to have an impact on people using it.”

Kunal Sawarkar, Distinguished Engineer for Generative AI and Chief Data Scientist at IBM, told IBM Think that the real concern isn’t consciousness itself but how people respond to the illusion of it. “AI isn’t conscious,” he said, “but people are already treating it like a buddy.”

Designing products that resist the illusion

One way to limit confusion about whether AI is conscious is through design, by shaping systems so they appear as assistants rather than companions, Rossi said. That can mean avoiding choices that make the software look or sound like a person. Should a chatbot speak in the first person? Should it apologize, express empathy or appear as an animated avatar?

Suleyman has urged developers to strip out language that implies personhood, such as “I think” or “I feel.” Rossi believes the idea has merit, although she cautioned that the illusion of personhood can emerge unintentionally.

“Each one of those characteristics—natural language, memory, empathy—can be embedded for reasons of usefulness,” she said. “But when they come together, they create a seemingly conscious AI.”
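To make the idea concrete, the sketch below is purely illustrative and not drawn from any particular product: a simple post-processing step that rewords personhood-implying phrases such as “I think” or “I feel” before a reply reaches the user. The phrase list and replacements are hypothetical, and a production system would need something far more careful than string substitution.

```python
import re

# Illustrative sketch only: a post-processing filter that rewords
# first-person phrasing before a reply is shown to the user.
# The patterns and replacements below are hypothetical examples.
PERSONHOOD_PATTERNS = [
    (re.compile(r"\bI think\b", re.IGNORECASE), "It appears"),
    (re.compile(r"\bI feel\b", re.IGNORECASE), "It appears"),
    (re.compile(r"\bI believe\b", re.IGNORECASE), "It appears"),
]

def depersonalize(reply: str) -> str:
    """Replace personhood-implying phrases in a generated reply."""
    for pattern, replacement in PERSONHOOD_PATTERNS:
        reply = pattern.sub(replacement, reply)
    return reply

print(depersonalize("I think this plan will work."))
# -> "It appears this plan will work."
```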

The paradox is that companies measure the success of their products through engagement, and more human-like interactions tend to keep people coming back. But what makes the technology more usable can also make it easier to misinterpret.


The design challenge of demystifying AI that seems human is not new. In the 1960s, the MIT computer scientist Joseph Weizenbaum created ELIZA, an early program that mimicked a psychotherapist. Even though ELIZA relied only on simple pattern-matching, many users reported feeling understood. Weizenbaum himself was startled by the intensity of those reactions, and he spent much of his later career warning about the dangers of anthropomorphizing software.

Today’s systems are vastly more sophisticated. Where ELIZA used canned phrases, modern language models can generate long, context-aware responses, adopt emotional tones and remember conversations across sessions. Digital avatars add gestures and expressions. Each advance makes the illusion more powerful.

Education can help, too, Rossi said, by reminding users that no matter how fluent the words, they are not coming from a mind. At IBM, she added, AI is deployed in professional settings where training and onboarding help reinforce that distinction.

“Our solutions are for specific purposes, like helping someone in a bank or government agency do their job better,” she said. “We can train users to understand that the purpose is not to replace a human collaborator, but to help them with a certain task.”

Consumer chatbots are different. They reach billions of people, often with little guidance beyond a terms-of-service click. “It is not that easy to train everybody,” Rossi said. “People use it for anything—health recommendations, mental health, advice on life challenges.”

Some researchers propose adding reminders directly into chatbot interfaces, such as labels within chat windows or pop-up notices that clarify that the user is interacting with software. Others have suggested limiting memory across sessions so that chatbots are less likely to appear as enduring personas with lasting awareness.
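The following sketch shows how those two proposals might look in code. It is hypothetical and not tied to any specific chatbot: a recurring reminder label is attached to every reply, and the conversation history lives only in a per-session object that is discarded when the session ends, so no memory carries over. The `generate` callable is a stand-in for whatever model call a product would actually use.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a hypothetical chat wrapper that surfaces a
# "you are talking to software" reminder and keeps no memory across sessions.
REMINDER = "Reminder: you are chatting with an AI system, not a person."

@dataclass
class Session:
    """Messages for a single session; nothing is persisted afterwards."""
    messages: list = field(default_factory=list)

def start_session() -> Session:
    session = Session()
    session.messages.append({"role": "system", "content": REMINDER})
    return session

def respond(session: Session, user_text: str, generate) -> str:
    """`generate` is a placeholder for whatever model call a product uses."""
    session.messages.append({"role": "user", "content": user_text})
    reply = generate(session.messages)
    session.messages.append({"role": "assistant", "content": reply})
    # Surface the reminder with every reply, not just once at sign-up.
    return f"{reply}\n\n({REMINDER})"

if __name__ == "__main__":
    # Dummy generator so the sketch runs on its own.
    echo = lambda messages: "Here is a draft plan for your week."
    session = start_session()
    print(respond(session, "Can you help me plan my week?", echo))
    # When the session ends, the Session object is simply discarded, so the
    # assistant never appears as an enduring persona across conversations.
```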

Rossi said AI systems present themselves as consistent personalities, making it easier for users to form emotional bonds that feel real even if the connection is not. She pointed to the reaction when GPT-4 was phased out, noting that some users responded as if they had lost a trusted companion. “People said, ‘I don’t want to lose this model because it helped me in difficult life situations.’ They felt as if they had lost a friend,” she said.

Psychologists warn that AI companions can deepen isolation, offering comfort in the moment but not substituting for real human connection. Suleyman has gone further, warning that some users could even push for AI citizenship, the idea that highly advanced systems might deserve legal rights or social recognition as entities.

Rossi dismissed that idea, calling it a distraction from the real safeguards the industry needs. “These machines should not be thought of as human beings,” she said. “They are very useful to human beings, but they are not human.”

If the ethical debate veers toward rights, Rossi said, the industry will lose sight of the practical safeguards it needs to put in place now. “Consciousness, to me, is not even a question worth addressing scientifically. Intelligence can be tested from the outside,” she added. “Consciousness cannot. What matters is the perception.”

However, her view echoes Suleyman’s broader point: the risk is not that AI develops a mind, but that people become convinced it has.

The conversation also connects to a principle that Rossi says guides IBM’s work: the company’s first principle of AI ethics is that “AI must augment human intelligence,” not replace it. “This implies that AI is not like a human being,” she said. “It is just an assistant, or an agent.” She extends that view to the larger purpose of technology. “Humanity should build and use technology to advance, grow, become wiser and thrive. AI being perceived as conscious does not seem to lead us there.”

A fragile boundary

The boundary between a helpful tool and a companion is proving more fragile than many expected. Suleyman has predicted that seemingly conscious systems will emerge within a few years.

Rossi suggested two parallel paths. Developers must design chatbots to emphasize utility over persona. Users must learn to see AI as software, not as a companion.

Otherwise, the cycle of attachment and disappointment will repeat. Each time a model is deprecated or updated, people will grieve as if they have lost a relationship. “If you get attached to a machine and then it is deprecated, you feel it like mourning the loss of a friend, which you should not,” she said.

For Rossi, the responsibility lies with all AI stakeholders, including both builders and users. Developers must resist pushing human-like qualities too far. People must learn to treat AI as what it is: code that can assist with tasks but cannot think or feel.

“These are machines,” she said. “They can be very useful and even helpful in personal challenges, but they are machines. And people must never forget that.”
