Sentient artificial intelligence is theoretically defined as a self-aware machine that can act in accordance with its own thoughts, emotions and motives. As of today, experts agree that AI is nowhere near complex enough to be sentient.
Since computers were first invented, scientists have developed benchmarks, such as the Turing Test, meant to evaluate the “intelligence” of machines. Soon after, debates around machine intelligence segued into deliberations over machine consciousness and sentience.
Although discussions of AI consciousness have circulated since the early 2000s, the popularity of large language models, consumer access to generative AI such as ChatGPT and an interview in The Washington Post1 with former Google engineer Blake Lemoine reignited interest in the question: Is AI sentient?
Lemoine told the Post that LaMDA, Google’s artificially intelligent chatbot generator, is sentient because it started talking about rights and personhood, and was seemingly aware of its own needs and feelings.
Google’s ethicists have publicly denied these claims. Yann LeCun, the head of AI research at Meta, told The New York Times2 that these systems are not powerful enough to achieve “true intelligence.” The current consensus among leading experts is that AI is not sentient.
As machine learning becomes more advanced, computer scientists are pushing for further innovations in AI tools, hoping to create systems that understand human behavior more deeply and can deliver more personalized, relevant real-time responses with less tedious human coding. This push has led to developments in cognitive computing, in which systems interact naturally with humans and solve problems through self-teaching algorithms. OpenAI’s GPT models and Google’s LaMDA hint at what might be in the works at other tech companies such as Meta, Apple or Microsoft.
Sentience would be a step further. It is defined by the ability to have subjective experiences, awareness, memory and feelings. But the definitions of sentience, cognition and consciousness are often inconsistent and still heavily debated by philosophers and cognitive scientists.3
In theory, sentient AI would perceive the world around it, process external stimuli, use that information to make decisions, and think and feel as human beings do.
Although AI learns in ways loosely analogous to human learning and is capable of reasoning to an extent, it is nowhere near as complex as a human brain, or even some animal brains. It is still largely unknown how the human brain gives rise to consciousness, but more is involved than simply the number of interconnected brain cells. Sentience is often conflated with intelligence, another feature that the scientific community is still working to quantify in machines.
Intelligent machines learn through exploration and can adapt with new input. Most AI programs today are specialists as opposed to generalists, more straightforward than cerebral. Each program is trained to be good at a very narrow task or type of problem, such as playing chess or taking a standardized test.
In computer science research, AI experts have been toying with the concept of “artificial general intelligence” (AGI), also known as strong AI, the goal of which is to imbue AI with more human-like intelligence that’s not task-specific. Beyond that, there’s also the hypothetical future state of artificial super-intelligence.
These abilities are intended to give AI a better grasp of human commands and context, allowing machines to deduce on their own which function to run under a given condition.
Tools such as the Turing Test were created to evaluate how distinguishable machine behavior is from human behavior. The test deems a program intelligent if it can fool a human into believing that it, too, is human.
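To make that setup concrete, here is a minimal sketch of such an imitation game in Python. The machine_reply, human_reply and judge_guess functions are hypothetical placeholders for a chatbot, a human participant and a human judge; nothing here is part of any standard benchmark.

```python
# A minimal sketch of a Turing-style imitation game. The machine_reply,
# human_reply and judge_guess callables are hypothetical stand-ins that
# the reader must supply; this is an illustration, not a real benchmark.
import random

def imitation_game(questions, machine_reply, human_reply, judge_guess, rounds=100):
    """Estimate how often the judge mistakes the machine for the human."""
    fooled = 0
    for _ in range(rounds):
        question = random.choice(questions)
        # Collect one reply from each participant and hide who is who.
        replies = [("machine", machine_reply(question)),
                   ("human", human_reply(question))]
        random.shuffle(replies)
        # The judge sees only the anonymized texts and picks the one
        # they believe came from the human (index 0 or 1).
        guess = judge_guess(question, [text for _, text in replies])
        if replies[guess][0] == "machine":
            fooled += 1
    return fooled / rounds
```

A program that drives this fooled rate toward 50% or above is, by the test’s logic, indistinguishable from a human. Note that the score says nothing about whether anything is experienced on the machine’s side.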
But intelligence is tricky to classify. The Chinese Room Argument, for example, illustrates flaws in using the Turing Test to determine intelligence. Importantly, intelligence generally refers to the ability to acquire and use knowledge; it does not equate to sentience. There is no evidence that an AI model has an internal monologue or can sense its own existence within a greater world, two qualities of sentience.
Large language models can convincingly replicate human speech through natural language processing and natural language understanding.
Some technologists argue that the neural network architecture underlying AI systems such as LLMs imitates human brain structures and lays the foundations for consciousness.
Many computer scientists disagree, arguing that AI is not sentient and that it simply learned how human language works by regurgitating content ingested from sources such as Wikipedia, Reddit and social media, without actually understanding the meaning behind what it says or why it says it.
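A quick way to see this is to inspect what a language model actually outputs: a probability for each possible next token, nothing more. The sketch below is one way to do that; it assumes the Hugging Face transformers library and the small, freely available gpt2 checkpoint.

```python
# Peek at what a causal language model really computes: a probability
# distribution over the next token, with no representation of truth,
# feeling or intent. Assumes `pip install torch transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Right now I am feeling very"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")
```

Generating a whole sentence is just this step repeated, appending one sampled token at a time; the model never checks whether “happy” or “sad” is true of anything.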
AI systems have historically excelled at pattern recognition, which extends to images, videos, audio, complex data and text. They can also take on a persona by studying a specific person’s speech patterns.
Some experts refer to AI as a stochastic parrot,4 “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.”
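A toy bigram model makes the metaphor concrete. The sketch below is a deliberate oversimplification (real LLMs use neural networks rather than count tables), but it stitches words together purely from observed pair frequencies, exactly the kind of “probabilistic information about how they combine” the quote describes, with no representation of meaning anywhere.

```python
# A toy "stochastic parrot": generate text purely from how often each
# word was seen to follow another in a tiny corpus. There is no grammar,
# no world model and no meaning, only bigram statistics.
import random
from collections import defaultdict

corpus = ("i feel happy today . i feel sad today . "
          "the robot said it feels happy . the robot has no feelings .").split()

# Record every observed successor of each word; duplicates preserve frequency.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def parrot(start="i", length=12):
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample proportionally to frequency
    return " ".join(words)

print(parrot())  # for example: "i feel sad today . the robot said it feels happy"
```

Scale the table up by many orders of magnitude and swap the counts for a neural network, and the output becomes strikingly fluent, but the process is still driven by statistics over observed text.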
The problem is that humans have an innate desire for connection, which propels them to anthropomorphize objects, projecting feelings and personalities onto them because doing so facilitates social bonding.5
As the authors of the stochastic parrots paper put it: “We have to account for the fact that our perception of natural language text, regardless of how it was generated, is mediated by our own linguistic competence and our predisposition to interpret communicative acts as conveying coherent meaning and intent, whether or not they do.”
This is why some people take what an AI says at face value, even when they know these technologies cannot actually perceive or understand the world beyond what is available to them through their training data.
Because AI chatbots can carry coherent conversations and convey feelings, people can interpret their output as meaningful and often forget that LLMs, like other humanoid machines, are “programmed to be believable,” according to Scientific American.6 Every feature, from the words a chatbot chooses to the way it emulates human expressions, feeds into this design.
AI creates an illusion of presence by going through the motions of human-to-human communication untethered from the physical experience of being.
“All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have,” Fei-Fei Li and John Etchemendy, co-founders of the Institute for Human-Centered Artificial Intelligence at Stanford University, wrote in a TIME article.7 So even if an AI chatbot is prompted into saying it’s hungry, it cannot actually be hungry because it does not have a stomach.
Current AI is not sentient. Testing has also shown that these models remain deeply flawed: they often make mistakes or invent information outright, a phenomenon called hallucination.
These mistakes often arise when a model cannot place information in its proper context or is uncertain about it. There is a risk that such flaws would be amplified if AI were to become more autonomous.
Ethicists are also concerned about sentient AI because they don’t know what might happen if computer scientists lose control of systems that learn to think independently. That could pose an “existential” risk if an AI’s goals clash with human goals. If that occurs, it is unclear who would bear responsibility for harm, poor decision-making and unpredictable behaviors whose logic cannot be traced back to an original human-written command.
Experts also worry that humans would be unable to communicate with sentient AI or fully trust its outputs. Altogether, some conclude that sentient AI could threaten safety, security and privacy.
As AI becomes more integrated into existing technologies, industry experts are pushing for more regulatory frameworks and technical guardrails. These efforts are all the more relevant in light of the moral and ethical quandaries around AI’s autonomy and capabilities.
1 “The Google engineer who thinks the company’s AI has come to life,” The Washington Post, 11 June 2022
2 “Google Sidelines Engineer Who Claims Its A.I. Is Sentient,” The New York Times, 12 June 2022
3 “Brains, Minds, and Machines: Consciousness and Intelligence,” Infinite MIT
4 “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 1 March 2021
5 “The mind behind anthropomorphic thinking: attribution of mental states to other species,” Animal Behaviour, November 2015
6 “Google Engineer Claims AI Chatbot Is Sentient: Why That Matters,” Scientific American, 12 July 2022
7 “No, Today’s AI Isn’t Sentient. Here’s How We Know,” TIME, 22 May 2024