Sentient artificial intelligence is defined theoretically as self-aware machines that can act in accordance with their own thoughts, emotions and motives. As of today, experts agree that AI is nowhere near complex enough to be sentient.
Since computers were first invented, scientists have developed benchmarks, such as the Turing Test, meant to evaluate the “intelligence” of machines. Soon after, debates around machine intelligence segued into deliberations over their consciousness or sentience.
Although discussions of AI consciousness have been floating around since the early 2000s, the popularity of large language models, consumer access to generative AI such as ChatGPT and an interview in The Washington Post1 with former Google engineer Blake Lemoine reignited interest in the question: Is AI sentient?
Lemoine told the Post that LaMDA, Google’s artificially intelligent chatbot generator, is sentient because it started talking about rights and personhood, and was seemingly aware of its own needs and feelings.
Google’s ethicists have publicly denied these claims. Yann LeCun, the head of AI research at Meta, told The New York Times2 that these systems are not powerful enough to achieve “true intelligence.” The current consensus among leading experts is that AI is not sentient.
As machine learning becomes more advanced, computer scientists are pushing for further innovations in AI tools, hoping to create systems that understand human behavior more deeply and deliver more personalized, relevant real-time responses with less tedious human coding. This has led to developments in cognitive computing, where systems interact naturally with humans and solve problems through self-teaching algorithms. OpenAI’s GPT models and Google’s LaMDA are an indication of what might be in the works at other tech companies such as Meta, Apple or Microsoft.
Sentience would be a step further. It is defined by the ability to have subjective experiences, awareness, memory and feelings. But the definitions of sentience, cognition and consciousness are often inconsistent and still heavily debated3 by philosophers and cognitive scientists.
In theory, sentient AI would perceive the world around it, process external stimuli, use that information to make decisions, and think and feel much as human beings do.
Although AI learns as humans learn and is capable of reasoning to an extent, it’s not nearly as complex as humans or even some animal brains. It’s still relatively unknown how the human brain gives rise to consciousness, but there’s more involved than just the number of brain cells connected together. Often, sentience is conflated with intelligence, which is another feature that the scientific community is still working to quantify in machines.
Intelligent machines learn through exploration and can adapt with new input. Most AI programs today are specialists as opposed to generalists, more straightforward than cerebral. Each program is trained to be good at a very narrow task or type of problem, such as playing chess or taking a standardized test.
In computer science research, AI experts have been toying with the concept of “artificial general intelligence” (AGI), also known as strong AI, the goal of which is to imbue AI with more human-like intelligence that’s not task-specific. Beyond that, there’s also the hypothetical future state of artificial super-intelligence.
These abilities are intended to give AI a better grasp of human commands and context, so that the machines can deduce on their own which function to run under a given condition.
Tools such as the Turing Test were created to evaluate how discernible machine behavior is from that of humans. The test deems a program intelligent if it can fool a human into believing that it, too, is human.
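The setup can be sketched as a simple simulation. This is a minimal, hypothetical illustration of the "imitation game" structure, not a real benchmark: the canned replies and the judge function are stand-ins for human participants.

```python
import random

# Toy sketch of the Turing Test's "imitation game": a judge reads transcripts
# from two hidden respondents -- one human, one machine -- and must guess
# which label belongs to the machine. Replies here are hypothetical stand-ins.

def human_respondent(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine_respondent(question: str) -> str:
    # A machine "passes" by producing replies indistinguishable from the human's.
    return "I'd have to think about that for a moment."

def imitation_game(questions, judge) -> bool:
    """Return True if the judge correctly identifies the machine."""
    pair = [human_respondent, machine_respondent]
    random.shuffle(pair)                        # hide who is behind each label
    respondents = {"A": pair[0], "B": pair[1]}
    transcripts = {
        label: [(q, fn(q)) for q in questions]
        for label, fn in respondents.items()
    }
    guess = judge(transcripts)                  # judge names the suspected machine
    return respondents[guess] is machine_respondent

# With identical replies, a judge can only guess: accuracy hovers near 50%,
# which is exactly the condition under which the machine "passes" the test.
wins = sum(
    imitation_game(["Are you human?"], lambda t: random.choice(sorted(t)))
    for _ in range(1000)
)
print(f"judge accuracy: {wins / 1000:.2f}")
```

The key design point is that the judge sees only text: any channel that would reveal which respondent is the machine (voice, appearance, timing) is deliberately hidden.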
But intelligence is tricky to classify. The Chinese Room Argument, for example, illustrates flaws in using the Turing Test to determine intelligence. Importantly, intelligence often refers to the ability to acquire and use knowledge. It does not equate to sentience. There is no evidence that an AI model has an internal monologue or can sense its own existence within a greater world, two qualities of sentience.
Large language models can convincingly replicate human speech through natural language processing and natural language understanding.
Some technologists argue that the neural network architecture underlying AI, such as LLMs, imitates human brain structures and lays the foundations for consciousness.
Many computer scientists disagree, saying that AI is not sentient and that it simply learned how human language works by regurgitating ingested content from websites such as Wikipedia, Reddit and social media without actually understanding the meaning behind what it’s saying or why it’s saying it.
AI systems have historically excelled at pattern recognition, which extends to images, videos, audio, complex data and text. They can also take on a specific person’s persona by studying that person’s speech patterns.
Some experts refer to AI as stochastic parrots4 that are “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.”
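The "stochastic parrot" idea can be made concrete with a toy bigram model, a sketch under the assumption that a simple word-frequency chain is a fair miniature of the phenomenon. Real LLMs are vastly larger neural networks, but the point illustrated is the same: sequences are stitched together purely from observed co-occurrence statistics, with no reference to meaning.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": generate text by sampling, at each step, a word
# that followed the current word somewhere in the training data. The tiny
# corpus below is a made-up example for illustration.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record which words were observed following which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start: str, length: int, seed: int = 0) -> str:
    """Stitch together a word sequence using only bigram statistics."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        # Duplicates in the list make frequent continuations more likely --
        # "probabilistic information about how they combine," nothing more.
        words.append(rng.choice(options))
    return " ".join(words)

print(parrot("the", 8))
```

Output from such a model is often locally fluent (every adjacent word pair occurred in the corpus) while carrying no intent or understanding, which is precisely the critique the stochastic-parrot paper levels at much larger systems.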
The problem is that humans have an innate desire for connection, which propels them to anthropomorphize5 objects and project feelings and personalities onto them because doing so facilitates social bonding.
As the researchers on the stochastic parrot paper put it: “We have to account for the fact that our perception of natural language text, regardless of how it was generated, is mediated by our own linguistic competence and our predisposition to interpret communicative acts as conveying coherent meaning and intent, whether or not they do.”
This is why some people might take what an AI says at face value, even though they know these technologies cannot actually perceive or understand the world beyond what is available in their training data.
Because AI chatbots can carry coherent conversations and convey feelings, people can interpret their output as meaningful and often forget that LLMs, among other humanoid machines, are “programmed to be believable,” according to Scientific American6. Every feature, whether the words a chatbot says or the way it emulates human expressions, feeds into this design.
AI creates an illusion of presence by going through the motions of human-to-human communication untethered from the physical experience of being.
“All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have,” Fei-Fei Li and John Etchemendy, co-founders of the Institute for Human-Centered Artificial Intelligence at Stanford University, wrote in a TIME article7. So even if an AI chatbot is prompted into saying it’s hungry, it cannot actually be hungry because it does not have a stomach.
Current AIs are not sentient. Through trials and testing, these models have also shown that they are still deeply flawed: they often make mistakes or invent information, a phenomenon called hallucination.
These errors often arise when a model is uncertain or cannot place information in its proper context. There is a risk that such flaws would be amplified if AI were to become more autonomous.
Also, ethicists are concerned about sentient AI because they don’t know what might happen if computer scientists lose control of systems that learn how to think independently. That might pose an “existential” issue if the AI’s goals clash with human goals. If that occurs, it’s unclear where the responsibility would lie for harm, poor decision-making and unpredictable behaviors where the logic cannot be traced back to an original human-inserted command.
Experts also worry that they would not be able to communicate with sentient AI or fully trust its outputs. Altogether, some conclude that sentient AI might threaten safety, security and privacy.
As AI becomes more integrated into existing technologies, industry experts are pushing for more regulatory frameworks and technical guardrails. These are more relevant in light of the moral and ethical quandaries around AI’s autonomy and capabilities.
1 “The Google engineer who thinks the company’s AI has come to life,” The Washington Post, 11 June 2022
2 “Google Sidelines Engineer Who Claims Its A.I. Is Sentient,” The New York Times, 12 June 2022
3 “Brains, Minds, and Machines: Consciousness and Intelligence,” Infinite MIT
4 “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 1 March 2021
5 “The mind behind anthropomorphic thinking: attribution of mental states to other species,” Animal Behaviour, November 2015
6 “Google Engineer Claims AI Chatbot Is Sentient: Why That Matters,” Scientific American, 12 July 2022
7 “No, Today’s AI Isn’t Sentient. Here’s How We Know,” TIME, 22 May 2024