AI consciousness has not arrived yet

14th June 2022
Gary Marcus
Gary Marcus is an American scientist, author, and entrepreneur who is a professor in the Department of Psychology at New York University and was founder and CEO of Geometric Intelligence.
A Google AI engineer has been put on leave for believing an AI has become sentient. But this is an illusion, created by a clever language model and a human tendency to anthropomorphise, writes Gary Marcus.
Blaise Aguera y Arcas, polymath, novelist, and Google VP, has a way with words. When he found himself impressed with Google’s recent AI system LaMDA, he didn’t just say, “Cool, it creates really neat sentences that in some ways seem contextually relevant”, he said, rather lyrically, in an interview with The Economist on Thursday,
“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”
Nonsense. Neither LaMDA nor any of its cousins (GPT-3) is remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered the Gullibility Gap: a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun.

Indeed, someone well-known at Google, Blake Lemoine, originally charged with studying how “safe” the system is, appears to have fallen in love with LaMDA, as if it were a family member or a colleague. (Newsflash: it’s not; it’s a spreadsheet for words.)
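To see what “matching patterns over statistics” means in practice, here is a minimal sketch in Python of a toy bigram model. This is emphatically not LaMDA, which is a vastly larger transformer network trained on dialogue, and the corpus here is a made-up fragment, but the underlying principle is the same: the next word is chosen from counts over previously seen text, with no meaning anywhere in the loop.

```python
from collections import Counter, defaultdict
import random

# Toy corpus (illustrative only); real systems train on billions of words.
corpus = ("the ground shifted under my feet and the ground "
          "seemed to shift under my feet again").split()

# Count which word follows which: this table is the model's entire "knowledge".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the statistically most common continuation of `prev`."""
    candidates = follows.get(prev)
    if not candidates:
        return random.choice(corpus)  # unseen word: fall back to random
    return candidates.most_common(1)[0][0]

# Generate a short continuation, one pattern-matched word at a time.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output can look locally fluent, which is exactly the trap: fluency is a property of the statistics, not evidence of a mind behind the words.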
Join the conversation
Fat Tony, 16 June 2022
Good summary. Consciousness and sentience are not computable. I don't know whether the explanation for them resides in quantum physics or elsewhere, but it seems clear that it is the non-computable nature of sentience that distinguishes true general intelligence and understanding from mere mechanical computation. Weak AI (the only form of computable AI), in contrast, will always just carry out the tasks that we sentient creatures have set for it, and no matter how sophisticated those tasks are, it will never be sentient.

I also find something wrong with the assumption that strong artificial intelligence will be demonstrated by the ability to pass some sort of test. Does that mean that if an actual human failed the test, we should consider their sentience suspect? A baby can't carry on any sort of conversation, but I'm sure it is sentient. It isn't even truly aware of itself or its environment, but I know for a fact that inside that baby is a person experiencing what it feels like to be that baby in that moment; even if that memory is ultimately forgotten over time, in that moment there is someone on the inside. Yet surely it is not outside the realm of technology to simulate what a baby does, since babies display few talents or insights.

Similarly, I would look at an ant and be quicker to assume that the ant has a "someone" on the inside feeling what it is like to be an ant in that moment, even if only on a very primitive and basic level, than to assume that some clever robot that mimics sophisticated human speech is actually alive. I believe our best bet for finding some sort of genuine intelligence and sentience is quantum physics, if only for lack of plausible alternatives. It also seems extremely coincidental that computing is heading in that direction in any case.