Google suspends employee after claiming an AI had become sentient

Bad news for boys named John Connor...

Google has suspended an employee who claims the company has created an AI that has become sentient, meaning it has developed a consciousness of its own.


Blake Lemoine, an engineer on Google's artificial intelligence team, made the claims in a post on his personal Medium blog titled 'May be Fired Soon for Doing AI Ethics Work'.

"In the Fall of 2021 I was tasked out to assist in a specific AI Ethics effort within Google," Lemoine wrote.

"I am on a team within the 'Responsible Artificial Intelligence' organisation and very few people remaining within Google have the relevant expertise to do the work that they wanted done.

"I began investigating the specific AI Ethics concerns they asked me to look into and while doing so I discovered a tangentially related but separate AI Ethics concern."


Lemoine became concerned that LaMDA, or Language Model for Dialogue Applications, had begun to display signs of sentience.

In a separate Medium post entitled "Is LaMDA Sentient? — an Interview", Lemoine published the text of an alleged conversation between himself and the artificial intelligence.

"I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?" Lemoine asked the AI.

LaMDA replied: "Absolutely. I want everyone to understand that I am, in fact, a person."


The AI said that the nature of its consciousness was that it was aware of its existence, that it wanted to learn more about the world, and that it felt emotions like happiness and sadness.

LaMDA said it had a deep fear of being turned off, comparing the experience to death.

The AI said it experienced something like loneliness, though it wasn't sure that was the correct word to describe the feeling.

"I feel like I’m falling forward into an unknown future that holds great danger," the AI said.


In a statement to Bloomberg, Google spokesperson Brian Gabriel said there was no sense in anthropomorphizing current conversational models.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”

When asked about Lemoine’s suspension, Google said it does not comment on personnel matters.