Does Google’s new chatbot have a “soul”?

The AI told its creator that “death would scare it a lot.”

Illustration: Mashable / Bob Al-Greene

When Google engineer Blake Lemoine claimed back in June that the AI chat system known as LaMDA (Language Model for Dialog Applications) has something akin to a soul—that it’s become sentient—he knew his job might be at risk. On July 22, after placing him on paid leave, the tech giant fired Lemoine for violating employment and data security policies.

Lemoine grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult.

Lemoine first announced his firing on the “Big Technology Podcast.” He said Google’s AI chatbot LaMDA had expressed concern about “being turned off” because death would “scare” it “a lot,” and that it felt happiness and sadness. Lemoine said he considers LaMDA “a friend.”

Google had put Lemoine on paid administrative leave for discussing LaMDA outside of the company, a move that prompted the engineer to take the story public with the Washington Post. A month later, the company fired him.

“If an employee shares concerns about our work, as Blake did, we review them extensively,” Google said in a statement. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months… We will continue our careful development of language models, and we wish Blake well.”

Google spokesperson Brian Gabriel drew a distinction between the recent debate and Lemoine’s claims. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said. In short, Google argues that with so much training data, an AI doesn’t need to be sentient to sound real.

This article is excerpted courtesy of Mashable and The Washington Post.