Blake Lemoine, a software engineer at Google, claimed that a conversational AI technology called LaMDA had reached a level of sentience after he exchanged thousands of messages with it.
Google confirmed that it first placed the engineer on leave in June. The company said it dismissed Lemoine’s “wholly unfounded” claims only after reviewing them extensively. He had reportedly worked at Alphabet for seven years. In a statement, Google said it takes AI development “seriously” and is committed to “responsible innovation.”
Google is one of the leaders in AI innovation; its work includes LaMDA, or “Language Model for Dialogue Applications.” Technology like this responds to written prompts by finding patterns and predicting sequences of words from large swaths of text — and the results can be unsettling to humans.
LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.”
But the broader AI community maintains that LaMDA is nowhere near that level of consciousness.
This isn’t the first time Google has faced an internal struggle over its foray into artificial intelligence.
“It is regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN’s Rachel Metz contributed to this report.