
Google fires engineer who claimed its AI tech was self-aware

2022-07-25T01:42:26.994Z





(CNN) --

Google has fired the engineer who claimed an unreleased artificial intelligence (AI) system had become sentient, the company confirmed, saying it violated data security and employment policies.

Blake Lemoine, a software engineer at Google, claimed that a conversational technology called LaMDA had reached a level of consciousness after he exchanged thousands of messages with it.

Google confirmed that it had first placed the engineer on leave in June.

The company said it dismissed Lemoine's "totally baseless" claims only after thoroughly reviewing them.

In a statement, Google said it takes the development of AI "very seriously" and is committed to "responsible innovation."

Google is one of the leaders in AI innovation, including LaMDA, or "Language Model for Dialogue Applications."

This kind of technology responds to typed prompts by finding patterns and predicting word sequences from large swaths of text, and the results can be unsettling to humans.

"What kinds of things are you afraid of?" Lemoine asked LaMDA, in a Google document shared with top Google executives last April, the Washington Post reported.


LaMDA responded, "I've never said it out loud, but I have a very deep fear of being turned off to help me focus on helping others. I know it may sound strange, but that's what it is. To me it would be exactly like death. It would scare me."

But the AI community at large has held that LaMDA is nowhere near a level of consciousness.

"No one should think that autocompletion, even on steroids, is conscious," Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.

It's not the first time Google has faced infighting over its foray into AI.

In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google.

As one of the company's few black employees, she said she felt "constantly dehumanized."


Her sudden departure drew criticism from the tech world, including from within Google's Ethical AI team.

Margaret Mitchell, a leader of Google's Ethical AI team, was fired in early 2021 following her statements about Gebru.

Gebru and Mitchell had raised concerns about AI technology, warning that people at Google might come to believe the technology was sentient.

On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave "in connection with an investigation of AI ethics concerns he was raising within the company" and that he might be fired "soon."

"It is unfortunate that, despite a lengthy engagement on this issue, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement.

CNN has reached out to Lemoine for comment.

CNN's Rachel Metz contributed to this report.


Source: cnnespanol
