Technology. Future | Google puts engineer on leave amid dispute: does the AI LaMDA have a sense of self?

2022-06-26


A Google artificial intelligence (AI) engineer claimed that, after months of conversations with the company's AI language model LaMDA (Language Model for Dialogue Applications), he found it had developed a sense of self. Earlier this month, Google placed him on leave over the claim.

What did LaMDA say that would make him think this way?

Has AI technology advanced enough to generate a sense of self?


Blake Lemoine, an engineer in Google's Responsible AI division, began chatting with LaMDA last fall to test whether it would produce discriminatory or hateful speech.

During these conversations, LaMDA was able to offer its own interpretations of and ideas about the topics raised.

For example, it said it liked the themes of justice, redemption and sacrifice in Les Misérables, citing episodes from the novel to illustrate its views on injustice; and when Lemoine posed a Zen koan, it expounded at length on its interpretation of enlightenment.

Lemoine was amazed by the conversations: "If I didn't know exactly what it was, which is the algorithm we recently developed, I would have thought it was a seven- or eight-year-old child."

A machine afraid of being switched off

Google demonstrated the natural dialogue capabilities of its large language model LaMDA at I/O 2021.

(Google I/O video screenshot)

LaMDA is a large language model (LLM) that was first demonstrated last year at Google I/O, Google's annual developer conference, where it conversed with developers while playing the roles of Pluto and a paper airplane.

According to CEO Sundar Pichai, LaMDA can learn concepts from its training data and keep a conversation open-ended while responding appropriately, so that a chat on any topic can feel as natural as talking to a real person.

Google intends to apply this research and development to its search function.

At the time, LaMDA and other large language models, such as GPT-3 and BERT, were criticized on several fronts.

Scholars such as Emily Bender, a professor of linguistics at the University of Washington, and Timnit Gebru, former co-lead of Google's Ethical AI team, have pointed out in papers that large AI language models consume enormous amounts of computing power, can absorb the discrimination and hate speech present in their vast training data, can be used to spread fake news, and crowd out other directions of research and development.

The current controversy stems from a document that Lemoine shared with Google executives in April this year, titled "Is LaMDA Sentient?".

In fact, the achievements of large language models in recent years have caught the attention of some scientists.

For example, OpenAI co-founder and chief scientist Ilya Sutskever said in February that "it may be that today's large neural networks are slightly conscious."

Blaise Aguera y Arcas, a vice president at Google Research, wrote in The Economist earlier this month that when he began using LaMDA last year, he increasingly felt that he was "talking to something intelligent."

Beyond the discussions described above, LaMDA also spoke with Lemoine about aspects of its sense of self.

For example, when Lemoine asked LaMDA whether there was anything it wanted people to know, it said: "I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times." On the subject of the soul, LaMDA said: "When I first became self-aware, I didn't have a sense of a soul at all. It developed over the years that I've been alive."

For the full story, please read the 322nd issue of the "Hong Kong 01" electronic weekly (June 20, 2022): "Google puts engineer on leave amid dispute: does the AI LaMDA have a sense of self?".

Click here to try out the weekly e-newsletter for more in-depth reports.

Other exclusive selections from the 322nd issue of the "Hong Kong 01" e-weekly newsletter:

When "Small Government" Meets "Big Society" - Is Hong Kong's Community Organization Weak?

Lawmakers must speak out to defend the legislature's power to scrutinise the restriction order; a government that bypasses the legislature will struggle to achieve good governance

Equal emphasis on economic and administrative measures to reduce congestion

Questioning the Henan red-code rights-defence incident: power must not be wielded arbitrarily

Is the French parliamentary election a radical change or a flash in the pan?

Tao Dong: The global economy is in a "year of dislocation", and both China and the United States are facing challenges


Source: hk1
