
LaMDA, the machine that "looked like a seven-year-old": can a computer have consciousness?

2022-06-19T10:49:21.477Z


A Google engineer believes he has talked to a robot with a will of its own. The scientific community does not believe it, but advances in artificial intelligence will increasingly raise questions of this kind


If we handed Isaac Newton a smartphone, he would be completely spellbound.

He wouldn't have the faintest idea how it works, and one of the greatest minds in history would probably start talking about witchcraft.

Perhaps he would even believe he was in front of a conscious being if he tried a voice assistant.

The same parallel can be drawn today with some of the achievements of artificial intelligence (AI).

This technology has reached such a level of sophistication that, at times, its results can completely upend our assumptions.

Blake Lemoine, a Google engineer embedded in the company's AI team, seems to have fallen into that trap.

"If I didn't know that this is a computer program that we recently developed, I would have thought I was talking to a seven- or eight-year-old with a physics background," he said in a report published last weekend by

The Washington Post

.

Lemoine refers in these terms to LaMDA, Google's conversational bot generator (a computer program that performs automated tasks over the internet as if it were a human being).

He began a dialogue with the machine in the fall to see whether this artificial intelligence used discriminatory or hateful language.

And his conclusion was stark: he believes they have managed to develop a conscious program with a will of its own.

Does that make sense?

"Whoever launches a statement of this type shows that he has not written a single line of code in his life," says Ramón López de Mántaras, director of the CSIC's Artificial Intelligence Research Institute (IIIA).

"With the current state of technology, it's totally impossible to ever develop self-aware artificial intelligence," he says.

An interview LaMDA.

Google might call this sharing proprietary property.

I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB

— Blake Lemoine (@cajundiscordian) June 11, 2022

None of this means that the LaMDA chatbot generator is not very sophisticated.

This tool uses neural networks, an artificial intelligence technique that tries to replicate the functioning of the human brain, to autocomplete written conversations.

LaMDA has been trained on billions of texts.

As Blaise Agüera y Arcas, the head of Google Research (and Lemoine's direct boss), recently explained in The Economist, the chatbot generator weighs 137 billion parameters to decide which answer is most likely to fit the question posed.

That allows it to formulate sentences that could pass for those written by a person.

However, even if it manages to write like a human, it doesn't know what it is saying.

“None of these systems have semantic understanding.

They don't understand the conversation.

They are like digital parrots.

We are the ones who give meaning to the text”, says López de Mántaras.
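To make the “digital parrot” point concrete, the sketch below shows in a few lines of Python what “deciding which answer is most likely” amounts to for a text generator: score the candidate next words, turn the scores into probabilities, and sample a likely continuation, with no meaning attached to any of it. This is only an illustration of the general principle, not Google's LaMDA code, and it understates the scale enormously: the hand-written table of scores stands in for a neural network with 137 billion learned parameters. All the words and scores below are invented for the example.

```python
import math
import random

# Toy "language model": a hand-written table of follow-word scores.
# A system like LaMDA does the same job -- score candidates, pick a likely
# continuation -- with a neural network and billions of learned parameters.
# All words and scores here are invented for illustration.
FOLLOW_SCORES = {
    "i": {"am": 2.0, "think": 1.5, "feel": 1.0},
    "am": {"a": 1.8, "happy": 1.2, "conscious": 0.3},
    "a": {"person": 1.5, "program": 1.4, "parrot": 0.6},
}

def next_word_distribution(prev_word):
    """Turn the raw scores for the previous word into probabilities (softmax)."""
    scores = FOLLOW_SCORES.get(prev_word, {"...": 0.0})
    total = sum(math.exp(s) for s in scores.values())
    return {word: math.exp(s) / total for word, s in scores.items()}

def autocomplete(prompt, length=4):
    """Extend a prompt one word at a time by sampling likely continuations."""
    words = prompt.lower().split()
    for _ in range(length):
        dist = next_word_distribution(words[-1])
        candidates, probs = zip(*dist.items())
        words.append(random.choices(candidates, weights=probs)[0])
    return " ".join(words)

# Fluent-looking output, but the program attaches no meaning to any of it.
print(autocomplete("I"))
```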

Agüera's article, which was published a few days before the Post report, also highlights the striking precision with which LaMDA responds, although it offers a different explanation from Lemoine's.

“AI is entering a new era.

When I started talking to LaMDA I felt like I was talking to someone smart.

But these models are far from being the hyper-rational robots of science fiction," writes the Google executive.

The system represents an impressive advance, says the expert, but there is a world of difference between that and talking about consciousness.

"Real brains are much more complex than these simplified models of artificial neurons, but perhaps in the same way that a bird's wing is vastly more complex than the wing of the Wright brothers' first airplane," Agüera argues in the article.

Researchers Timnit Gebru and Margaret Mitchell, then co-heads of Google's AI Ethics team, warned as early as 2020 that something similar to Lemoine's case would happen.

Both signed an internal report that cost them their jobs, as they recalled on Friday in an op-ed in The Washington Post, in which they pointed to the risk that "people attribute communicative intent" to machines that generate apparently coherent text, or “that they can perceive a mind where there are only combinations of patterns and series of predictions”.

For Gebru and Mitchell, the underlying problem is that, because these tools are fed millions of texts scraped from the internet without any filter, they reproduce sexist, racist, or otherwise discriminatory expressions in their output.

So can a general AI emerge?

What led Lemoine to be seduced by LaMDA?

How was he able to conclude that the chatbot he conversed with was a sentient entity?

"Three layers converge in Blake's story: one of them is his observations, another his religious beliefs and the third his mental state," a Google engineer who has worked closely with Lemoine, but who prefers to keep, explains to EL PAÍS. the anonymity.

“I consider Blake a smart guy, but it is true that he has no training in machine learning [the artificial intelligence approach that dominates the discipline today].

He doesn't understand how LaMDA works.

I think he has been carried away by his ideas”, says this source.

Lemoine, who has been temporarily suspended for violating the company's confidentiality policy, has described himself as an "agnostic Christian" and as a member of the Church of the SubGenius, a postmodern parody religion.

“You could say that Blake is quite a character.

It is not the first time he has drawn attention within the company.

In fact, I would say that at another company they might have fired him long ago”, adds his colleague, who regrets the way the media are tearing Lemoine apart.

“Beyond the grotesque side of all this, I am glad that this debate is emerging.

Of course LaMDA has no consciousness, but it is also clear that AI will be able to go further and further, and our relationship with it will have to be rethought”, says this prominent Google engineer.

Part of the controversy surrounding this debate has to do with the ambiguity of the terms used.

“We are talking about something that we have not yet been able to agree on.

We do not know exactly what intelligence, consciousness and feelings are, nor if we need all three elements to be present for an entity to be self-aware.

We know how to differentiate them, but not define them precisely”, reflects Lorena Jaume-Palasí, an expert in ethics and legal philosophy applied to technology and advisor to the Government of Spain and the European Parliament on issues related to artificial intelligence.

“Whoever launches such an assertion shows that they have not written a single line of code in their life”

Ramón López de Mántaras, director of the Artificial Intelligence Research Institute (IIIA) of the CSIC

Trying to anthropomorphize computers is a very human behavior.

“We do it constantly with everything.

We even see faces in the clouds or mountains”, illustrates Jaume-Palasí.

In the case of machines, we also draw on the European rationalist heritage.

“In accordance with the Cartesian tradition, we tend to think that we can delegate thought and rationality to machines.

We believe that the rational individual is above nature, that he can dominate it”, indicates the philosopher.

"It seems to me that the discussion of whether or not an artificial intelligence system has consciousness is part of a tradition of thought in which they try to extrapolate to characteristic technologies that they do not have and will not have."

The Turing Test has long been passed.

Formulated in 1950 by the famous mathematician and computer scientist Alan Turing, this test consists of asking a series of questions to a machine and a person.

The test is passed if the interlocutor is unable to discern whether the answerer is the human being or the computer.

More recently, other tests have been proposed, such as the Winograd schema challenge, which requires common sense and knowledge of the world to answer its questions satisfactorily.

For the moment, no machine has managed to pass it.
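To see why this kind of test is hard for a pattern-matching system, here is a small, self-contained Python sketch of a Winograd-style schema: a pair of sentences in which swapping a single word flips which noun the pronoun “it” refers to, so the answer cannot be read off the surface of the text and requires knowledge of how the world works. The sentences are the classic trophy-and-suitcase example; the “resolver” shown is deliberately naive, to make the point that a purely syntactic rule cannot settle both variants.

```python
# A Winograd-style schema: one word changes, and with it the referent of "it".
# Answering correctly requires knowing that large objects do not fit inside
# small containers -- world knowledge, not something visible in the syntax.
SCHEMA = [
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too big.",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the trophy",
    },
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too small.",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the suitcase",
    },
]

def naive_resolver(item):
    """A purely syntactic guess: always pick the candidate nearest the pronoun.
    It ignores meaning, so it gives the same answer for both variants."""
    return item["candidates"][-1]  # "the suitcase" in both cases

for item in SCHEMA:
    guess = naive_resolver(item)
    print(f'{item["sentence"]} -> guess: {guess}, correct: {item["answer"]}')
# The naive rule gets one variant right and the other wrong; resolving the
# pronoun reliably requires common-sense knowledge about sizes and containers.
```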

“It may be that there are AI systems that manage to trick the judges who ask them questions.

But that does not show that a machine is intelligent, only that it has been programmed well enough to deceive”, emphasizes López de Mántaras.

Will we ever see an artificial general intelligence?

That is, an AI that equals or surpasses the human mind, that understands contexts, that is capable of relating elements and anticipating situations as people do.

That question has long been a matter of speculation in the field.

The consensus of the scientific community is that, if it happens at all, it is very unlikely to do so in what remains of this century.

However, it is possible that the constant advances in AI will lead to more reactions like Blake Lemoine's (although not necessarily as histrionic).

"We must be prepared to have discussions that will often be uncomfortable," concludes Lemoine's former co-worker.


Source: EL PAÍS
