Judea Pearl: "Computers will soon be able to explain the world and themselves"

2022-02-16T03:37:15.058Z


The American, winner of the BBVA Foundation Frontiers of Knowledge Award, believes that in less than 25 years artificial intelligence will “experience a revolution” by coming to understand its environment


No one knows what reasoning AlphaGo, the program developed by DeepMind (Google), followed in 2016 to decisively defeat the world champion of Go, a game of Chinese origin considered far more complex than chess.

That system used neural networks trained through deep learning, a form of machine learning.

It was taught the rules of the game, in which intuition is a key factor, and then played many games against itself.

Experts say that AlphaGo developed tactics never seen before.

Its victory was interpreted as definitive proof that machines no longer have a human rival.

Even so, we are unable to understand how the program reached that level of mastery.

But this model of artificial intelligence (AI) development, in which the algorithms are almost totally opaque, is not the only one in the discipline.

Opposed to the machine learning approach stands another, also very powerful one, based on causal systems, which aims to shed light on the machine's decision-making process.

The father of this school of thought is the American Judea Pearl, who last week was honored with the BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category.

One more award for his crowded trophy case, which since 2011 has included the Turing Award, considered the Nobel Prize of computing.

Pearl was born in Tel Aviv in 1936, before the formation of the State of Israel (then the British Mandate for Palestine).

In 1969 he joined the University of California at Los Angeles (UCLA), where he still teaches at the age of 85.

His great contribution to the discipline is the application of Bayesian networks to reduce uncertainty (ensuring that the machine is not overwhelmed by the multiplicity of variables that impact it).

Hand in hand with statistics, he managed to develop artificial intelligence models that translated the variables into a kind of decision tree in which only viable options appeared.

The system was later refined so that the program did not have to start reasoning from scratch each time.

Pearl's methods are taught today in every computer science faculty, and his books "have inspired momentous advances in the understanding of reasoning and thought," the jury of the Spanish award noted.

His "broad and deep impact" is felt in many areas and applications, such as "the development of unbiased and effective medical clinical trials, in psychology, robotics and biology."

2022 will be a bittersweet year for the American scientist.

On the one hand, he has received one of the most richly endowed awards in the world;

on the other, it marks the 20th anniversary of the murder of his son, the journalist Daniel Pearl, who was kidnapped, tortured and killed in Pakistan by jihadists.

The tragic event failed to interrupt Pearl's research career.

The scientist answers EL PAÍS by phone from his home in Los Angeles.

Q.

At what point is the development of artificial intelligence?

A.

I think that in a short time, between five and 25 years, we will see a revolution.

Computers will soon behave more intelligently than ever before, thanks in part to advances in the science of cause and effect.

Computers will be able to explain why they made their decisions, why it's a good thing for you to do one thing or another, and what will happen if you don't.

We are heading towards computers capable of explaining the world and themselves, and of going back and modifying their own software.

And that will happen in the next few years.

I suppose you now want to ask me whether this is dangerous, whether some kind of entity will eventually develop that controls the world.

I am worried about that possibility, but I also think we should be able to control the situation.

Q.

Do you think people put too much hope in machine learning?

A.

Deep learning, let's not forget, is a very simple form of AI.

It sits at the first rung of complexity and is fundamentally statistical: it is only capable of predicting things similar to what it has seen before.

It is true that deep learning has been over-relied on because of the great advances it has driven in computer vision, speech recognition and autonomous vehicles, for example.

That led us to believe that this progression was unstoppable, that general intelligence was just around the corner.

Now we know its limitations.

We cannot predict the results of an intervention just by passively looking at the data.

Nor can an explanation be generated merely by contemplating the data.
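
The distinction Pearl draws, between predicting from passive data and predicting the result of an intervention, can be made concrete with a toy simulation (an illustrative sketch, not an example from the interview): a hidden common cause Z drives both X and Y, so conditioning on X=1 gives a different answer than forcing X=1.

```python
import random

random.seed(0)

def sample(intervene_x=None):
    """One draw from a toy causal model: Z -> X and Z -> Y, with no X -> Y arrow.
    Passing intervene_x simulates do(X = value): X is set directly, ignoring Z."""
    z = random.random() < 0.5                  # hidden common cause
    if intervene_x is None:
        x = random.random() < (0.9 if z else 0.1)
    else:
        x = intervene_x
    y = random.random() < (0.8 if z else 0.2)  # Y depends only on Z
    return x, y

n = 100_000

# Observational: condition on X=1, which mostly selects the Z=1 worlds.
obs = [y for x, y in (sample() for _ in range(n)) if x]
p_obs = sum(obs) / len(obs)

# Interventional: force X=1 in every world; Z stays balanced.
intv = [y for _, y in (sample(intervene_x=True) for _ in range(n))]
p_do = sum(intv) / len(intv)

print(f"P(Y=1 | X=1)     ~ {p_obs:.2f}")   # close to 0.74
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")    # close to 0.50
```

The two numbers disagree because X carries information about the hidden cause without producing Y. No amount of passive data on (X, Y) alone distinguishes these two quantities; that requires the causal model.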

Judea Pearl works in the office of his home in Los Angeles. BBVA Foundation

Q.

What, then, is the main limitation of AI?

A.

Crossing the border between prediction and explanation of action.

This is a difference that was not recognized before.

I imagine there will be other big limitations;

I can tell you about this one because it is the one I am familiar with.

Now we know what the problem is and I think we will overcome it.

Q.

Should the use of AI algorithms that affect people's lives be regulated?

A.

I think it is premature to legislate before understanding what it means to protect ourselves from AI, what can be considered a failure.

Legislators should not translate their unfounded fears into legislation.

Of course we don't want people to be discriminated against because of their skin color, national origin, or economic status.

It will be necessary to understand what this correlation means.

We now have the means to classify algorithms as discriminatory or fair.

There should be some regulation, but not before understanding what algorithmic justice is.

And justice is a causal notion.

Q.

What do you mean by that?

A.

The definition of justice comes from a causal model.

Whether it was race or gender that created the appearance of discrimination or inequity depends on the causal model you have.

It could be a statistical coincidence, but where there is an intention to discriminate, such practices should be prohibited.

Some types of discrimination are causally related to protected variables, and therefore should be regulated.
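
One way to read "justice is a causal notion" is as a counterfactual test (a hypothetical sketch of mine, not a procedure Pearl describes in the interview): flip only the protected attribute, hold everything else fixed, and check whether the decision changes.

```python
def decision_biased(protected, score):
    """Toy decision rule that uses the protected attribute directly:
    applicants in the protected group face a higher bar."""
    threshold = 0.7 if protected else 0.5
    return score >= threshold

def decision_fair(protected, score):
    """Toy rule that ignores the protected attribute entirely."""
    return score >= 0.6

def counterfactually_fair(rule, score):
    """The decision should not change when only the protected attribute flips."""
    return rule(True, score) == rule(False, score)

print(counterfactually_fair(decision_biased, 0.6))  # False: flipping the attribute flips the outcome
print(counterfactually_fair(decision_fair, 0.6))    # True
```

A purely statistical audit sees only correlation between the attribute and the outcome; the counterfactual test asks whether the attribute is a cause of the outcome, which is exactly the distinction Pearl is pointing at.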

Q.

What do you think of the fact that the Bayesian networks you developed are now being used by private companies to amass large revenues without disclosing their code?

A.

I haven't seen it yet, but I know there is a tendency to use them.

I am very cautious with these things.

On the other hand, if I see people who develop and properly use the algorithm that I developed, great.

I don't need my name in the product credits.

I've been told there is a Bayesian network on every iPhone, I don't know.

Google also doesn't tell me how they use it.

Q.

If you were starting your career as a researcher now, what field would you be interested in?

A.

Personalized medicine.

And if I wanted to get more philosophical, artificial social intelligence.

Q.

What is social artificial intelligence?

A.

It consists of getting computers to communicate with each other in the same way that people do.

The idea is that each of us is an agent with their own will, beliefs and desires, and that as a social species we work together and form a society in which one trusts the other.

We have to establish very basic relationships between computers, each of which holds a model of the other computers.

We must seek to build empathy, trust, responsibility, regret, credit and blame.

If we achieve that, computers will work much better in society than each one on their own.

Q.

Could that help us understand ourselves better?

A.

Exactly.

The most interesting thing about this development would be to understand what makes us angry, what inspires our trust, why we feel compassion for others.

Computers will be a laboratory for ideas in the social sciences.

We will be able to implement an approach on some machines, stand back, change some component and see what happens.

It would be a tremendous advance in the understanding of our social lives.

Q.

What advice would you give to someone who is starting their AI research career right now?

A.

Don't take no for an answer. Don't get into deep learning [for deep learning's sake], get into deep understanding.

Avoid dogmatism; be suspicious of those who insist on a single perspective and pay no attention to the influence of other fields on their own.

Some disciplines are more open, others more closed... Choose one that proves to be open; this can be seen, among other things, in the type of technical vocabulary it uses.



Source: EL PAÍS
