
José Hernández-Orallo, AI expert: "The scales of human beings cannot be used to evaluate artificial intelligence"

2023-05-03T05:22:16.630Z


Together with 15 other scientists, the researcher calls for a rethink of how AI tools are evaluated, in order to move towards more transparent models.


José Hernández-Orallo (Kennington, London, 51 years old) got his first computer, at the age of 10, in a raffle.

"It was a Spectrum, my brother made a collection of a computer encyclopedia by installments and, if you finished it, you entered the raffle," he recalls.

They won it.

“We played, like any kid nowadays, but we also programmed; we had complete control of the computer. Computers aren't like that now.”

Today he is a doctor and professor at the Polytechnic University of Valencia, a world expert in the evaluation of artificial intelligence, and he has led the letter published in the journal Science, together with 15 other researchers, in which they argue for the need to “rethink” the evaluation of AI tools in order to move towards more transparent models and find out how effective they really are, what they can do and what they cannot.

More information: Geoffrey Hinton, the 'godfather' of AI, leaves Google and warns of the dangers of this technology

Question. What do you think of Geoffrey Hinton's decision to leave his job at Google in order to warn more freely of the dangers posed by artificial intelligence?

Answer.

What Hinton says is quite reasonable, but I am a little surprised that he is saying it now, when we have been saying the same thing for a long time at centers like the Center for the Study of Existential Risk or the Leverhulme Center for the Future of Intelligence [both at the University of Cambridge, and with which he is affiliated].

And I think he has said similar things before, perhaps not as clearly or as loudly.

I am surprised that Hinton is only now realizing that artificial and natural systems are very different, and that what works for one (capability, evaluation, control, ethics, etc.) does not have to work for the other, quite apart from the obvious fact of their scale and multiplicity (they can replicate, communicate and update much faster than humans).

But it is welcome that such an important scientist is saying this, in this way and at this moment.

We agree to a very large extent on the risks, although we may differ on the priorities.

For example, I do not believe that the generation of false material (text, images or video) is so problematic, since raising our skepticism and forcing us to cross-check sources is healthy.

I am more concerned about some solutions to the “alignment problem” that allow certain countries or political or religious groups to align AI with their interests and ideology, or to censor AI systems in a particular direction.

The word “alignment”, understood as a single, uniform alignment, reminds me of very dark periods in humanity's history.


Q.

How did you get into artificial intelligence?

A.

There was another encyclopedia at home, on human evolution.

I was fascinated by intelligence, how it had evolved, and I wanted to understand it.

I also read philosophy books.

And, putting all the pieces together, I studied Computer Science because it was what my brother had studied, even though, back then, artificial intelligence was barely half a course.

Later I did my thesis in the Department of Logic and Philosophy of Science at the University of Valencia, which had a program more oriented to the philosophy of artificial intelligence.

I was captivated and I also had no other options because we had no resources.

It was a year in which I was also able to work on what I liked, write a book and complete my alternative civilian service.

Sometimes you don't choose; one thing follows another, but in the end I devote myself to what I have always liked, which is understanding intelligence.

Q.

What is the evaluation of artificial intelligence systems?

A.

We know what bicycles or kitchen robots are for, and the tasks they can do, and they are evaluated from the point of view of quality.

Until recently, artificial intelligence systems were going down that path.

If they had to classify cats and dogs, what mattered was that they classified cats and dogs as well as possible.

They were task-oriented systems.

If you know how to evaluate it, you know if it serves the task you want and how many mistakes it makes.

But that is very different from systems like GPT4, which have cognitive capacity.
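
A minimal sketch of the task-oriented evaluation he describes, with invented labels and predictions purely for illustration: you fix one task (classifying cats and dogs), run the system on labelled examples and count the mistakes.

```python
# Task-oriented evaluation: fix one task and count the mistakes.
# The labels and predictions below are invented for illustration only.

true_labels = ["cat", "dog", "cat", "cat", "dog", "dog", "cat", "dog"]
predictions = ["cat", "dog", "dog", "cat", "dog", "cat", "cat", "dog"]

mistakes = sum(t != p for t, p in zip(true_labels, predictions))
accuracy = 1 - mistakes / len(true_labels)

print(f"mistakes: {mistakes}, accuracy: {accuracy:.0%}")
# This says how well the system serves *this* task; it says nothing about
# what else a general-purpose system like GPT4 can or cannot do.
```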

Q.

What are those systems like now?

A.

A system is good if it works for you, if it meets your expectations, if it doesn't surprise you negatively.

These AIs are general-purpose systems.

You have to determine what they are capable of doing based on the way you give them instructions.

They are quite good, but they are not human beings; people assume they will react the same way a person would, and that is where the problems begin.

They answer with a certain confidence and you assume the answer is correct.

This does not mean that humans always answer correctly, but we are used to gauging people, knowing if they are reliable or not, and these systems do not work with the intuitions that we use with human beings.

Q.

And how can evaluations be improved in these general-purpose tools, capable of doing so many things?

A.

Well, it is something that has been tried.

It's called assessment by capabilities rather than by tasks.

There is a huge tradition and a whole science behind this type of evaluation, but many people have rushed to take the same tests that are used for humans and apply them to AI, and those tests are not designed for machines.

It's like using a wall thermometer to take someone's body temperature: it's not going to work.

Q.

But is there a way to evaluate artificial intelligence by capabilities?

A.

It is what we are trying to develop.

For example, GPT4 comes with an evaluation based on tests, mostly from education: university entrance exams, chemistry, physics, language, a bit of everything.

Trying to compare the result it gets with those of humans and saying it is at the 70th percentile doesn't make any sense.

It may be an indicator, but that does not mean it is above 70% of people.

When you apply these instruments to humans you take a lot of things for granted, for example that the person could bring you a coffee... now try asking the system to bring you a coffee.

Q.

So there is no way to evaluate them?

A.

We cannot measure how they perform task by task, because we would never finish.

To evaluate a system like these, you need to extract indicators, in this case capabilities, that allow us to extrapolate how the system will behave in the future.

It is not about giving a single number.

We should be able to compare humans and artificial intelligence systems but it is being done wrong.

It is all very complex, but I do not lose hope.

We are where physics was in the fifteenth or sixteenth century.

Now it's all very confusing.

We need to break the old moulds, and the final objective is, over decades or centuries, to arrive at a series of universal indicators that can be applied not only to humans and artificial intelligence, but also to other animals.
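
As a purely illustrative sketch of what "extracting capability indicators that extrapolate" can mean in practice (this draws on the general evaluation literature, not on the specific proposal in the Science letter, and the data are invented): instead of averaging scores over tasks, you relate success and failure to item difficulty and estimate the difficulty level the system can handle about half the time.

```python
# Capability-style evaluation sketch (invented data, illustration only):
# relate success/failure to item difficulty and estimate the difficulty
# level at which the system succeeds roughly half the time.
import math

# (difficulty, succeeded) pairs for one hypothetical system.
results = [(1, True), (2, True), (3, True), (4, False), (5, True),
           (6, False), (7, False), (8, False), (9, False), (10, False)]

def log_likelihood(capability: float, slope: float = 1.0) -> float:
    """Logistic model: success becomes less likely as difficulty exceeds capability."""
    ll = 0.0
    for difficulty, success in results:
        p = 1.0 / (1.0 + math.exp(slope * (difficulty - capability)))
        ll += math.log(p if success else 1.0 - p)
    return ll

# Grid search for the capability value that best explains the outcomes.
capability = max((c / 10 for c in range(0, 151)), key=log_likelihood)
print(f"estimated capability (difficulty handled ~50% of the time): {capability:.1f}")
```

Unlike a single accuracy figure on one benchmark, an indicator of this kind is meant to support extrapolation: it predicts how the system should do on new items once their difficulty is known.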

Q.

Do you understand why that scares people?

A.

We are one species in the context of evolution, and ours is only one of the types of intelligence there can be.

Sometimes we believe we are sublime, but we got here through many evolutionary accidents.

The closest thing to us are the bonobos, and there is a significant leap because we acquired language; we believe we are the peak of the natural scale, and we are not.

With artificial intelligence, we ask ourselves where our place is.

The difference is that our evolution was given to us, and there is a fairly broad consensus that neither we nor anyone else should play at creating new species; but with artificial intelligence we are playing, and when you play you can get burned.

We are reaching levels of sophistication at which these games are no joke and must be taken seriously.

It's fascinating, it's like creating a new world.

Q.

The authors of the letter propose a roadmap for AI models in which results are presented in a more nuanced way and the evaluation results are made publicly available case by case.

A.

Yes. The level of scrutiny must be higher.

In other cases, given the training data, the algorithm and the code, I can run the system myself, but with these systems that is impossible due to the computational and energy cost.

Q.

But can they be more transparent?

A.

You can be transparent about the process.

What we ask is that they be more detailed about the results.

Access should be given to the details of each of the examples.

If there are a million examples, I want the results for each one of the million examples, because I don't have the capacity to reproduce all that, and not only because I don't have access to the computing power; and that limits what is basic in science, which is peer scrutiny.

We don't have access to the parts where it fails.
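
The interview does not say what form this per-example disclosure should take; a hypothetical record, if a lab chose to publish one per item, might look like the sketch below (all field names are assumptions, not any lab's actual format).

```python
# Hypothetical per-example evaluation record (field names invented, not any
# lab's published format): enough detail that third parties can inspect
# exactly where the system fails without having to re-run it themselves.
import json

example_result = {
    "benchmark": "some-benchmark",   # which test suite the item belongs to
    "item_id": "item-000123",        # identifier of the individual example
    "input": "…",                    # the prompt or question shown to the model
    "expected": "…",                 # the reference answer
    "model_output": "…",             # what the system actually produced
    "correct": False,                # graded outcome for this single item
    "grader": "exact-match",         # how the outcome was decided
}

print(json.dumps(example_result, indent=2, ensure_ascii=False))
# One such record per item (a million records if there are a million
# examples) is what per-example peer scrutiny would require.
```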

Q.

Is regulation a solution?

A.

It is necessary but it has to be done well.

If it is not regulated, there will certainly be a backlash.

If you don't regulate aviation, accidents happen, people lose confidence and the industry doesn't get off the ground.

If something big happens, society's reaction may be to turn against these systems, and in the medium and long term they will see less uptake and use than they could have as tools that, in general, are positive for society.

You have to regulate, but without braking too hard.

People are afraid of flying but we know that aviation regulations are among the strictest, that planes are one of the safest means of transport and companies know that, in the long term, it is beneficial for them.

Q.

Can there be a regulation for everyone, worldwide?

A.

There is an Atomic Energy Agency and there are agreements on recombinant DNA.

With genetically modified foods we have failed: countries do not agree, and in Europe we consume these foods but we cannot produce them, and that is what could happen to us here.

The EU regulation may have flaws, but we have to take the plunge and put it into operation.

Q.

Do you think this regulation should be strict or lax?

A.

I think the amount of regulation has to be tailored to each case.

It must be strict with the big players and more lax with the small ones.

You can't demand the same from Google as from a startup of four kids in college, because otherwise you kill innovation.

Q.

Has regulation once again lagged behind the science?

A.

The thing is that artificial intelligence moves very fast and there are things that cannot be anticipated.

It is difficult to regulate something that is so transversal, so cognitive.

We are slow, but we were also late with social networks and we took forever with tobacco.

Q.

Would it shed some light on how black boxes work?

A.

Black boxes do not explain what the system does.

To really know what it does, when it fails and what to expect from it, a lot of evaluation is needed.

To evaluate students we do not put them in a scanner; we give them an exam.

If we want to know how a car behaves, we want to know whether they have tested whether or not it skids off on a curve; knowing how many spark plugs it has will not help me, but knowing how many tests they have done will.

That is why the issue of evaluation is essential.

What we want is to test these systems until we can define in which areas they can be used safely.

This is how cars and planes are evaluated.

Q.

Why does artificial intelligence create such anxiety?

A.

Outreach efforts are being made but their goal is not to understand how it works.

The criticism of OpenAI is that it has given access to the most powerful artificial intelligence system to hundreds of millions of people, including children and people with mental health problems, with a clause saying that they are not responsible, and that is the culture we have nowadays.

We download applications and no one is responsible.

I think their reasoning has been: if we don't get people to use it, how are we going to know the risks?

But pilot tests can be done.

They say access is being rolled out gradually, but this is a race.

It is a challenge to Google in its search engine business, a bid to be the leaders.

And people are afraid because a few players dominate everything and it is an oligopoly.

