
Oriol Vinyals: "Our generation will see an artificial intelligence that equals or exceeds that of the human being"


The DeepMind research director explains in an interview with EL PAÍS what projects this leading Google company is working on and how machines have started a silent revolution that will change our lives

Ever since he saw 2001: A Space Odyssey as a child, Oriol Vinyals knew that he wanted to dedicate himself to artificial intelligence.

“I was very interested in how naturally HAL 9000, the computer, spoke.

Could we achieve something like that?” the teenager from Sabadell was already wondering.

Today, at 39, he is a world authority on deep learning, one of the most advanced techniques of artificial intelligence (AI).

His scientific articles have been cited tens of thousands of times, and his research has contributed to improving machine translation systems and the way machines interpret and classify images.

Elon Musk himself, not a man known for his modesty, responded gratefully to a tweet in which the Catalan praised a Tesla project.

Vinyals is the research director of DeepMind, a British company that Google bought in 2014 and that has made great strides in the discipline.



The company made its first headlines in the international press thanks to AlphaGo, a program that managed to beat a world champion of Go, the thousand-year-old Asian game whose board allows the stones to be placed in more arrangements than there are atoms in the universe.

The program not only outperformed the best, but invented never-before-seen moves along the way.

The Catalan joined Google in 2013, after receiving his PhD from the University of California, Berkeley.

Less than a year later he landed at the newly acquired DeepMind.

And in 2016, he led the team responsible for the company's next big milestone: AlphaStar, a program capable of beating expert StarCraft II players.

It is a real-time strategy video game with imperfect information (each player only sees the portion of the map they have explored) in which intuition, imagination and the cognitive skill to guess what the opponent is doing are key.

These were qualities that AI had not yet shown it could master.

Since then, he has been part of or supervised the teams behind AlphaFold, an artificial intelligence that has predicted the structure of all known proteins (about 200 million molecules), and AlphaCode, an automatic program capable of writing code at the level of the best programmers.

This same week, DeepMind has presented a new advance in games: DeepNash, an algorithm capable of playing Stratego like an expert human; Stratego is a board game probabilistically more complex than Go.

Vinyals receives EL PAÍS at DeepMind's London offices in the King's Cross neighborhood, which happen to be a stone's throw from those of Meta, Google's archrival.

"From my office window I can greet them," he says with a laugh.


When you have to tell someone what you do, what do you say?


It's hard to explain.

We develop machines capable of learning by themselves to play games.

Before, AI consisted of programming a series of specific instructions, for example, for the machine to say a series of sentences.

Now, with deep learning, what you do is have the system learn that when it sees the words “The sky is”, “blue” is likely to come next.

You teach it to predict that word by showing it thousands or millions of examples.

You keep shaping the system with an automatic algorithm until it is capable of stringing together meaningful sentences.

The magic is that when you give it input that is not part of the examples it has analyzed, that brain generalizes and is able to make a reasonable extrapolation.
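The prediction idea he describes can be caricatured with a toy frequency model. This is an invented illustration (the corpus and the one-word context are made up): real deep learning models use neural networks over long contexts, not word counts.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real systems train on millions of sentences.
corpus = [
    "the sky is blue",
    "the sky is clear",
    "the sky is blue today",
    "the grass is green",
]

# Count which word follows each previous word.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # the word that most often followed "is"
```

Unlike this word-count sketch, a trained neural network can also generalize to contexts it has never seen, which is the “magic” described above.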


Deep learning can be applied to almost any field. Why did you start with games?


Games are very useful in research because they offer a controlled environment for experiments (nothing happens if you win or lose) in which the objective, winning the game, is easy to define.

You can run 1,000 games in parallel without the expense of, for example, putting 1,000 robots to do things.

And simulations can be sped up, so you progress faster than if you were working in real time.

Vinyals, in one of the few spaces in the DeepMind offices accessible to staff from outside the organization. Carmen Valiño


Why were you put in charge of the AlphaStar project?


When I was young I played a lot of StarCraft in internet cafés in Sabadell.

And at Berkeley, a colleague and I developed a rather primitive simulator for that game.

When I came to DeepMind I came from Google Brain, the company's research project focused on deep learning.

I had worked on text translation and image classification systems, among others.

And, although it may not seem like it, the algorithms behind those machines have a lot to do with game simulators.

For example, in AlphaStar, the first step is to learn from the games that humans play.

You ask the algorithm, after studying many games and having seen what has happened in the current one, to tell you at a specific moment where the human will click next.

That first step is identical to what is used in text translation or to create natural language: after analyzing millions of words or phrases, you ask it to tell you which letter or word is most likely to come next in the conversation at any given time.


Then came AlphaFold. Are they related, beyond the name?


They are very different projects, although it is true that what we discover in one we transfer to the algorithms of the other.

We have applied the lessons learned with AlphaStar to architecture and systems optimization in natural language models and in AlphaFold, which has allowed us to unravel the structure of proteins.

The algorithms that we develop in each project are like tools that you accumulate and that you can apply in other applications.

Everything we've done so far is helping us, for example, in some work we're doing on nuclear fusion.


Nuclear fusion?


Yes. Achieving fusion is simple;

the hard part is extracting more energy than you invest.

In nuclear fusion, a kind of hollow, donut-shaped tube is used, with electromagnetic fields that are controlled at very high frequencies.

Inside the donut is the plasma, which you heat so much that energy is generated, because there comes a time when the atoms begin to fuse.

Our contribution here is in the control of those electromagnetic fields: you have to make sure that the plasma never touches a wall, that it stays where it should be.

To do this, you have to balance it very precisely and very quickly.

It is a very complex system.

And it's like a game: it's about optimizing the systems so that the plasma is well placed.

We are using reinforcement learning algorithms.

There are promising results.
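The control problem he sketches can be caricatured with a minimal reinforcement-learning loop. Everything here (the one-dimensional "position", the actions, the rewards) is invented for illustration and bears no relation to DeepMind's actual fusion controllers; it only shows the reward-driven trial-and-error idea of reinforcement learning.

```python
import random

random.seed(0)

POSITIONS = range(-3, 4)   # discretized positions; 0 is the target spot
ACTIONS = (-1, 0, 1)       # push left, do nothing, push right
q = {(s, a): 0.0 for s in POSITIONS for a in ACTIONS}

def step(pos, action):
    """Move, clip to the track, and reward staying near the center."""
    new_pos = max(-3, min(3, pos + action))
    reward = -abs(new_pos)  # 0 at the center, negative elsewhere
    return new_pos, reward

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for _ in range(2000):                       # training episodes
    pos = random.choice(list(POSITIONS))
    for _ in range(20):
        if random.random() < epsilon:       # occasional exploration
            action = random.choice(ACTIONS)
        else:                               # otherwise act greedily
            action = max(ACTIONS, key=lambda a: q[(pos, a)])
        new_pos, reward = step(pos, action)
        best_next = max(q[(new_pos, a)] for a in ACTIONS)
        q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
        pos = new_pos

# The learned greedy policy should push the "plasma" back toward the center.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in POSITIONS}
print(policy)
```

The real problem differs in every dimension that matters (continuous states, millisecond control rates, deep networks instead of a table), but the structure — act, observe a reward, improve the policy — is the same.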

“Creating an artificial intelligence that equals or exceeds ours will be the most profound scientific advance humanity will achieve.”


What else do you work on?


We are also trying to improve weather predictions by studying how clouds move.

If we manage to make planetary weather projections beyond a week, which is what can be done now, we will be able to better understand the consequences of the climate emergency.

It is a new field for us.

As a researcher, the most exciting thing about deep learning is that it really is a metascience: it can be applied to biology, physics, or whatever you want.

Deep learning has endless applications.


You are also developing an AI system that is a specialist not in a single task, but in several.

Is it your most ambitious project?


AI is often criticized for being specialized: a system may master something enormously relevant, such as nuclear fusion, yet understand nothing beyond its task.

We want to change that.

What we have achieved so far is 101% performance at playing Go, folding proteins or programming.

The future lies in multimodality: achieving performances of 10% or 20%, but across many or all tasks.

That is what we want to achieve with our Gato neural network.

At the moment, you can have a conversation with it by asking it questions in text or showing it an image for it to comment on.

It is also capable of playing simple video games and controlling a robotic arm.

The tasks it does are not perfect: it sometimes makes mistakes in simple matters, such as telling right from left.

But that will get better.

We will manage to develop a single algorithm that does it all.


Is Gato a first step towards a general artificial intelligence, one that equals or exceeds the human being?


Yes, clearly.

I think language processing is currently the most promising field towards a truly general artificial intelligence.

And this is achieved with algorithms that will create more general systems than the ones we use today.


AlphaCode is another good example: having systems that understand the language of code means that they can achieve much more general capabilities than we've seen before.

Lee Sedol, the South Korean Go champion, focused on the board after losing the last game of the tournament against DeepMind's 'AlphaGo' program, which defeated him 4-1. Lee Jin-man (AP)


Do you think our generation will ever see one of these general artificial intelligences?


Yes, I think we will live to see it.

But I also think that at first it will not be something that changes everything overnight.

The transition will be gradual, and in fact in the field of AI an evolution is already palpable.

We will see a series of jumps or transitions that will not seem incredible on their own, but that will add up, and that will be truly striking in hindsight.

In a few years, I don't know how many, the systems will increasingly be able to do more different things and with better efficiency: 20%, 30%... until reaching 100%.

As it will be progressive, people will get used to it.


This summer, a Google engineer said that the chatbot he was working on had gained consciousness.

Can machines feel?


It seems to me a very interesting debate.

I work in the guts of AI, so to speak, and clearly the machines have no consciousness.

Chatbots can tell you what time it is and other things like that, but they have very basic limitations.

One of them is that they are not aware of their own existence.

Another very obvious one is that they have no long-term memory: you start from scratch with each conversation, and they contradict themselves.

In any case, I think it is very useful to speak publicly about these issues.
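The statelessness he describes can be illustrated with an invented toy bot; the rules, names and replies below are all made up. A bot that recomputes its answer from the current message alone forgets everything from earlier turns, while one that carries a memory across turns does not.

```python
def stateless_reply(message):
    """Answers from the current message only; earlier turns are lost."""
    lower = message.lower()
    if "my name is" in lower:
        return "Nice to meet you!"
    if "what is my name" in lower:
        return "I don't know."   # no record of any earlier turn
    return "Tell me more."

def stateful_reply(message, memory):
    """Same toy bot, but carrying a memory dict across turns."""
    lower = message.lower()
    if "my name is" in lower:
        memory["name"] = message.rsplit(" ", 1)[-1]
        return "Nice to meet you, %s!" % memory["name"]
    if "what is my name" in lower and "name" in memory:
        return "Your name is %s." % memory["name"]
    return "Tell me more."

memory = {}
stateful_reply("My name is Oriol", memory)
print(stateful_reply("What is my name?", memory))
```

Real conversational models are vastly more capable than this sketch, but the architectural point stands: without state carried between conversations, each one starts from scratch.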


The most advanced conversational models do not have a semantic understanding of what is being said to them, but they are capable of producing the answers that someone who does understand what is being asked would give.

So are they smart?


The part that interests me the most about this is the utilitarian part.

It is true that if we manage to teach these algorithms to play games and verify that they have understood them, then you can analyze what process they followed to get there.

Whether or not that is intelligence, I hardly care.

I understand that for someone who studies the human brain it can be interesting.

My mathematical training leads me to think that what is relevant is the fact of getting a machine to perform a task in a way that is indistinguishable from how a human would.


Are we prepared as a society to accept more advances of this type?


I think that achieving a general artificial intelligence will be one of the most profound scientific advances that humanity can achieve, because we do not even understand our own intelligence, despite the many advances of neuroscientists.

We need to talk more about it, about its implications.

Philosophers, sociologists and historians have more and more to say about our work.

You have to think about the long-term consequences of AI.

Source: elpais

All tech articles on 2022-12-04