
Anders Sandberg, neuroscientist: "We are at the beginning of history and we have a responsibility not to mess things up too much"

2022-12-08


The Oxford researcher believes that the biggest threats to humanity are created by humans themselves


When it comes to the long-term future, a young person may worry about whether they will have a pension when they retire or what the planet will be like when their great-grandchildren are in college.

But what about what comes next?

For Anders Sandberg (Solna, Sweden, 50 years old), a researcher at the Future of Humanity Institute at the University of Oxford, thinking in the long term means thinking about what will happen thousands of years from now.

The goal is to ensure that the humans of the next centuries and millennia have a chance to be born and survive.

Sandberg is a computational neuroscientist and part of a philosophical current known as longtermism, which studies the far future.

To get there, humanity will have to survive a series of threats.

“There are natural ones, like asteroid impacts and supervolcanoes, but the probability of those wiping out humanity over a century is much less than climate change or nuclear war,” he says.

To the list of low-probability but high-impact risks, Sandberg adds artificial intelligence (AI), which, if misused, can lead to systemic chaos.

As he argues, the biggest threats are created by humans themselves, and that is where the risk lies; even so, he remains optimistic.

"Decisions can be made now," the researcher told EL PAÍS, after giving a conference on the future of humanity at the National Center for Oncological Research (CNIO) in Madrid.

Sandberg also studies other philosophical questions related to cognitive enhancement, human cryonics, and the search for extraterrestrial life.

Q.

How does climate change compare with the risks posed by new technologies or artificial intelligence?

A.

The risks of artificial intelligence are currently close to zero, but many researchers think they will grow and become very important, perhaps even in the near future.

The interesting thing is that we can avoid them.

We can work on creating safety mechanisms for artificial intelligence, as well as for biotechnologies.

The risk decreases if we do our job well.

Climate change is complicated because it is systemic: it affects the economy, politics, the supply chain, the ecosystem.

And that means it requires very different solutions.

With AI, even if we figure out how to build it safely, we still need governments and laws.

That, to me, is what guarantees safety.

Q.

What technologies could get out of control?

A.

People often mention robots behaving badly, but I think advisory systems could be the most dangerous.

Imagine a program created to advise a company and make it more profitable.

If it's good enough, it makes sense to use it, because it gives good advice.

Executives who do not follow its advice could be fired. That advice maximizes the company's income, but it does not respect ethics.

In the end, it turns the company into something more ruthless, not because it's bad, but because it maximizes profit.

And this could be replicated by other companies.

If there is a law that prohibits something, they will hire lawyers to find a way around it.

The situation becomes more problematic the more powerful the system is.

If we have very powerful systems capable of creating very intelligent technologies, both may have dangerous effects.

Q.

Any examples?

A.

Humans can ask a computer for a doomsday weapon or a robotic arm, and that amplifies malicious capabilities.

The other problem is that we want the computer to do something, but it doesn't understand why we want it.

So the real risk could be that we get powerful technologies that alter the way our society is run, and that humans lose control over that process.

To some extent, we are already suffering from this, because states, corporations and large institutions already use artificial intelligence, but they are still made of people and rules.

Automating the whole process could mean a world controlled by large systems that are not human, don't care about values, and just drift off in a random direction that nobody deliberately chose.

Q.

Meta created the Galactica AI platform with the promise of helping science.

But it took the platform down after just three days because of the fake content it produced.

Is this an example of how a poorly designed artificial intelligence could screw up systems?

A.

Galactica is very good at producing things that look like scientific explanations, based on what is written on the internet.

But it doesn't tell the truth; it makes up things that merely seem true.

This is dangerous because in science you should not invent things.

It is very important to be correct, even if it is boring.

So we could end up with systems that make up news and facts, all false, which would confuse us a great deal.

Above all, considering that artificial intelligence systems are trained on what is on the internet.

If there is a lot of bogus scientific content that looks very serious, because it has the correct equations and is written in an academic style, we may end up with even more deceptive systems in the future.

"The truth is very valuable and also very fragile. Many of us don't know how to handle it well." Anders Sandberg, researcher at the Future of Humanity Institute at the University of Oxford. Photo: Luis Sevillano

Q.

Should we be afraid of the future?

A.

Being afraid of something means wanting to run away from it.

The future is exciting and terrifying, and full of possibilities.

It's like a video game, a jungle gym or a playground: there are dangerous things you need to be careful about.

There are also very interesting things.

And there are things that we must play with to grow and be better.

So we shouldn't be afraid of the future, but be hopeful and make sure it's worth it.

Q.

Will future generations think that the people of today were criminals against the planet?

A.

To some extent, we blame our ancestors of thousands of years ago for driving mammoths to extinction, but they knew nothing about ecology, or even that they could make the mammoth extinct.

Future generations may have opinions about what we are doing, and some may even be right.

Sometimes we assume that the future is going to be more sensible, with more knowledge and resources.

But we are at the beginning of history and we have a responsibility not to mess things up too much.


Q.

Is the next step space?

A.

Space is one of many stops.

We are expanding into computational space, with virtual reality and our understanding of artificial intelligence, but also on a psychological level.

In the future, probably the relatively near future, we will take the first steps to try to venture out into the universe.

That may not be the most convenient place to live, but it helps to have a backup.

We must have backups of our civilization, and as far away as possible.

Q.

Do you think that millionaires and space explorers like Elon Musk use the possibility of a future on Mars as a pretext for not taking charge of current problems?

A.

There is always a fight between the different problems.

Should I save people suffering from malaria or think about poverty?

Do I have to face poverty when pandemics can occur?

Pandemics may not be as severe as nuclear wars.

There are many things to work on.

We can argue about what the priorities are, but in the end you will have to select one.

Some people use the distant future as an excuse, but many others escape the great problems of humanity by focusing only on the present.

There are people who go to work in the fight against poverty because they do not dare to tackle a nuclear war.

That is also an excuse.

In practice, we should have as many people as possible trying to solve as many problems as possible.

Sometimes the solution to one problem is also useful for another.

Thanks to the search for solutions to Covid, we now have mRNA vaccines.

That seems to be creating a revolution to help cure many other diseases, including non-pandemic ones.

Space gives us many useful tools to recycle on Earth; making food for space also seems useful as an emergency resource on our planet.


Q.


Is it possible to have infinite development on a finite planet?

A.

Maybe.

Many people say that economic growth cannot go on forever, but this thinking assumes that economic value is embodied in things.

The Mona Lisa does not contain many kilograms of material.

However, its value is enormous, and we could learn to appreciate art more and value the work even more.

Value, which is really what economic growth is all about, is not so tied to quantity of matter.

Similarly, technological development often means doing more with less material.

Modern planes are lighter than old ones because they are built with better materials and use less fuel.

In fact, many advanced technologies require fewer resources than in the past.

In the long term, infinite development does not fit on a finite planet, and it is much safer to spread out.

But to say that it is better to have less technology is nonsense, because it means less efficiency.

Going low-tech is a luxury today.

Q.

What discoveries do you expect to find in the next decade?

A.

I'd love to see a good way to bring human values into AI, so we can figure out what we really want when we ask a computer to do something.

And for machines even to be able to say: "I'm not going to do it, because I don't fully understand the problem."

Currently, AI does what the human tells it to do, which is dangerous.

Similarly, I think we need to work hard to get better sources of energy.

I'm glad to see the progress on fusion.

I suppose that solar energy is going to become much more powerful, and that we are going to see atomically precise manufacturing, nanotechnology, which is going to revolutionize the world.

We will be able to do things in a cleaner way, with fewer resources and less energy.

We will also create better ways to recycle materials and build more efficient computers.

Q.

A few years ago, you said that many scientists were afraid of ruining their careers by pursuing studies on cryonics, for example.

Does the same thing happen with longtermist researchers?

A.

Yes. Many people in astronomy, for example, think it's entirely reasonable to spend a lot of effort trying to understand the universe's past, but they get angry when I ask them to use the same equations to predict a few hundred billion years into the future.

They argue that science has to be checked against reality, and that predictions over such long timescales cannot be tested.

But there are climate forecasts that are projected into the future and are quite important for setting policies.

There are many methodological arguments in science and some of them are very relevant, but I think many people only focus on one way to use their knowledge and are not aware that it can be applied to other domains.

I think it's worth trying to learn as much as we can about the future.


Source: EL PAÍS
