
Brad Smith, president of Microsoft: “We must have a way to slow down or turn off artificial intelligence”

February 20, 2024



The executive is in favor of governments and civil society putting pressure on the industry to regulate AI: “The more powerful a technology becomes, the stronger the controls that accompany it have to be”


There is no denying that Brad Smith is a lawyer.

The president of Microsoft navigates with extreme caution when he talks about the turbulent waters in which artificial intelligence is immersed right now.

It is the most important technology that has been created, he says, since the invention of the printing press, but he does not deny that there are problems with its use and control, from cyberattacks by countries like Russia or North Korea and the thorny protection of copyright, to what he confesses is his greatest concern: the use of deepfakes to alter election results, in a year in which nearly half the planet will go to the polls.

Smith (65, born in Milwaukee, USA) presented an agreement last Friday at the Munich Security Conference, along with twenty other companies, to improve the technological response to these deceptions and increase the transparency of the companies that control AI.

“The difference between the promise and danger of new technology has rarely been more striking,” says Smith.

The president of Microsoft, today the most valuable company in the world, met yesterday in Madrid with Spanish Prime Minister Pedro Sánchez to sign an agreement on the use of AI that represents Microsoft's largest investment in Spain in its 37 years in the country: 2.1 billion euros.

It is a very important alliance, says Smith, because if Spain “does not use AI in government, in healthcare or in the economy, I don't see how it can sustain its long-term growth.”

Q.

I asked Copilot, Microsoft's generative AI, to come up with a question for you.

It suggested this: “What is your vision of the role of artificial intelligence in today's society, and how do you think we can guarantee its ethical and responsible development?”

I don't know what you think of the question. Can Copilot take my job away?

A.

No, no.

There are three questions there, the two from Copilot and yours.

And let me start with yours, because I think it's important.

Copilot is a co-pilot.

It is not a pilot.

You may have used it to prepare for this interview, but it is a tool.

It helps you, it gives you ideas, but ultimately you will need to use your own judgment.

And I think that's what we should ask of all people who use this technology: to use it to be more creative, to develop ideas, to help with writing, but delegating or outsourcing your thinking to a machine would be a mistake.

Regarding the other two questions: today and tomorrow [yesterday and today relative to publication] we are announcing new initiatives with the Spanish Government around cybersecurity and responsible AI.

Why does that matter?

At the end of last year, Spain ranked fourth in Europe in per capita use of AI.

But when you look at AI skills, it ranks 14th. And in creating AI-based software, 15th. This shows that there is a gap between the use of AI and the skills to create it.

Spain is an incredible place to live.

It is a very prosperous country in many ways, but it is not growing rapidly.

The population is aging.

The unemployment rate is 11%.

And those challenges need to be addressed with creativity and urgency.

Part of the solution is the use of AI.

We must create the jobs of the future.

Q.

We are fascinated and scared by the AI revolution.

You have been working in this industry for many years, more than 30.

Is this technology really that revolutionary?

A.

I believe that artificial intelligence is the most important invention for the human mind since the invention of the printing press.

And that was almost 600 years ago.

Think about the printing press and what it accomplished: it made it possible for people to write, for others to read, and for market economies to flourish.

Something very similar happens with artificial intelligence, since it is a tool that can help you think in a different way.

Maybe it helped you think of questions for a conversation, and that is an injection of creativity.

It is enormously valuable for finding patterns in large amounts of data and offering insights that advance fields such as drug discovery.

If we use it well, it can be an accelerator for people and what they do.

Q.

However, the IMF warns that artificial intelligence will affect 60% of jobs in advanced economies and, for the first time, will impact the most highly qualified workers…

A.

This is a very important problem that we must address, but, first, I would put it in context.

What percentage of jobs have been impacted by the advent of digital technology, the personal computer or the mobile phone in the last 40 years?

It's probably an even higher percentage.

And yet, we have been adapting to this change for nearly the entire working lives of everyone working today.

Many of these jobs have been affected and some have disappeared.

The real lesson of the last 40 years is that if people can stay ahead of the curve in knowing how to use technology, they are likely to be successful in their careers.

Their jobs can take them places they didn't necessarily anticipate.

There is going to be a real impact on the way we work, yes.

And this should inspire us and also alert us to the urgency of learning to use this technology and getting better at whatever it is we want to do.

Q.

You mentioned creation, and this is one of the aspects most affected by AI.

Your company has been sued by The New York Times for copyright infringement…

A.

I think this is a natural and inevitable aspect of a new technology that impacts how people create and distribute what they write.

There are two legal questions we are going to face.

One is relatively easy and the other is more complex.

The easy one is to ask what you do if an artificial intelligence system generates output that copies something protected by copyright.

That is a violation of the law, there is no doubt about it.

And we've done two things to address this.

First, we built a complete architecture around Copilot and other tools [Copilot cites sources in its answers, unlike some other generative AI systems].

And second, we have told all of our clients that this is our legal problem, not theirs.

If they use our system correctly, we are responsible, not them.

And then there is another question, more uncertain.

Can you train an AI by having it read all the works in the world?

It has always been understood that you can read as much as you want and remember as much as you can.

I met with a government official in Germany last week who said that he had read, by his estimate, about 5,000 books in his lifetime.

When he gives a speech, he doesn't have to step back and ask, where did I read this?

Do I have to give credit to where I first got this idea?

We all have the right, under copyright law, to read and learn.

The question is whether we can allow machines to learn in the same way.

And I think there's a social imperative to make that possible.

Advancing the capabilities of this new technology will require that it be able to learn widely.

And more than that: if you really want to open this new industry to open source developers and academics, beyond the large companies, it is essential to be open to that type of learning.

At the same time, none of us should want this new technology to put creators, including newspapers like The New York Times, out of business.

I think we'll have to find a way to balance that learning with giving creators the ability to continue earning a good living.

Q.

Is it possible that you reach an agreement, not only with The New York Times, but with other creators and authors?

A.

There are three objectives that we must take into account.

Goal number one: ensure that technology can advance and reward the creators of today and tomorrow.

Number two is to make sure that this moves forward in a way that makes this content widely available to everyone on affordable terms.

And goal three is to think about the impact for companies that also control much of the content.

The New York Times may seem like a big content owner, but compared to YouTube, it's tiny.

We need to think about the other places where these content repositories exist, and we need to make sure they are open to everyone on affordable terms, and not reserved for the one company that owns them to develop its own model.

Q.

The EU has become the first place in the world to regulate AI.

What do you think of this law?

A.

We need a level of regulation that guarantees safety.

I'm sometimes surprised when I look at people in the tech sector who say we shouldn't have that regulation.

When we buy a carton of milk in the supermarket we don't worry about whether it is safe to drink, because we know that there is a safety basis that guarantees it.

If this is, as I believe it is, the most advanced technology on the planet, I don't think it's unreasonable to ask that it have at least as much safety regulation as we have for a carton of milk.

Regarding the [European] AI Act, the good news is that it creates this protection.

The AI Act does this: it examines safety standards and imposes a baseline for these models.

And it's not much different from what the UK or the US is doing.

Brad Smith, during his conversation with EL PAÍS. Photo: Claudio Álvarez

I also think we have to be careful.

We need to have security without creating a level of burdensome administration that would increase costs, especially for startups.

The companies I've heard express the most concern about the AI Act are not the largest.

We, frankly, have the infrastructure to comply with it.

It's the startups that may not be able to.

I'm not saying this to criticize the AI Act, but it's what I hear, especially in countries like Germany or France.

Everything will depend on the implementation.

Q.

You have defended a “safety brake” for AI in the US Senate.

What does it consist of?

A.

It's about addressing what people often describe as an existential threat to humanity: an uncontrolled AI that would seek to wipe out humans.

This is what we see in Terminator and fifty other science fiction films.

One of the things that stands out to me, after 30 years in this industry, is that life often imitates art.

It's amazing that there are 50 movies with the same plot: a machine that can think for itself decides to enslave or exterminate humanity, and humanity fights back and wins by shutting the machine down.

What does that tell us?

That we have to have a way to slow down or turn off AI, especially if it is controlling an automated critical-infrastructure system.

It's been nine months since I first introduced this concept, and what's most striking to me is that everywhere I go, the conversation is more or less the same.

It begins with this concern: “My God, there is this existential risk.”

And then people say: “That's decades away; we don't have to focus on it now.”

We have the ability to solve more than one problem at a time.

Let's solve today's and tomorrow's: the best time to face a problem is before it happens.

We know how to do it: every bus and every train has an emergency brake.

Q.

Microsoft and OpenAI just published a report on how China, Russia, North Korea and Iran are using AI for increasingly sophisticated cyberattacks.

What can be done to prevent them?

A.

First, we need to recognize the problem.

In this study, Microsoft and OpenAI [in which Microsoft is a major investor] found that these four states were using generative AI in cyber operations and cyber influence campaigns.

We are not going to allow state actors to engage in this type of harmful behavior using our applications, because we believe it would likely harm the world.

But we also need to use AI to fight back and create stronger cybersecurity protection.

Q.

On Friday you announced an agreement to fight deepfakes [manipulated video, image or voice files] in electoral processes.

What does it consist of?

A.

This agreement is very important.

First of all, because so is the problem.

There are going to be elections this year in more than 65 countries and throughout the European Union.

And we are seeing the rapid use of deepfakes to try to mislead the public about, for example, what a candidate has said.

With this technology agreement we focus on three things.

One is to better protect the authenticity of content with credentials and watermarks.

Secondly, detect deepfakes and remove them when they are intended to deceive the public.

And, third, we must focus on citizen education.

And this is a huge priority for Microsoft.

Personally, I probably spent more time on this one issue between the end of last year and Friday than on anything else, because we think it's critical to address it.

This matters a lot because it affects electoral processes but, unfortunately, it also feeds other problems, such as financial fraud and cyberbullying, especially of children and women.

Those two groups are especially vulnerable.

And we will have to get this right, because if we don't, this technology will make the world worse instead of better.

Q.

But the same industry that signs agreements like this makes technologies that cause these problems available to everyone.

OpenAI unveiled its video AI, Sora, on Thursday; its AI-generated videos are almost indistinguishable from reality…

A.

The more powerful a technology becomes, the stronger the safeguards and controls that accompany it have to be.

I think we're all going to have to try harder.

And the industry will likely benefit if it is also pressured by government and civil society, because the magnitude of the liability and potential impact is very high.

After the Christchurch Call [an agreement by companies and governments to remove violent extremist content from the internet after the massacre of the same name], we all had to adapt.

And I think we're going to need to adapt again.

What I hope is that we remember it took a mass shooting streamed online to open people's eyes to what could go wrong.

I'm encouraged by Friday's agreement, but we're going to have to go even faster.

And most importantly: we are going to have to bring together more people.

The biggest mistake the tech sector could make is thinking that it is already doing enough and can do what it needs to if left alone.
