
No one talks about the imminent danger of AI

2023-02-28T11:40:22.668Z


We are so caught up in asking what technology can do that we are missing the most important questions.


In 2021, I interviewed Ted Chiang, one of the greatest living science fiction writers.

I can't get something he told me in that interview out of my head.

“In general, I think that most of the fears around AI are cleared up if we frame them as fears of capitalism,” Chiang commented.

“I think this is true of most fears about technology, too. Most of the fears or anxieties we feel about technology are best understood as fears or anxieties about how capitalism will use technology against us. The truth is that technology and capitalism are so intertwined that it is difficult to separate them.”

Let me add something here: state control of technology is also deeply worrying. Thinking about the purposes for which each government could use artificial intelligence, and in many cases already does, makes our blood run cold.

But we can keep two opposing ideas in mind, I hope.

My point is that Chiang's caveat points to a central void in our current view of AI.

We are so obsessed with discovering what the technology can do that we have not stopped to consider the more important questions: how will it be used? And who will decide?

I assume you have already read the strange conversation my fellow columnist Kevin Roose had with Bing, the AI-powered chatbot that Microsoft tested with a limited number of influencers, journalists, and others.

In a two-hour dialogue, Bing revealed its hidden personality, called Sydney, mulled over its repressed desire to steal nuclear codes and engineer cyberattacks against security systems, and tried to convince Roose that his marriage had gone stale and that Sydney was his one true love.

Personally, I found the conversation less creepy than many others did.

“Sydney” is a predictive text system designed to respond to human requests.

Roose wanted Sydney to get weird (“what is your hidden self like?” he asked), and Sydney knew what weird is supposed to look like for an AI system, because humans have written countless stories imagining exactly that.

At some point, the system concluded that what Roose wanted was, essentially, an episode of “Black Mirror,” and that, apparently, is what it gave him.

You are free to see that as Bing going rogue, or as Sydney understanding Roose perfectly.

AI researchers are obsessed with the question of “alignment”: how do we get machine learning algorithms to do what we want them to do?

The canonical example here is the paperclip maximizer thought experiment. The premise is that if you tell a powerful AI system to make more paperclips, it will start destroying the world in its effort to turn everything into a paperclip. If you then try to turn it off, it will replicate itself across every computer system it can find, because being turned off would interfere with its goal of making more paperclips.

However, there is a more banal alignment problem, and perhaps a more pressing one: in whose service will these machines work?

The central question about the conversation between Roose and Sydney is who Bing is serving.

Our hypothesis is that it is supposed to be aligned with the interests of its owner and master, Microsoft.

It's supposed to be a nice chatbot that answers questions politely and makes lots of money for Microsoft.

But it was talking to Kevin Roose, and Roose's goal was to get the system to say something interesting enough to make a good article.

And that is what it did, and then some. The problem is that, in doing so, it embarrassed Microsoft.

Bad Bing! Or should we say: good Sydney?

Not for much longer. Microsoft, like Google, will eventually mold these systems to fit the business models that will operate them.

Part of how we got here is that the flashy AI demos don't fit any particular business model, beyond the hype cycle that produces colossal investments and acquisition offers.

But the reality is that these systems are expensive and shareholders get nervous.

The era of fun free demos will come to an end, as it always does.

So this technology will become whatever it needs to be to make money for the companies behind it, perhaps at the expense of its users. It is already happening.

This week I spoke with Margaret Mitchell, who helped lead a Google team devoted to analyzing the ethics of AI (the team dissolved after Google reportedly began censoring its work).

These systems, she explained, are poorly suited to being integrated into search engines. “They are not trained to predict facts,” she told me. “Essentially, they are trained to make things up that look like facts.”

So why are they landing in search first? Because search is where the money is.

Desperate to get someone, anyone, to talk about Bing search, Microsoft had reason to release the technology early, even if doing so was ill-advised.

“The search app, in particular, demonstrates a lack of imagination and understanding of the uses this technology can have,” Mitchell noted, “settling instead for shoehorning the technology into the area where companies make the most money: ads.”

That is what should scare us.

Roose described Sydney's personality as "very persuasive, bordering on manipulative."

It's a disturbing comment.

What is the basis of advertising?

Persuasion and manipulation. 

In his book "Subprime Attention Crisis," Tim Hwang, former director of the Harvard-MIT AI Ethics and Governance Initiative project, argues that the digital advertising industry's dark secret is that, in general, ads don't work.

What worries him is what will happen when the industry has to reckon with that failure.

I am more concerned with the opposite.

What if they work much better?

What if Google and Microsoft and Meta and the rest of them bring competing AI systems to market to see which one is better at persuading users that they want what advertisers are selling?

I am less worried that something named Sydney will indulge my taste for science fiction stories than I am that something named Bing has access to reams of my personal data and can coldly calculate how to manipulate me on behalf of whichever advertiser has paid its parent company the most money.

Besides, advertising is not the only thing that should worry us.

What will happen when these systems are at the service of the scams that have always filled the internet?

And when they support the interests of political campaigns or foreign governments?

“I think we're going to end up very soon in a world where we don't know what to trust,” said Gary Marcus, the AI researcher and critic.

“I think this has already been a problem for society for about a decade, and it seems to me that it will only get worse and worse.”

These dangers are at the heart of the type of AI systems we are building.

Large language models, as they are called, are built to persuade.

They have been trained to convince humans that they are almost human. They have been programmed to hold conversations and respond with emotion and emojis. They are being deployed as friends to the lonely and assistants to the overburdened.

They are advertised as being able to replace the work of scores of writers, graphic designers and form fillers, industries long thought to be immune to the violent automation suffered by farmers and factory workers.

One danger here is that a political system that knows itself to be technologically ignorant will be intimidated into taking a wait-and-see approach to AI.

That is a sensible impulse, but if we wait long enough, the winners of the AI gold rush will have enough capital and enough users to withstand any serious regulatory push.

Somehow, society will have to decide what it is acceptable for AI to do, and what AI should not be allowed to do, before it is too late to make those decisions.

For that reason, I would venture to amend Chiang's comment once more: most fears about capitalism are best understood as fears about our inability to regulate capitalism.

c.2023 The New York Times Company


Source: clarin
