
Why AI boosters sign so many apocalyptic manifestos


Highlights: In just two months, thousands of experts have called for artificial intelligence to be paused, observed and regulated. The creators of this technology, its main investors and thousands of academics have warned of extraordinary dangers. The first two signatories of the latest manifesto are Geoffrey Hinton and Yoshua Bengio, two winners of the 2018 Turing Award, the "Nobel" of computer science. The one "father" of AI who did not sign is the Frenchman Yann LeCun, who has been explaining on Twitter that it is difficult to regulate something when we do not yet know what it will look like.

The creators of this technology, its main investors and thousands of academics have warned of extraordinary dangers in recent weeks.

As if the world didn't already have a variety of options for total calamity, we now have a new one: artificial intelligence. In just two months, thousands of experts have called for it to be paused, observed and regulated. First, in March, more than 30,000 people, led by Tesla co-founder Elon Musk and Apple co-founder Steve Wozniak, called for a six-month halt to artificial intelligence (AI) research. Then one of the fathers of these developments, Geoffrey Hinton, left Google to warn of its possible dangers; soon after, Sam Altman, head of the company behind the field's most successful product, ChatGPT, went to the US Congress to say that "anything can go very wrong", and is now touring the world to, among other things, warn of epic dangers.

As if this arsenal were not enough, on Tuesday another manifesto, only 22 words in its original English, was published: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signed by 350 people, its first two signatories are precisely Hinton and Yoshua Bengio, winners of the 2018 Turing Award, the "Nobel" of computer science, as "fathers" of the AI revolution. Three other illustrious signatories are the leaders of today's top AI companies: Altman, of OpenAI; Demis Hassabis, of DeepMind (owned by Google); and Dario Amodei, of Anthropic.

1. What needs to be regulated?

Why all this sudden grandstanding against something spectral and still unknown? There are several answers, some more sincere, others more cynical. First, it is notable that one of the three "fathers" of AI did not sign. The Frenchman Yann LeCun, who won the 2018 Turing Award alongside Hinton and Bengio, has spent days explaining on Twitter why he does not share the existential fear: his view is that it is difficult to regulate something when we do not yet know what it will look like.

More information: The US and EU will propose a voluntary "code of conduct" for generative artificial intelligence

"Super-human AI is nowhere near the top of the list of existential risks. In large part because it doesn't exist yet. Until we have a basic design for even dog-level AI (let alone human level), discussing how to make it safe is premature," he wrote on Twitter. In a recent talk, LeCun used this metaphor: "How can you design seat belts for a car if the car doesn't exist yet?"


LeCun is not the only one pointing to that ignorance about the future. Altman himself believes that a real leap in AI capability will require ideas no one yet has: "A system that cannot contribute to the sum of scientific knowledge, that cannot discover new fundamental science, is not a superintelligence. And to get there, we will have to expand the GPT model in pretty important ways that we are still short of ideas for. I don't know what those ideas are. We are trying to find them," he said in a recent interview.

This existential grandstanding attributes to AI a capability we do not yet know it has.

2. Why more manifestos?

If there was already a manifesto in March, why another? The clear difference in this week's short manifesto is that it is signed by industry leaders. Altman (OpenAI), Hassabis (Google), Amodei (Anthropic) and Hinton (formerly Google) did not sign the first one, which called for a moratorium on the development of these programs: evidently, these companies do not want to pause their research. The rest of the signatories include a small portion of the more than 31,000 people, mainly academics, who already supported the original March manifesto. Its promoters, the Future of Life Institute, emailed those signatories on May 19 urging them to again "join Hinton, Bengio and Amodei" because it is "essential to normalize and legitimize the debate on the most serious risks of AI."

3. A smokescreen?

A few days before the short manifesto, Altman and two other OpenAI leaders published an article titled Governance of Superintelligence. In it, they asked to stop worrying about current models and focus instead on legislating for future danger: "Today's systems will create enormous value in the world and, although they carry risks, the level of those risks is commensurate with other Internet technologies. By contrast, the systems we are concerned about will have power beyond any technology created so far."

In London, during his European tour, Altman said that OpenAI would consider withdrawing ChatGPT from the continent if the European Union pressed ahead with what he called "over-regulation"; Parliament will debate its AI rules in a plenary session on the 13th. One legislative option is to force companies with models such as ChatGPT to disclose the copyrighted data in the corpora used to train their machines, a contentious scenario. Days later, the OpenAI leader tweeted that they were no longer considering leaving Europe. Reports now suggest the company is even looking for a European headquarters.

Thus, one possible explanation behind the manifestos is that companies like OpenAI prefer that politicians debate apocalyptic future existential risks, rather than focusing on laws that may complicate their expansion today.

Watermarking, external audits, feedback loops - just some of the ideas discussed with @AnthropicAI and @sama @OpenAI for the #AI #CodeOfConduct launched today at the #TTC in #Luleå @SecRaimondo Looking forward to discussing with international partners.

— Margrethe Vestager (@vestager) May 31, 2023

With these threats, Altman also ensures he will be consulted when it comes time to propose laws. If policymakers see these models as incomprehensible, they will need the help of "experts" inside the companies to regulate them, says Jake Browning, a New York University researcher on the philosophy of AI who did not sign the manifesto. Indeed, European Commissioner for Competition Margrethe Vestager publicized her meetings with Altman and Amodei this week, just after announcing her imminent proposal for a voluntary "code of conduct". A day later, Altman met with Commission President Ursula von der Leyen.

Brussels is more focused on the here and now: "The EU is a threat [to these companies] because it ignores the AI hype and looks only at the consequences, treating these new models as services," says Browning. And he asks: "Is the information provided by GPT reliable? Does it comply with existing privacy standards, such as the right to be forgotten? Does it respect copyright? Will Google's AI search lead to monopolies? Do companies market these products honestly? On all those metrics, these language models fare poorly: they are prone to hallucination; they train on and reproduce private data; they do not respect copyright law; they are designed to entrench the power of big tech and increase its advertising revenue; and they are marketed deceptively, without due warning about their limitations."

4. The tale of the end of the world

The end of the world is a debate that generates great interest in Silicon Valley. Altman himself has said that we need to think more about human extinction. But critics of these exaggerated manifestos believe that benefiting humanity is not among the priorities of their famous signatories. Timnit Gebru, an expert in computational ethics at Stanford University, warned of the biases and dangers of these models well before this explosion. Not only was she ignored; Google fired her.

Today Gebru sees only hypocrisy in this debate about the unpredictable superintelligence of the future: "This is a movement that has been decades in the making, with the same multi-billion dollar funders of OpenAI, DeepMind and now Anthropic. Its ideological father, Nick Bostrom [author of the bestseller Superintelligence], talks about 'dysgenic pressures', the idea that those considered stupid reproduce too much, which would be an existential risk to humanity; he also said that 'Blacks are stupider than whites' and insulted us. Do they want us to believe that they are the ones who care about humanity? Where were they when we raised awareness about the real harms and faced a backlash? It's almost like a coordinated attack to distract us from holding accountable the organizations actually causing harm."


Émile Torres, of Leibniz University Hannover, has been studying these theories of future annihilation for years: "There is a lot of focus on AI, but not so much on AI companies. This attention gives the companies a kind of free rein. It redirects the gaze from what they are doing to this kind of mysterious, extraterrestrial mind that will supposedly arise by some law of technological progress." He adds: "This utopian vision was invented by a handful of super-privileged rich white guys, and now they are trying to impose it on the rest of the world." For these thinkers, also called transhumanists, surviving would require a brain more privileged than the human one: a digital one.

5. What if it's true?

Alongside these possible reasons, we must not ignore the most straightforward reason to support these manifestos: genuinely believing that artificial intelligence poses an existential danger. That the signatories, even though a good number have obvious economic interests, sincerely fear an end of the world caused by poorly managed AI.

There are thousands of researchers from dozens of disciplines, with no ties to industry, who believe that research should be paused and its existential risks monitored. EL PAÍS asked 10 signatory academics from different fields, from physics and computer science to law, economics and psychology, about their motives. Their answers can be grouped into four points:

(a) Speed of development. For anyone outside the AI sector, the speed of innovation is inconceivable. It is perhaps the basic fear: "Not so long ago, the danger of machines posing an extinction risk seemed fanciful," says Andrew Briggs, emeritus professor of nanomaterials at the University of Oxford. "The pace of progress in AI is now accelerating so fast, as shown by engines like ChatGPT, that it has become urgent to look for security measures before it's too late."

Speed can bring unforeseen problems, says Juan Pavón, professor of Software Engineering and AI at the Complutense University: "Progress on large AI models is outpacing our understanding of them, and since these are complex systems, with a multitude of interactions between their components, unwanted and unforeseen effects can occur."

(b) Ignorance about how they work. Ignorance is another factor that worries the signatories. "AI programs like GPT-4 and its likely successors are extremely complex systems, and we don't really understand them," says Alessandro Saffiotti, professor of computer science at Örebro University in Sweden. "Even so, we could end up delegating decisions to them that are critical for the planet and our society: power plants, financial transactions or even military systems. If companies don't pause the deployment of these systems until we understand them better, we must prepare for potentially disastrous scenarios."

"It's about 'what we don't know we don't know', i.e. the potential for serious problems that can arise in an unforeseen way," warns Henning Grosse Ruse-Khan, professor of law at the University of Cambridge. "The potential of AI is so significant that we have no realistic way to predict, or even guess, its consequences."

(c) Doubt requires prudence. The 22-word statement of the latest manifesto is easy to endorse on simple risk analysis. The text in effect asks: if a pistol were loaded with 3 bullets in a magazine of 100, would you pull the trigger? Even though we do not know whether that gun exists, the natural response is to try to remove those 3 bullets from the magazine. "It's important to talk about it because of the great uncertainty out there," says Edoardo Gallo, professor of economics at the University of Cambridge. "We have very little idea of the odds. I'm pretty sure the risk of AI extinction in the next 100 years is very small, but I'm also pretty sure it's not zero."

In the end, the debate boils down to a gamble. Browning, who did not sign, is comfortable dismissing the risks altogether. "If you think language is the core of intelligence, you may be inclined to think that a talking machine is one step away from superhuman intelligence," he explains. But he doesn't think so: "Philosophically, I don't think superintelligence makes sense as a concept; technically, I don't think anything that happens under the label of 'AI', however broad, poses an existential threat."

Faced with that doubt, Bojan Arbutina, professor of astrophysics at the University of Belgrade, prefers to err on the side of caution: "The threat may be exaggerated, but if it is not, we will not have time to reconsider, so I think it should be taken seriously. We cannot grasp all the risks. A superintelligence could, for example, perceive us humans the way we see insects or even bacteria," he says.

(d) There are many other problems. But there is no need to go that far, says Helena Matute, professor of psychology at the University of Deusto, who asks that "existential risk not be conflated with the discussion about consciousness and intelligence; it has nothing to do with that." For Matute, the number of challenges facing humanity is no excuse not to look at AI: "Limiting the discussion only to the risks that some already consider obvious is to lag behind the problem. Global agreements must be reached soon to minimize the risks of AI, all of its risks. I don't understand why some people think they have a kind of permission to say, 'This can be regulated, but not that.'"

These experts also call for immediate, current regulation, which other parties with economic interests may dismiss: "My goal in highlighting the existential threats of AI is the exact opposite of trying to rule out short-term harms," says Michael Osborne, professor of AI at the University of Oxford. "Instead, I wish to emphasize that we are not doing enough to govern AI, a technology that today is tightly controlled by a small number of opaque and powerful tech companies."



Source: El País, 2023-06-02
