A group of scientists and leaders of the artificial intelligence (AI) industry signed a troubling joint statement on Tuesday: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The statement was signed by figures such as Demis Hassabis, CEO of Google DeepMind, Dario Amodei, CEO of Anthropic, and OpenAI's founder, Sam Altman, among others.
It also bears the signatures of heavyweights in the scientific field: Geoffrey Hinton – who has been dubbed the "godfather" of AI and spent part of his career at Google – and Yoshua Bengio, two of the three researchers who won the 2018 Turing Award (the "Nobel Prize" of computer science) for their contributions to AI.
The text was posted on the site of the Center for AI Safety, a San Francisco nonprofit. It consists of a single sentence and explains little: it does not substantiate why there would be a "risk of extinction" associated with artificial intelligence, nor why the comparison with pandemics and nuclear war is warranted.
Geoffrey Hinton, a pioneer of artificial intelligence, left Google earlier this month to warn of "the dangers" of the technology. Photo: Archive
The statement comes in a year in which generative artificial intelligence is undergoing exponential growth: since ChatGPT became popular for generating text, and Midjourney and Stable Diffusion for images, every tech giant has begun developing systems in this direction, as Google did with Bard and Microsoft with Copilot, both AI assistants meant to make their products more accessible to users.
However, this is the second time this year that AI has been publicly and categorically questioned. In March, Elon Musk and more than a thousand experts signed an open letter calling for a six-month pause on research into AI systems more powerful than GPT-4, warning of "great risks to humanity."
Following the letter, Musk insisted that AI could "cause the destruction of civilization." Then Bill Gates, founder of Microsoft, predicted that AI could displace teachers. And Warren Buffett, the legendary investor – and friend of Gates – likewise compared artificial intelligence to the atomic bomb.
What sets this new joint statement apart is that it explains very little, venturing a catastrophic scenario without substantiating it.
What "existential risk" is, and how real it is
This type of framing corresponds to what the field calls the "existential risk" of artificial intelligence.
"The idea of existential risk rests on an ill-founded concept: that an intelligence superior to the human could decide to extinguish humanity. It is somewhat in line with the film Terminator and its Skynet program, which becomes self-aware and decides to turn against humans," explains Javier Blanco, who holds a PhD in Computer Science from Eindhoven University of Technology in the Netherlands.
"Technologies such as adversarial neural networks and machine learning have no chance of constituting anything like that: they are very elementary technology, based on recognizing statistical patterns. Generative systems like ChatGPT work the same way but in reverse – they generate from classifications – and they pose no risk of a kind of intelligence that could be an existential threat to humanity," he adds.
Adversarial neural networks generate new data by pitting different algorithms against each other. Machine learning is a branch of artificial intelligence whose techniques make computers "learn": their performance improves with use (something very evident in ChatGPT, for example).
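That notion of "improving with use" can be illustrated with a toy example (this sketch is ours, not from the article, and is far simpler than the systems behind ChatGPT): a perceptron, one of the most elementary machine-learning models, adjusts its weights each time it makes a mistake until its predictions match the examples it is shown.

```python
# Illustrative sketch: a minimal machine-learning loop in which a
# perceptron "learns" the logical AND function from labeled examples.
# Each wrong prediction nudges the weights toward the correct answer,
# so performance improves with repeated use of the training data.

def train_perceptron(samples, epochs=20, lr=1):
    """Fit weights (w1, w2, bias) to (x1, x2) -> label samples."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # 0 when the prediction is correct
            w1 += lr * err * x1         # adjust only on mistakes
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
preds = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(preds)  # after training: [0, 0, 0, 1], matching the AND labels
```

The point of the example is Blanco's: the mechanism is statistical pattern-fitting, with nothing resembling intention or self-awareness involved.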
The idea of the extermination of humanity remains a chimera for Blanco: "That this is a long-term risk is about as likely as a giant asteroid falling and destroying the Earth. That some technology could lead to hybrid or artificial cognitive entities, and that those entities would be interested in destroying the human race, is a completely remote possibility," he adds.
Still, for the expert, who is also a professor at the Faculty of Mathematics, Astronomy, Physics and Computing of the National University of Córdoba (UNC), these technologies do carry concrete risks, and they have nothing to do with the existential.
"There are labor risks: job losses. And there are developments that make impersonation and deception much more feasible (fake news, disinformation); that is a fact, and it is one of the problems. All of this has consequences that are difficult to measure today, but they already have a social impact: there is genuine concern," he warns.
He also flags as a concern the concentration of these technologies in a small handful of companies: "It is important to be able to distinguish genuine concerns and possible solutions – which do not necessarily coincide with what the corporations are pursuing – from speculative concerns that are unlikely in many of the nearest futures."
"These risks do not at all resemble the scenario of a pandemic or a nuclear war. Moreover, unlike a pandemic or a nuclear war, the functioning of AI is effectively invisible: it is not in the public sphere," he says.
Thus, the environment in which artificial intelligence technologies are being developed is uncertain. "We believe the benefits of the tools we have developed so far vastly outweigh the risks," Altman said in his testimony before the US Congress.
Statements like Tuesday's hardly support that perspective, and they crown a paradoxical strategy: the very people developing the most powerful artificial intelligence tools are the ones signing a declaration warning of the possible extermination of humanity.