Sam Altman, CEO of OpenAI, has once again defended his popular creation, ChatGPT. As fears grow about controlling this technology, the executive again dismissed the possibility that generative artificial intelligence could one day turn against humanity.
At the same time, the company's CEO admitted that he carries with him the equivalent of the nuclear briefcase of US President Joe Biden: a device capable of shutting down all the servers that host his AI in the event of an apocalypse.
It is, in this case, a blue backpack that the American entrepreneur and technology investor carries whenever he travels or appears in public.
What's in Sam Altman's Blue Backpack
Altman has become, in recent months, a new "star" of Silicon Valley. (Photo: Bloomberg)
The American outlet Business Insider has even gone so far as to claim that Altman believes in the apocalypse and is prepared for it. "I worry, of course. I mean, I worry a lot about it," he said.
According to several media reports, a Mac computer would be the "nuclear weapon" capable of stopping the technology if its intentions turned malicious.
All of this points to a contingency plan Altman keeps in case of a rebellion by artificial intelligence, and the contents of that blue backpack would be his main answer.
He has prepared before: in 2016, he told The New Yorker that, for a hypothetical end of the world, he had stockpiled "weapons, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces and a large piece of land in the south."
A group of scientists and leaders of the artificial intelligence (AI) industry signed a troubling joint statement in May: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Sam Altman was one of the leading figures who signed this declaration, alongside personalities such as Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic, among others.
This kind of concern corresponds to what the field calls the "existential risk" of artificial intelligence.
OpenAI, the developer behind ChatGPT. (Photo: AP)
"The idea of existential risk rests on an ill-founded concept: that an intelligence superior to humans could decide to extinguish humanity. It is somewhat in line with the movie Terminator and the Skynet program, which becomes self-aware and decides to turn against humans," Javier Blanco, PhD in Computer Science from the University of Eindhoven, Netherlands, explained to Clarín.
The truth is that, for Altman as for other artificial intelligence gurus, this remains unexplored territory, and many worry that at some point the technology will surpass human beings. Altman himself has not hidden his fears about the threat it could pose to the world.