The Limited Times


The mysterious device the creator of ChatGPT carries to "shut down" artificial intelligence in an emergency

2023-08-14T18:16:56.783Z

Highlights: Sam Altman, CEO of OpenAI, has once again defended his popular invention, ChatGPT. As fears grow about controlling this technology, the developer again ruled out the possibility that generative artificial intelligence could one day turn against humanity. The company's CEO confessed that he carries with him the equivalent of the nuclear briefcase of the president of the United States, Joe Biden: a device capable of shutting down all the servers that house his AI in case of an apocalypse.


Sam Altman rules out that AI could turn against humanity, but he still has a plan B.


Sam Altman, CEO of OpenAI, has once again defended his popular invention, ChatGPT. As fears grow about controlling this technology, the developer again ruled out the possibility that generative artificial intelligence could one day turn against humanity.

In fact, the company's CEO confessed that he carries with him the equivalent of the nuclear briefcase of the president of the United States, Joe Biden: a device capable of shutting down all the servers that house his AI in case of an apocalypse.

The device travels in a blue backpack that the American entrepreneur and technology investor carries with him whenever he travels or appears in public.

What's in Sam Altman's blue backpack

In recent months, Altman has become a new "star" of Silicon Valley. (Photo: Bloomberg)

The American outlet Business Insider goes so far as to claim that Altman believes in the apocalypse and is prepared for it. "I worry, of course. I mean, I worry a lot about it," he said.

According to several media reports, a Mac computer would be the "nuclear weapon" capable of stopping this technology if its intentions turned malicious.

All of this points to a contingency plan born of Altman's fear of an artificial intelligence rebellion, and the contents of that blue backpack would be his main answer to it.

This is not new: in 2016, he told The New Yorker that, for a supposed end of the world, he had stockpiled "guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land" in the south.


A group of scientists and leaders of the artificial intelligence (AI) industry signed a troubling joint statement in May: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Sam Altman was one of the leading figures to sign this declaration, along with personalities such as Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, of Anthropic, among others.

This kind of position responds to what the field calls the "existential risk" of artificial intelligence.

OpenAI, the developer behind ChatGPT. (Photo: AP)

"The idea of existential risk rests on an ill-founded concept, which implies that an intelligence superior to the human one could decide to extinguish humanity. It is somewhat in line with the movie Terminator and the Skynet program, which becomes self-aware and decides to turn against humans," Javier Blanco, PhD in Computer Science from Eindhoven University of Technology, Netherlands, explained to Clarín.

The truth is that, for Altman and the other gurus of artificial intelligence alike, this is still unexplored territory, and many worry that at some point the technology will surpass human beings. Altman himself has not hidden his fears about the threat it could pose to the world.

SL

See also

Worldcoin in Argentina: controversy grows over the project of the creator of ChatGPT that scans the iris in exchange for dollars

Alert by Bard and ChatGPT: the new hidden danger behind the rise of artificial intelligence

Source: clarin


