Artificial intelligence poses "profound risks to society and humanity," and for this reason the "training" of the most advanced systems should be "paused" for at least six months.
The alarm comes from Elon Musk and 1,000 other researchers and executives who, in an open letter, call for a halt, or failing that a government moratorium, to avert the much-feared 'Terminator scenario' and allow time to develop shared safety protocols.
The letter, published by the nonprofit Future of Life Institute, carries high-profile signatures and is therefore being taken very seriously.
It is signed by Apple co-founder Steve Wozniak, by the co-founders of Pinterest and Skype, and by the founders of the artificial intelligence start-ups Stability AI and Character.ai.
And then, of course, there is the billionaire visionary owner of Tesla, who aspires to enhance the human brain with Neuralink after having helped radically transform the space industry with SpaceX and revolutionize the car industry, paving the way more than a decade ago for electric cars that drive themselves.
A tech enthusiast, Musk has invested in AI companies and once sat on the board of directors of OpenAI, the industry behemoth on which Microsoft has bet billions of dollars.
The big names are targeting not artificial intelligence as a whole but systems more advanced than GPT-4, the OpenAI model capable of telling jokes and easily passing exams such as the bar.
"Powerful AI systems should be developed only when there is confidence that their effects will be positive and their risks manageable", reads the letter, which speaks of a "runaway race to develop and deploy powerful digital minds that no one, not even their creators, can understand, predict and control".
A race which, if not stopped, risks making reality of the sci-fi 'Skynet' of James Cameron's Terminator films and its decision to destroy humanity; or, more prosaically, risks putting these powerful systems in the wrong hands, with potentially devastating fallout.
Precisely this fear of catastrophic consequences has prompted the tech heavyweights to ask for a six-month break to be used for the "joint development and implementation of shared protocols" that ensure the "safety" of these systems "beyond a reasonable doubt", the letter continues, urging AI laboratories to focus on making current systems more "accurate, transparent, credible and loyal" rather than rushing to develop even more powerful ones.
"The company has paused on other technologies with potentially catastrophic effects - continues the letter -. We can do it here too. Let's enjoy a long summer of AI, and do not rush into the autumn unprepared".