A 'kill switch': a button to shut down an Artificial Intelligence system if necessary, modelled on the mechanisms developed to prevent the unauthorized launch of nuclear weapons.
This is the intriguing proposal that emerges from a document by several academic institutions, including the Leverhulme Centre at the University of Cambridge, the Oxford Internet Institute and Georgetown University, together with several researchers from OpenAI, the company behind ChatGPT.
The document considers a series of traceable elements of Artificial Intelligence: computing capacity ("detectable and quantifiable"), the large infrastructures that support it, the chips used to train AI models, which are produced by a relatively small number of companies, and the supply chain constraints on semiconductor manufacturing.
“These factors give regulators the means to better understand how and where AI infrastructure is deployed, who can and cannot access it, and to impose sanctions for misuse,” the document argues, highlighting several ways in which policymakers can regulate AI hardware.
One such measure is a 'kill switch': a button to disable, even remotely, the use of Artificial Intelligence in malicious applications or in case of violation of the rules, along the lines of the safety locks used on nuclear weapons.
An intriguing tool which, however, as the researchers themselves observe, could backfire: if used badly or in the wrong hands, it could block the development of artificial intelligence.
Another critical point of the document is the participation of researchers from OpenAI, the company behind ChatGPT, which on the one hand calls for regulation but on the other must do business, churning out new AI systems at a rapid pace.
The latest is Sora, which generates realistic one-minute videos starting simply from a text prompt, a potential risk for cinema and for the spread of deepfakes.
Reproduction reserved © Copyright ANSA