Artificial intelligence poses "a risk of extinction" for humanity on the same scale as "pandemics or nuclear war", and mitigating that risk should be "a global priority". This succinct statement, published Tuesday by the Center for AI Safety, was signed by more than 350 engineers, researchers, and prominent figures in artificial intelligence, the vast majority of them American. Above all, it was signed by the heads of some of the largest companies in the industry, as well as by many of their employees: OpenAI, Google, and Anthropic.
Although approached by the Center for AI Safety, Meta (formerly Facebook) is notably absent. "Superhuman AI is far from being at the top of the list of existential risks for humanity, largely because it does not yet exist," Yann LeCun, head of fundamental AI research at the social media giant, said on Twitter. His colleague Joëlle Pineau adds