Artificial intelligence poses an existential threat to humanity and should be treated as a societal-scale risk on par with pandemics and nuclear war. An open letter signed by more than 350 executives and researchers and released by the nonprofit Center for AI Safety reads: "mitigating the risk of extinction" from artificial intelligence "should be a global priority alongside other societal-scale risks." Signatories include OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei.