Sam Altman, CEO of ChatGPT maker OpenAI, now wants to limit his own AI. He cites nuclear energy as a comparison of similar proportions.

Altman wants to put safeguards in place that would take effect should ChatGPT become a threat to humanity. He can imagine the industry jointly agreeing on a development cap that would limit how far ChatGPT and other super-intelligent systems are allowed to advance per year. Companies that develop such technology would have to comply with extremely high safety standards. What such a control mechanism might look like remains unclear.