ChatGPT shows what artificial intelligence is capable of. Nevertheless, the technology poses great dangers. Even the inventor wants to regulate his monster.
USA – ChatGPT became available to the masses at the end of 2022 and has been making headlines ever since. Artificial intelligence inspires and frightens in equal measure. While some rave about the infinite possibilities of the tool, others fear that ChatGPT could make people obsolete or even cause disasters. Even the inventor Sam Altman wants to regulate his creation.
| Non-profit company | OpenAI |
| Location | San Francisco |
| Industry | Research / Artificial Intelligence |
| Founded | 2015 |
| Founders | Elon Musk and Sam Altman |
| Stock exchange | No OpenAI shares, as the company is a foundation |
ChatGPT inventor wants to regulate his own creation
What happened? Sam Altman, CEO and co-founder of OpenAI, together with Greg Brockman and Ilya Sutskever, published a blog post on the regulation of ChatGPT. In it, the three recommend subjecting AI and its development to strong controls and limits so that the technology is used correctly and purposefully. As an illustrative comparison of similar proportions, they cite nuclear energy.
[Image: ChatGPT – even the inventor wants to regulate his work. © OpenAI / IMAGO: Alexander Limbach (montage)]
How does Sam Altman plan to regulate ChatGPT? Altman wants to limit chat AI in a number of ways. On the one hand, he wants to install security measures that will take effect should ChatGPT become a threat to humanity. He cites the IAEA, the International Atomic Energy Agency, as an example of this. It reports to the United Nations Security Council if it detects threats to international security.
On the other hand, he could imagine the industry jointly agreeing on a development limit. This would cap how far ChatGPT and other super-intelligent systems are allowed to evolve per year. Corporations that develop such technology would have to meet extremely high standards and act responsibly.
- Regulatory body that tightly controls the development of super-intelligent AI systems such as ChatGPT
- Developers jointly agree on an annual development limit
- Democratic input from people around the world to determine what an AI can and cannot do
In addition, Altman suggests putting the design of ChatGPT to a democratic process: the people who use such super-intelligent systems would themselves determine what the AI can and cannot do. What such a control mechanism might look like is still unclear, but OpenAI is experimenting in this direction because, in Altman's opinion, it would be "risky and difficult to stop the development of AI completely". The technology simply has too many advantages for that.