ChatGPT inventor wants to tame his own monster

2023-05-25T12:21:22.332Z

Highlights: ChatGPT has been making headlines ever since it became available to the masses in 2022. Even its inventor, Sam Altman, wants to regulate his creation. He wants to install security measures that will take effect should ChatGPT become a threat to humanity. He could also imagine jointly agreeing on a development limit, which would cap how far ChatGPT and other super-intelligent systems are allowed to evolve per year. In addition, Altman suggests leaving the design of ChatGPT to a democratic process. In that case, the people who use the AI would determine for themselves what it can and cannot do.



ChatGPT shows what artificial intelligence is capable of. Nevertheless, the technology poses great dangers. Even the inventor wants to regulate his monster.

USA – ChatGPT became available to the masses at the end of 2022 and has been making headlines ever since. The AI inspires and frightens in equal measure. While some rave about the infinite possibilities of the tool, others fear that ChatGPT could make people obsolete or even cause disasters. Even its inventor, Sam Altman, wants to regulate his creation.

Non-profit company: OpenAI
Location: San Francisco
Industry: Research / Artificial Intelligence
Founded: 2015
Founders: Elon Musk and Sam Altman
Stock exchange: no OpenAI share, as the company is a foundation

ChatGPT inventor wants to regulate his own creation

What happened? Sam Altman, CEO and founder of OpenAI, together with Greg Brockman and Ilya Sutskever, published a blog post about the regulation of ChatGPT. In it, the three recommend subjecting AI and its development to strong controls and limits in order to use the technology correctly and purposefully. They cite nuclear energy as an illustrative comparison of similar proportions.

ChatGPT: Even the inventor wants to regulate his work. © OpenAI / IMAGO: Alexander Limbach (montage)

How does Sam Altman want to regulate ChatGPT? Altman wants to limit the chat AI in various ways. On the one hand, he wants to install security measures that will take effect should ChatGPT become a threat to humanity. As an example, he cites the IAEA, the International Atomic Energy Agency, which reports to the United Nations Security Council if it detects threats to international security.

In addition, Altman could imagine jointly agreeing on a development limit. This would cap how far ChatGPT and other super-intelligent systems are allowed to evolve per year. Corporations that develop such technology would have to comply with extremely high standards and act responsibly.

  • Regulatory body that tightly controls the development of super-intelligent AI systems such as ChatGPT
  • Developers agree on one development limit per year
  • Democratic input from people around the world to determine what an AI can and cannot do

In addition, Altman suggests leaving the design of ChatGPT to a democratic process. In that case, the people who use such super-intelligent systems would determine for themselves what the AI can and cannot do. What such a control mechanism might look like is still unclear, but OpenAI is experimenting in this direction. In Altman's opinion, it would be "risky and difficult to stop the development of AI completely"; the technology has too many advantages for that.

Source: merkur
