Elon Musk and more than a thousand experts call for a pause on advances in artificial intelligence: "There are great risks for humanity"

2023-03-29T11:25:14.444Z


They demanded a six-month halt to research in the sector until there is a legal framework that can prevent its possible harmful consequences.


Elon Musk and more than 1,000 world experts signed a call on Wednesday for a six-month pause on research into artificial intelligences (AIs) more powerful than ChatGPT 4, the OpenAI model released this month, warning of "great risks to humanity".

In the petition posted on the futureoflife.org site, they call for a moratorium until safety systems are established, with new regulatory authorities, oversight of AI systems, techniques that help distinguish the real from the artificial, and institutions capable of coping with the "dramatic economic and political disruption (especially for democracy) that AI will cause."

It is signed by personalities who have expressed fears of an uncontrollable AI surpassing humans, including Musk, owner of Twitter and founder of SpaceX and Tesla, and historian Yuval Noah Harari.

Sam Altman, the head of OpenAI, the company that designed ChatGPT, has acknowledged being "a little afraid" that his creation could be used for "large-scale disinformation or cyber attacks."

"The company needs time to adjust," he recently told ABCNews.

"In recent months we have seen AI labs launch into a headlong race to develop and deploy increasingly powerful digital brains that no one, not even their creators, can reliably understand, predict, or control," they say.

"Should we allow machines to flood our information channels with propaganda and lies? Should we automate all jobs, including rewarding ones? (...) Should we risk losing control of our civilization? These decisions should not be delegated

to unelected tech leaders

," they concluded.

Signatories include Apple co-founder Steve Wozniak, members of Google's DeepMind AI lab, Stability AI chief Emad Mostaque, as well as American AI experts and academics, and engineers from OpenAI partner Microsoft.

The complete manifesto calling to halt advances in AI

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as demonstrated by extensive research[1] and recognized by leading AI labs.[2] As set out in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and must be planned for and managed with appropriate care and resources.

Unfortunately, this level of planning and management is not happening, despite the fact that in recent months AI labs have entered an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict, or control.

Contemporary AI systems are now becoming competitive with humans at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and falsehood?

Should we automate all jobs, including the fulfilling ones?

Should we develop non-human minds that could eventually outnumber, outsmart, and replace us?

Should we risk losing control of our civilization?

Such decisions should not be delegated to unelected technology leaders.

Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects.

OpenAI's recent statement regarding artificial general intelligence states that "at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."

We agree.

That point is now.

Therefore, we call on all AI labs to immediately pause training on AI systems more powerful than GPT-4 for at least 6 months.

This pause must be public and verifiable, and include all key stakeholders.

If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should take advantage of this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.

These protocols must ensure that the systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, just a step back from the perilous race to ever larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems.

These should include, at a minimum: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish the real from the synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; robust public funding for technical AI safety research; and well-resourced institutions to cope with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI.

Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" where we reap the rewards, design these systems for the clear benefit of all, and give society a chance to adapt.

Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here.

Let's enjoy a long AI summer, not rush unprepared into a fall.

Letter notes and references 

[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Bostrom, N. (2016). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and Near-Term AI as a Potential Existential Risk Factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).

Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.

Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.

Cohen, M., et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3), 282-293.

Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

Hendrycks, D., & Mazeika, M. (2022). X-Risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.

Ngo, R. (2022). The Alignment Problem from a Deep Learning Perspective. arXiv preprint arXiv:2209.00626.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weidinger, L., et al. (2021). Ethical and Social Risks of Harm from Language Models. arXiv preprint arXiv:2112.04359.

[2] Ordóñez, V., et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: "A little bit scared of this". ABC News.

Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

[3] Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4. arXiv:2303.12712.

OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.

[4] There is ample legal precedent; for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk."

[5] Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

Source: Clarín
