
Can we really pause artificial intelligence, as Elon Musk asks?

2023-03-30T16:37:46.885Z


FIGAROVOX/TRIBUNE - Tech personalities, including Elon Musk, signed an open letter on March 29 calling for the development of AI to be suspended. Such a pause would make it possible to legislate and better control progress in this field, analyzes Laetitia Pouliquen, director of the think tank NBIC Ethics.


Laetitia Pouliquen is director of the think tank NBIC Ethics and co-author of the Open Letter to the European Commission on Robotics and AI, signed by 285 European experts, which denounced the creation of a specific legal personality for autonomous robots.

The open letter "Suspend AIs more powerful than ChatGPT"

,

signed by thousands of experts including the leaders of Tesla, Apple, DeepMind, Google's AI laboratory, or Yuval Noah Harari, author of Homo Deus , published yesterday, March 29, 2023;

the same day, a belgian suffering from eco-anxiety for months and “supported” in his daily life by his conversations with a conversational robot developed by OpenAI, commits suicide.

Finally, as of this writing, Sam Altman, CEO of OpenAI, has yet to ratify the open letter.

This letter is a solemn appeal to the scientific community and to governments:

"We call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."


As early as 2014, Elon Musk considered AI to be the "greatest existential threat" to humanity and compared it to "summoning the demon". At the time, Musk was investing in AI companies such as OpenAI, not to make money but, as he put it, "to keep tabs on the technology in case it got out of hand."

At the Asilomar conference in 2017, scientists established ten principles of AI ethics:

1/ algorithmic and judicial transparency;

2/ moral responsibility of designers and builders;

3/ alignment of the goals and behaviors of AI systems with human values;

4/ respect for human dignity, rights, freedoms and the cultural diversity of the human race;

5/ confidentiality of personal data;

6/ freedom to act privately;

7/ benefit and shared prosperity;

8/ human control;

9/ non-subversion;

10/ rejection of an AI-based arms race.

It is on the basis of these principles that many European governments and tech companies such as Microsoft or Oracle have developed their ethics guidelines (note, however, that on March 14 Microsoft disbanded its AI ethics team).

In 2020, the Vatican even partnered with the giants Microsoft and IBM to promote the ethical development of AI and to call for the regulation of facial recognition.

Elon Musk's recent speech in Dubai at the 2023 World Government Summit once again reminded us of the unprecedented dangers of artificial intelligence, which, for greater scientific and anthropological accuracy, we should rather call "algorithms".

Surprisingly, some of the signatory experts somewhat resemble arsonist firefighters. They denounce the very "disruptive" technologies that they themselves have put on the market.


Take the example of DeepMind, a subsidiary of Google, many of whose researchers signed the open letter. This research lab aims to "solve intelligence to advance science and benefit humanity".

In July 2022, DeepMind announced the development of DeepNash, a learning system capable of playing the board game Stratego at the level of a human expert. In itself, having a machine beat a human being at a board game has no moral value, but the technology developed through this research is in fact used to model an artificial brain that DeepMind aims to make comparable to the human brain in terms of creativity, intuition, perception, language, analysis, and so on.

This work heralds the hybridization of humans with AI: the human brain would then be able to connect to the cloud, via a link to the neocortex, as well as to external robots and to any other connected person.


Raymond Kurzweil, futurist and former Google engineer, is a transhumanist and founder of Singularity University, whose central concept is the Singularity, the moment when collective human intelligence would be surpassed by artificial intelligence.

His current projections point to 2045, when our understanding and capabilities in artificial intelligence (AI), autonomous systems, robotics, biotechnology and nanotechnology are expected to exceed all combined human intelligence.

Converging timelines?

On March 28, 2023, Ray Kurzweil announced that humans would achieve immortality within eight years: genetics, biotechnology, nanotechnology and robotics would lead to the development of anti-aging "nanorobots". These nanorobots would repair aging cells and tissues and take over the functions of most vital organs.


The Turing test, named after Alan Turing, a founding father of AI, evaluates whether a machine can pass an oral or written interview while remaining indistinguishable from a human; passing it would mark the tipping point toward "strong" AIs that would definitively dominate human intelligence.

One can consider that ChatGPT already fulfills the conditions of the Turing test.

AI's arsonist firefighters, such as Ray Kurzweil and the researchers of DeepMind or OpenAI, are therefore ideologues; they leave the sphere of science to propose a new hybrid human-machine humanity: transhumanity.

However, it must be noted that legislators restrict research only temporarily, that the law often lags behind science, and that nothing seems able to stop the technological machine, which has run wild with, among other things, the commercial release of ChatGPT.

Biotechnology, through its recent advances, is an illustration of this. It is inseparable from artificial intelligence, through the convergent and progressive use of disruptive technologies: nanotechnology, biotechnology, information technology and cognitive science, otherwise known as NBIC technologies.


For example, NBIC technologies make it possible to create synthetic bacteria, to create human-animal chimeras, and to produce stem cells at high throughput. Gene-editing technologies (CRISPR-Cas9, among others) enable gain-of-function research.

And yet, as early as 1975, the Asilomar Conference expressed opposition to the patentability of the human genome; the Convention on Human Rights and Biomedicine, known as the Oviedo Convention (1997), ratified by France in 2011, prohibits cloning, the creation of embryos for research purposes, and irreversible genetic modification of the human genome. Finally, in 2016 the European Commission's science ethics group advocated a ban on irreversible genetic modification of the human genome.


It should be noted that, since then, France has made human embryos from IVF and embryonic stem cells available for research without reservation.

In 2021, a French team published research on the creation of human-ape chimeras, arguing that these results would allow a better understanding of early human development and of primate evolution.

In 2019, the French biologist Philippe Marlière created a viable bacterium whose DNA includes a synthetic compound absent from the living world. He claimed that the techniques of xenobiology (biology foreign to the living) were used to prevent any exchange between natural living organisms and these micro-organisms.

The precautionary principle, clearly, does not always guide scientific work.


So what should we make of the open-letter initiative by Musk and other experts?

What media and financial pressure could be brought to bear on the companies targeted by the "pause button" that this group of experts is asking of the developers of powerful algorithms?

How long will "strong" AIs remain distinguishable from humans in every area, before the critical boundary between human and machine is no longer discernible?

How can we reaffirm who the human being is and halt the technological "enhancements" that would disfigure him, to the detriment of his humanity?

These are abyssal questions, with harmful and unquantifiable consequences.

We can only applaud Elon Musk's initiative and hope that the current scientific hubris will claim no victims other than this man, who confused an algorithm with a human being.

"Science without conscience is but the ruin of the soul," Rabelais reminds us in Pantagruel, thereby criticizing those who recognize no limit or finitude in human nature.

Let us affirm that our technological future must remain humanistic, respectful of our dignity, our freedom and our vulnerability.

Source: lefigaro
