
The danger of human extinction linked to AI is "exaggerated", according to specialist Gary Marcus

2023-06-04T08:41:11.482Z

Highlights: Artificial intelligence (AI) specialist Gary Marcus says the danger of human extinction linked to AI is "exaggerated". Marcus designed his first AI program in high school and founded Geometric Intelligence, a machine-learning company later acquired by Uber. In March, he co-signed a letter from hundreds of experts calling for a six-month pause in the development of ultra-powerful AI systems. The author of the book "Rebooting AI" does not think the technology should be discarded altogether.


In March, the New York University professor emeritus co-signed a letter from hundreds of experts calling for a six-month pause in the development of ultra-powerful AI systems.


Artificial intelligence (AI) specialist Gary Marcus has spent the past few months alerting his peers, elected officials and the public to the risks associated with the development and ultra-rapid adoption of new AI tools. But the danger of human extinction is "exaggerated," he said in an interview in San Francisco. "Personally, and for now, I'm not very worried about it, because the scenarios are not very concrete," says the professor emeritus at New York University, who was in California for a conference. "What worries me is that we're building AI systems that we don't control well," he continues.

Gary Marcus designed his first AI program in high school, software to translate Latin into English, and after years of studying child psychology he founded Geometric Intelligence, a machine-learning company later acquired by Uber. In March, he co-signed the letter from hundreds of experts calling for a six-month pause in the development of ultra-powerful AI systems like those of the start-up OpenAI, in order to ensure that existing programs are "reliable, safe, transparent, loyal (...) and aligned" with human values. But he did not sign the succinct statement from business leaders and specialists that made a splash this week.

Sam Altman, the head of OpenAI, Geoffrey Hinton, a prominent researcher formerly at Google, Demis Hassabis, the head of DeepMind (Google), and Kevin Scott, chief technology officer of Microsoft, among others, call for action against the risks of "extinction" of humanity "related to AI".


An 'accidental war'

The unprecedented success of ChatGPT, OpenAI's chatbot capable of producing all kinds of text from simple prompts in everyday language, has sparked a race among the tech giants for this so-called "generative" artificial intelligence, as well as many warnings and calls to regulate the field.

Warnings have come even from those building these computer systems in pursuit of "general" AI, with cognitive abilities similar to those of humans. "If you really think it's an existential risk, why are you working on it? It's a legitimate question," Marcus said. "The extinction of the human species... It's quite complicated, actually," he says. "You can imagine all kinds of plagues, but people would survive."

There are, however, realistic scenarios in which the use of AI "can cause massive damage," he points out. "For example, people could succeed in manipulating markets. And maybe we would blame the Russians, and attack them when they had nothing to do with it, and we could end up in an accidental, potentially nuclear, war," he said.


'Authoritarianism'

In the shorter term, Gary Marcus is more concerned about democracy, because generative AI software produces increasingly convincing fake photographs, and soon videos, at little cost. According to him, elections are therefore likely to "be won by the people most adept at spreading disinformation. Once elected, they will be able to change the laws (...) and impose authoritarianism."

Above all, "democracy is based on access to the information needed to make the right decisions. If no one knows what's true or not, it's over." The author of the book "Rebooting AI" does not, however, think the technology should be discarded altogether. "There is a chance that one day we will use an AI that we have not yet invented, which will help us make progress in science, in medicine, in elder care (...) But for now, we are not ready. We need regulation, and we need to make programs more reliable."

At a hearing before a US congressional committee in May, he advocated the creation of a national or international agency to govern artificial intelligence, a project also supported by Sam Altman, who has just returned from a European tour during which he urged political leaders to strike a "fair balance" between protection and innovation. But power must not be left to the companies, Gary Marcus warns: "The last few months have reminded us how much they are the ones making the important decisions, without necessarily taking (...) collateral effects into account."

Source: lefigaro
