
Experts call for rules on Artificial Intelligence

2023-05-30T16:21:04.925Z

Highlights: Artificial intelligence is so pervasive that it has a strong impact on many sectors of social life. The alert recalls the manifesto in which Bertrand Russell and Albert Einstein in 1955 denounced the risks of nuclear weapons. We should think right now "about the need to set limits and constraints", says Luca Simoncini, former professor of Information Engineering at the University of Pisa. "These kinds of generative AI algorithms have proven to be very powerful at interfacing with people using web data and natural language," says physicist Roberto Battiston.


It was the urgency of rules in a pervasive sector such as Artificial Intelligence that dictated the alert launched by the Center for AI Safety. This is what some of the Italian experts who signed the declaration say (ANSA)


Simoncini, the alert on AI is dictated by the need for rules

It was the urgency of rules in a pervasive sector such as Artificial Intelligence that dictated the alert launched by the Center for AI Safety. "The extensive use of artificial intelligence on the one hand is leading to a real revolution and on the other is posing serious problems," notes one of the signatories of the declaration, information technology expert Luca Simoncini, former professor of Information Engineering at the University of Pisa and former director of the Institute of Information Technologies of the National Research Council.

"Artificial intelligence is so pervasive that it has a strong impact on many sectors of social life (just think of the risk of producing fake news or the control of autonomous cars), as well as on economic, financial, political, educational and ethical aspects", observes the expert. "It is evident - he adds - that no one can object to an emerging technology being used for beneficial purposes, for example in the biomedical or pharmacological field".

Even if talking about the risk of human extinction may seem hyperbolic, according to Simoncini, the Center for AI Safety statement recalls the manifesto in which Bertrand Russell and Albert Einstein denounced the risks of nuclear weapons in 1955. The case of artificial intelligence is different, but the point is the same: clear rules and awareness are needed. "We often forget that these systems are fallible", adds Simoncini, and the large companies active in the sector "base their activities only on technological prevalence; they have not posed themselves the problem of regulation". As shown by what is happening in the field of autonomous cars, in tests "an empirical approach is followed" and "no thought is given to the need for systems that cannot make autonomous decisions without human intervention; we should instead move towards systems that assist the driver, who retains the possibility to intervene and regain control at any time".

ANSA.it - AI can lead to extinction, the alert of dozens of researchers - Science & Technology

Artificial intelligence could lead to the extinction of humanity: this is the warning of experts in the field, including Sam Altman, CEO of ChatGPT producer OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei of Anthropic (ANSA)

Even in the case of chatbots like ChatGPT, for example, "using them should be understood as an aid, not as the replacement of human capabilities by an artificial intelligence system". We should think right now "about the need to set limits and constraints", Simoncini concludes, pointing to the misuse of artificial intelligence in packaging fake news that is increasingly difficult to recognize: "the difficulty of distinguishing between true and false could create situations that are difficult to govern".

Battiston, 'powerful algorithms that require rules'

Rules are needed to manage powerful algorithms such as those of artificial intelligence and to avoid unexpected effects: this is the meaning of the alert launched by the Center for AI Safety, according to physicist Roberto Battiston of the University of Trento, one of the signatories of the declaration. "These kinds of generative AI algorithms have proven to be very powerful at interfacing with people using web data and natural language, so powerful that they could generate unexpected side effects," notes Battiston.

"No one today really knows what these effects could be, positive or negative: it takes time and experimentation - continues the physicist - to create rules and regulations that allow us to manage the effectiveness of this technology while protecting us from the related dangers. This is not about the threat of a superintelligence that could overwhelm humanity, but about the consequences of how humans will get used to using these algorithms in their work and in the daily life of society." Consider, for example, he adds, "the possible interference in electoral processes, the spread of fake news, the creation of news channels that serve specific disinformation interests".

For this reason, he observes, "we must prepare to manage these situations; we already saw the first signs of problems of this kind in recent years with the Cambridge Analytica affair or with the guerrilla tactics of Russian trolls on the web". Usually, says Battiston, "when man fails to understand the reality that surrounds him, he invents myths, ghosts and monsters to try to protect himself from danger through a certain kind of mythological tale. The game is still firmly in human hands, but the tools available are much more powerful than in the past."

Battiston notes that "when we discovered the strength of the atom, we had to find a way to contain the threat of nuclear confrontation. For the moment we have succeeded, for about 80 years. Someone - he says - has compared the power of these technologies to nuclear power, asking for suitable rules to be created to deal with these risks. There is probably a grain of truth in this. I believe, however, that it is very important to understand well how these algorithms work, because only in this way - he concludes - will we be able to put in place an appropriate set of social containment rules, while at the same time exploiting their enormous positive potential".

Source: ANSA
