
The EU puts limits on artificial intelligence but forgets autonomous weapons

2021-04-21T08:31:18.892Z


The Commission today presents a regulation that will prohibit the most dangerous applications of this technology, such as "indiscriminate surveillance", but that leaves out military applications


Facial recognition, autonomous driving, social network monitoring, robotics, research into new drugs, mechanisms for calculating the probability that a loan will be repaid... Artificial intelligence (AI) has many faces. The most feared is its Orwellian version: this group of technologies (its growth has been such that it is no longer possible to speak of just one) has proven very useful for monitoring citizens and influencing their decisions. This already happens in China and, according to the mathematician Cathy O'Neil, also in the West, albeit in a veiled way. The European Union does not want to let the dark side of these systems flourish. The Commission today presents a regulation, a draft of which was leaked last week, that will lay the foundations for the development of artificial intelligence on the continent.

The text now begins its passage through the European Parliament, where it may still be amended.

Brussels' approach to the matter is the same one it applied to the General Data Protection Regulation (GDPR), in force since 2018: it evaluates the risk of the different applications of artificial intelligence and restricts or prohibits those it considers most dangerous.
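By way of illustration, here is a minimal Python sketch of how such a risk-tiered classification might be modeled. The tier names echo the article, but the mapping and the obligations below are invented for the example; they are not the regulation's actual taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical tiers echoing the draft's risk-based approach."""
    PROHIBITED = "prohibited"    # e.g. indiscriminate surveillance
    HIGH_RISK = "high risk"      # e.g. credit scoring, recruitment screening
    MINIMAL_RISK = "minimal"     # everything else

# Hypothetical mapping of applications named in the article to tiers.
EXAMPLE_TIERS = {
    "indiscriminate surveillance": RiskTier.PROHIBITED,
    "social credit scoring": RiskTier.PROHIBITED,
    "remote biometric identification": RiskTier.HIGH_RISK,
    "creditworthiness calculation": RiskTier.HIGH_RISK,
    "recruitment screening": RiskTier.HIGH_RISK,
    "spam filtering": RiskTier.MINIMAL_RISK,
}

def obligations(tier: RiskTier) -> str:
    """Roughly what each tier implies, per the article's description."""
    return {
        RiskTier.PROHIBITED: "banned, with narrow authorized exceptions",
        RiskTier.HIGH_RISK: "allowed, but supervised and subject to data-quality duties",
        RiskTier.MINIMAL_RISK: "allowed without extra obligations",
    }[tier]

for app, tier in EXAMPLE_TIERS.items():
    print(f"{app}: {obligations(tier)}")
```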

The text makes clear in its preamble that the objective is for artificial intelligence in the EU to take a “human-centered” approach.

Translation: in Europe, companies are not going to be allowed to do as they please.

Restrictions and prohibitions

"Indiscriminate surveillance" falls into the highest-risk category and is therefore prohibited.

This is understood to mean systems that track citizens in physical environments, placing them at an exact location at a given time, or that extract aggregated data about them.

Also high risk are the so-called “remote biometric identification systems”, a term that refers to facial recognition, a technology whose use has drawn angry complaints from academia.

The regulation establishes legitimate exceptions to the prohibition: these systems will only be allowed if authorized by the EU or the Member States, if they are used for “the purpose of prevention, detection or investigation of serious crimes or terrorism”, or if their use is limited to a specified period and then discontinued.

Image of the Fugaku supercomputer taken on June 16. STR / AFP

Social credit scoring systems, which calculate an individual's trustworthiness from a series of variables, are also outlawed.

The Chinese authorities operate one such system; how it works is not publicly known, and it can have serious consequences for those who lose points.

Just as the GDPR requires those who want to process an individual's data to obtain their consent, the artificial intelligence regulation establishes that individuals must be notified when they are interacting with an AI system.

Unless, the regulation says, this is "obvious from the circumstances and the context of use."

Surveillance and sanctions

Likewise, it specifies that systems considered high risk must be supervised, a category that, the text clarifies, will be continually updated.

In addition to the applications already mentioned, this definition includes AI systems designed to decide how and where to allocate resources in an emergency, those used to grant or deny admission to educational institutions, to evaluate candidates in personnel recruitment processes, to calculate the creditworthiness of individuals, or to assist judges, among others.

The regulation also establishes the creation of a "European Artificial Intelligence Board", whose main task will be to decide which applications are considered high risk.

Companies that fail to comply with the regulation face fines of up to 20 million euros or 4% of their turnover, a figure that, in the case of large technology companies, can be enormous.
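To see why that ceiling weighs most heavily on big firms, here is a back-of-the-envelope sketch, assuming (as under the GDPR) that the higher of the two amounts applies; the turnover figures are invented for illustration.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Fine ceiling as described in the article: 20 million euros or 4% of
    turnover. We assume, as under the GDPR, that the higher amount applies."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Invented turnover figures, for illustration only.
for turnover in (100e6, 1e9, 100e9):
    print(f"turnover EUR {turnover:,.0f} -> ceiling EUR {max_fine_eur(turnover):,.0f}")
```

For a firm with 100 million euros in turnover the flat 20 million cap dominates; at 100 billion euros, the 4% rule raises the ceiling to 4 billion euros.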

The beginning of the road

Spain's Secretary of State for Digitalization and Artificial Intelligence views the regulation positively.

"It is a great step forward for the European Union to design the new digital reality," say sources from the department headed by Carme Artigas.

"It is a balanced proposal, which provides an environment of trust and guarantees to citizens, but at the same time without limiting the innovation capacity of a technology with as many opportunities as AI".

Borja Adsuara, consultant and expert in digital law, does not agree on this last point.

Brussels' attempt to anticipate how the set of technologies we call AI will develop, he believes, will constrain innovation.

“Europe, from Justinian to Napoleon, has always had the same problem: it is a closed system, in which everything that is not expressly permitted is prohibited. That is not conducive to innovation, because innovating is precisely doing what is not foreseen,” he reflects.

In the United States, on the other hand, the method is the reverse: what is not expressly prohibited is allowed.

"That's why the big

startups

are born there," he adds.

Facade of the 'Berlaymont' building of the European Commission in Brussels. EFE

For this jurist, the regulation is a good starting point, although he wonders whether it would not make more sense to amend existing laws, from the Penal Code to the Administrative Procedure Law, than to create a new one for a particular technology.

“What matters about a technology are its applications, and specifically its misuses. The law exists to prevent the latter,” he says.

A notable absence

"I have been very disappointed that autonomous lethal weapons have been left out," says Ramon López de Mántaras, director of the Institute for Artificial Intelligence Research at the CSIC. “The draft talks about high-risk applications of artificial intelligence; I don't know what could be more risky than a weapon that makes the decision to kill autonomously ”, he says. The document says verbatim that "this regulation does not apply to AI systems used exclusively for the handling of weapons or other military purposes."

Professor López de Mántaras also believes that the regulation is a good starting point for regulating AI, something he considers necessary. "But I think it will be extremely difficult to comply with this regulation. In some cases for technical reasons, such as achieving the required transparency in data and algorithms, and in others for practical ones, such as the cost involved in doing so."

Other elements will also make compliance difficult.

The draft says that high-risk systems must ensure they use high-quality data, so that the artificial intelligence does what it is supposed to do and is not biased.

"That is very good, but getting training databases that are relevant, representative and free of errors is very complex," explains López de Mántaras.

Then there are the systems that are not closed, those that keep learning continuously, such as autonomous cars.

"How to ensure that a system that, when it was launched on the market, had databases without bias, well assembled and controlled, does not subsequently go haywire as it learns?"


Source: El País

