The Limited Times


A pioneering law for a technology with many question marks: the keys to AI regulation in the EU

2023-12-10T05:01:27.780Z

Highlights: The European Union has achieved its goal of being the first region in the world to have a comprehensive law regulating artificial intelligence. The agreement, reached after several marathon negotiating sessions, is still provisional, pending ratification by both the member states and the European Parliament. Brussels hails the agreement as a "historic moment", though doubts remain about its implementation and effective oversight of companies. The law takes a two-tiered approach: transparency requirements for all general-purpose AI models, and even stricter requirements for "powerful" models.


Brussels hails the agreement as a "historic moment", though doubts remain about its implementation and effective oversight of companies


The European Union has achieved its goal of being the first region in the world to have a comprehensive law regulating artificial intelligence (AI), a technology that generates as many expectations as concerns about its disruptive potential. Although the agreement reached after several marathon negotiating sessions is still provisional, pending ratification by both the member states and the European Parliament, in Brussels the pact has been celebrated as a "historic moment" with which, in the words of the President of the European Commission, Ursula von der Leyen, "European values are transposed into a new era", that of AI. With this milestone, Europe seeks not only to offer a legislative framework that boosts competitiveness and innovation in the sector while protecting its citizens; it also aims to set the regulatory model for the rest of the world to follow.

How do you cook up a law for the future?

Paradoxically, one of the most forward-looking laws ever drafted (it seeks to regulate technologies and capabilities that do not yet exist) was negotiated under the rotating Spanish presidency of the EU in the most traditional Brussels fashion: face to face, behind closed doors, in a marathon session in which representatives of all the European institutions, the Council (the member states), the European Parliament and the European Commission, went through the regulation point by point and article by article, with hardly a pause, sustained by plenty of coffee (although the machine broke down on the first morning), sandwiches and juice. The session was unanimously described as a 36-hour "ultramarathon": 22 hours straight in the first stretch and, after a pause on Thursday, another 14 on Friday until almost midnight.

How is AI regulated in law?

European negotiators have tried to strike a balance: setting rules to control artificial intelligence and ensure developers share important information with downstream AI providers, including many European SMEs, while avoiding an "undue burden" on businesses, in the words of Internal Market Commissioner Thierry Breton, one of the regulation's strongest proponents.

The law pays particular attention to so-called "general-purpose AI", such as the popular ChatGPT. It settles on a two-tiered approach: transparency requirements for all general-purpose AI models, and even stricter requirements for "powerful" models with, as the text puts it, "systemic impacts across our EU Single Market." What worries policymakers most is that models like ChatGPT are entirely closed, meaning their technical inner workings are not public, unlike open-source models.

For the former, the law requires companies to produce technical documentation, comply with EU copyright law, and publish detailed summaries of the content used for training.

For high-impact models posing "systemic risk", Parliament's negotiators secured stricter obligations. If these models meet certain criteria (yet to be defined), they will have to carry out model evaluations, assess and mitigate systemic risks, conduct ongoing testing, report serious incidents to the Commission, ensure cybersecurity and report on their energy efficiency. If they fail to comply, they will be sanctioned.

Which risks of AI can trigger EU intervention?

The EU's approach has been to intervene not at the source of the technology but according to the risk posed by each of its uses. In Breton's words: "[This law] allows us to prohibit uses of AI that violate the EU's fundamental rights and values, set clear rules for high-risk use cases, and promote barrier-free innovation for all low-risk use cases."

Examples of high-risk AI systems include certain critical infrastructure, e.g., water, gas, and electricity; medical devices; systems for determining access to educational institutions; or certain systems used in the areas of law enforcement, border control, administration of justice and democratic processes.

How can AI be prevented from violating citizens' rights?

Europe prides itself on its "European values" and respect for fundamental rights, and when it came to legislating on a technology with so many open questions and such intrusive capacity, the European Parliament fought hard from the outset to preserve citizens' freedoms and rights as far as possible. Parliament's representatives came to the negotiating table with a very long list of AI functions to ban, especially so-called biometric surveillance systems. Several member states, invoking national security and military interests (as well as other, less loudly proclaimed economic interests), substantially watered down those limits and extracted some concessions. Even so, after the negotiations the regulation's two rapporteurs, the Italian Social Democrat Brando Benifei and the Romanian liberal Dragos Tudorache, emerged smiling broadly, declaring that they had managed to "defend citizens from the risks that AI can entail on a day-to-day basis".

Thus, the future AI law identifies "unacceptable risks" for which AI systems considered a clear threat to fundamental rights will be banned. This includes artificial intelligence systems or applications that "manipulate human behavior" to circumvent users' free will, and systems that allow "social scoring" by governments or companies. Social scoring systems are highly controversial because they use AI to assess an individual's trustworthiness based on their gender, race, health, social behavior, or preferences. Emotion recognition systems in the workplace and educational institutions, and biometric categorization used to deduce sensitive data such as sexual orientation or political or religious beliefs, will also be prohibited, as will some cases of predictive policing applied to individuals, and systems that build facial databases by indiscriminately scraping images from the internet or from audiovisual recordings, as Clearview AI does.

And although MEPs relented on the red line they had drawn over real-time biometric surveillance in public spaces, these systems may only be used by law enforcement and will require strict safeguards, such as a court order, and their use will be tightly restricted: to search for victims of kidnapping, human trafficking or sexual exploitation; to prevent a "genuine and foreseeable" terrorist threat or a "genuine and present" one (that is, one already under way); or to locate or identify a suspect in specific crimes (terrorism, trafficking, murder, kidnapping, rape, armed robbery or an environmental crime, among others). The "ex post" use of these systems will also be closely controlled, although with fewer restrictions.

The law will also mandate a "fundamental rights impact assessment" before a "high-risk" AI system can be brought to market.

How will compliance with the law be monitored?

Among other things, the regulation provides for the creation of an AI Office within the European Commission, tasked with supervising the most advanced AI models, contributing to the promotion of standards and testing practices, and enforcing compliance with the rules in all member states. It will receive advice on general-purpose AI models from a scientific panel composed of independent experts and civil society.

Moreover, this is not a toothless law: the regulation provides for harsh penalties for violators, calculated either as a percentage of the offending company's total turnover in the previous financial year or as a predetermined amount, whichever is higher. The fines range from up to €35 million or 7% of turnover for violations involving prohibited AI applications, down to €7.5 million or 1.5% of turnover for supplying incorrect information.
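As a rough illustration of how this penalty structure works, the sketch below computes the upper bound of a fine under the "whichever is higher" reading described above; the function name and the sample turnover figures are hypothetical, while the tier values (€35 million / 7% and €7.5 million / 1.5%) come from the article.

```python
def fine_cap(turnover_eur: int, fixed_cap_eur: int, pct: float) -> float:
    """Upper bound of a fine: the fixed amount or the given percentage
    of annual turnover, whichever is higher (illustrative only)."""
    return max(fixed_cap_eur, turnover_eur * pct / 100)

# Prohibited-practice tier for a company with EUR 1bn turnover:
# 7% of turnover (EUR 70M) exceeds the EUR 35M floor.
print(fine_cap(1_000_000_000, 35_000_000, 7))   # → 70000000.0

# For a smaller company (EUR 100M turnover), the fixed amount applies.
print(fine_cap(100_000_000, 35_000_000, 7))     # → 35000000
```

The same function covers the lowest tier by passing 7_500_000 and 1.5 as the fixed amount and percentage.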

What are the next steps?

The "provisional" political agreement is now subject to formal approval by the European Parliament and the Council. Once the AI Act is adopted, there will be a transition period before it becomes applicable. To bridge this period, the Commission says it will launch an "AI Pact", calling on AI developers from Europe and around the world to voluntarily commit to implementing the key obligations of the AI Act ahead of the legal deadlines. The law is not expected to be fully in force before 2026, although some parts will become operational sooner.

Image of a press conference held on the 7th, with Carme Artigas and Thierry Breton. European Union (Frederic Sierakowski)

What have been the reactions to the law?

The main companies behind today's AI models have already said that they will respect the law, although they have asked that its application "not act as a brake" on innovation. The companies responsible for these developments have worked in parallel with the negotiation of the regulation to ensure these tools evolve ethically, so the text matches their general expectations, provided that, in the words of Christina Montgomery, vice president and head of Privacy and Trust at IBM, "it provides protective barriers for society while promoting innovation."

NGOs and experts dedicated to digital rights activism, however, are surprised and disappointed. Ella Jakubowska, an analyst specialising in biometric identification technologies at the European digital rights NGO EDRi, said: "Despite many promises, the law seems destined to do the exact opposite of what we wanted. It will pave the way for the EU's 27 member states to legalise live public facial recognition. This will set a dangerous precedent around the world, legitimize these deeply intrusive mass surveillance technologies, and imply that exceptions can be made to our human rights." Carmela Troncoso, a telecommunications engineer specializing in privacy at the École Polytechnique Fédérale de Lausanne, Switzerland, explains: "There are a lot of very promising bans, but also a lot of holes and exceptions that leave it unclear whether the bans will actually protect human rights as we expect; for example, law enforcement will be able to use real-time facial recognition to search for suspects. It is also sad that Spain has been behind some of the most worrying proposals in this law," adds Troncoso, creator of the technology that made COVID contact-tracing apps possible.

Among the aspects the agreement leaves unspecified is emotion recognition. It is said to be banned in the workplace and in education, yet it is permitted (with restrictions) in policing and immigration management. The same goes for the scraping of biometric data: collecting facial images is explicitly forbidden, but nothing is said about other biometric data. Automatic systems capable of identifying people by, for example, the way they walk would not be covered by the ban.

What other countries have legislated AI?

No other territory in the world has a law covering as many aspects as the European one. In October, U.S. President Joe Biden signed an executive order requiring technology companies to notify the government of any advance that poses a "serious risk to national security." Specifically, companies dedicated to AI in the United States, whether or not they work with the government, will be required to notify federal authorities of any developments that pose a "serious risk to national security, economic security, or public health and safety," and to improve mechanisms that strengthen confidence in these technological advances. Days later, British Prime Minister Rishi Sunak convened a summit that produced the first joint commitment by 28 countries (including the US and China) and the EU on these systems, the so-called Bletchley Declaration, and the creation of a group of experts to monitor their progress.



Source: El País

