Who controls the risks of artificial intelligence, especially of the so-called "foundation models" behind tools like ChatGPT? The new European regulation on AI for this technology – revolutionary but also hugely disruptive – which the EU institutions are now negotiating into a definitive text is increasingly leaning towards self-regulation. The latest proposal from Spain, which holds the EU Council presidency this semester and coordinates the negotiations, calls for "very limited obligations and the introduction of codes of conduct" for companies, albeit with several intermediate layers of supervision, according to documents to which EL PAÍS has had access. But the standoff continues: the European Parliament is calling for a somewhat tougher framework, while France, Italy and Germany – three of the most powerful members of the EU club – are pushing for companies' own codes of conduct to carry more weight than specific rules, arguing that strict regulation would harm the capacity of European researchers and companies to innovate. Europe lags behind the United States, which has already issued an executive order requiring technology companies to notify the U.S. government of any development that poses a "serious risk to national security."
Spain, which will hand over the presidency to Belgium at the end of the month and which has made moving the historic regulation forward one of its main priorities, is navigating these balances. It has proposed a series of codes of conduct for the foundation models (or GPAI, general-purpose AI systems capable of creating audio, text or images from the observation of other data) that pose a higher risk, actual or potential: those the regulation calls "foundation models of systemic risk", meaning models with high-impact capabilities whose results may "not be known or understood at the time of their development and publication, and may therefore lead to systemic risks at EU level". These codes would include both "internal measures" and an active dialogue with the European Commission to "identify potential systemic risks, develop possible mitigating measures and ensure an adequate level of cybersecurity protection," the plan says.
The codes of conduct would also include transparency obligations for "all" foundation models, according to the latest negotiating position, which adds other elements, such as requiring companies to report their energy consumption. Some "horizontal obligations" would also apply to all foundation models. In addition, the new regulation could include a clause empowering the European Commission to adopt "secondary legislation" on foundation models of "systemic risk" in order, if necessary, to "further specify the technical elements of GPAI models and keep benchmarks up to date with technological and market developments". This would amount to leaving the door open to new regulatory chapters, according to EU sources.
The Spanish proposal also calls for the creation of an Artificial Intelligence Supervisory Agency, a body that would add an extra layer of security by providing a "centralized surveillance and implementation system." The agency could also satisfy the demands of the European Parliament, which had called for some kind of specialized body to be created.
The proposals to finalise the regulation will be debated on Wednesday by representatives of the member states (with Spain presiding over the Council of the EU), the European Parliament and the Commission, in a decisive meeting. It is one of the last chances for the law to get through. The negotiations are already very "advanced" and there is even agreement on the general architecture of the law, based on a risk pyramid and on the principle, maintained by the Spanish presidency in its latest proposal, that the approach be "technologically neutral": that is, not to regulate specific technologies but their end uses, through the creation of various categories of risk, as proposed by the European Parliament.
Spain is optimistic. "The European Union would become the first region in the world to legislate the uses of AI, its limits, the protection of citizens' fundamental rights and participation in its governance, while guaranteeing the competitiveness of our companies," the Secretary of State for Digitalisation, Carme Artigas, told EL PAÍS. Artigas believes the EU has a responsibility to go further: to establish, for high-risk uses, codes of conduct, self-regulation models and good practices in order to limit the risks this innovative technology has already shown, from disinformation to discrimination, manipulation, surveillance or deepfakes, all while bearing in mind that innovation and progress must be supported. "The European AI regulation is therefore not just a legal standard, nor just a technical standard. It's a moral standard," says Artigas.
The problem, however, is that two key points remain open, and will likely remain so until negotiators meet face-to-face again on Wednesday afternoon: one is the issue of biometric surveillance systems; the other is who controls the most unpredictable foundation models, the so-called "systemic risk" models. The debate has been fueled by the latest events in the OpenAI saga and the departure and return of Sam Altman to the leading company, after OpenAI researchers reportedly warned the company's board, before Altman's firing, of a powerful artificial intelligence discovery that they said could threaten humanity.
The tension is at its highest, especially since Germany, France and Italy turned the tables a few weeks ago and declared themselves in favour of broad self-regulation by the companies that develop these systems, through codes of conduct that would, admittedly, be mandatory. The three countries have sent the rest of the member states a position paper in which they advocate self-regulation for general-purpose AI and call for a "balanced innovation-friendly approach" based on AI risk, one that "reduces unnecessary administrative burdens" which, they say, would "hamper Europe's ability to innovate". In addition, in the confidential document, to which this newspaper has had access, they propose "initially" eliminating sanctions for non-compliance with the transparency-related codes of conduct, advocating dialogue instead.
However, the path taken by this proposal from three of the EU's largest countries – some of which, such as France, host technology companies with AI interests, such as Mistral – is a red line for other member states and for many experts, as shown by the open letter sent last week to Paris, Berlin, Rome and Madrid, reported by EL PAÍS, urging that the law go ahead and not be diluted. In other words, the signatories are asking for fewer codes of conduct and more rules.
"Self-regulation is not enough," says Leonardo Cervera Navas, secretary general of the European Data Protection Supervisor (EDPS), who makes no secret of the fact that he would like the hypothetical future AI Office to fall within the responsibilities of the EDPS. Such a supervisory body, he suggests, could serve as a hinge between those who prefer self-regulation and those who demand obligations put in black and white in a law, since it would allow a high degree of self-regulation, but ultimately supervised by a higher body independent of the interests of the companies. For the expert, the ideal is a "flexible regulatory approach, not excessively dogmatic, agile, but combined with strong supervision", which is what this office would do.
This is also the position of the European Parliament's negotiators, who insist that the regulation must be very comprehensive in order to guarantee citizens' security and fundamental rights in the face of technologies whose intrusive potential is sometimes still unimaginable. "The Council must abandon the idea of having only voluntary commitments agreed with the developers of the most powerful models. We want clear obligations in the text," Italian MEP Brando Benifei, one of the European Parliament's negotiators in the interinstitutional talks (the so-called trilogues, which produce the final legal text), said by telephone.
Among the obligations that European lawmakers consider "crucial" and that should be set out in law are data governance, cybersecurity measures and energy efficiency standards. "We're not going to close a deal at any cost," Benifei warns.
What seems closer to resolution is an issue of great importance to the European Parliament: prohibiting or restricting as far as possible what it calls the "intrusive and discriminatory uses of AI", especially real-time biometric systems in public spaces, with very few exceptions for security reasons. The MEPs' position is much stricter than that of the member states and, although the negotiations have been "difficult", there is cautious optimism about the possibility of finding a middle ground. That is, as long as, the European Parliament stresses, the bans on predictive policing, biometric surveillance in public places and emotion recognition systems in workplaces and education remain in place. "We need a sufficient degree of protection of fundamental rights with the necessary prohibitions on the use of [these technologies] for security and surveillance," Benifei sums up.