Dear Mr Macron, Ms Meloni, Mr Scholz and Mr Sánchez [as President-in-Office],
We are at a critical point in the life of the proposed AI Act. In the trilogue phase, the regulation is threatened by what we consider to be misguided opposition from your governments' representatives, who favour self-regulation by the companies developing foundational AI models (such as those behind ChatGPT and Bard). Under this approach, such companies would adhere to their own codes of conduct rather than being regulated directly by official bodies. This shift in focus is delaying the adoption of the AI Regulation, which is especially worrying given the EU Parliament elections scheduled for June. More seriously, it could undermine the law's effectiveness and pose severe risks to the rights of European citizens and to European innovation. Rather than a self-regulatory approach, we urge all parties involved in the trilogue to approve the AI Regulation as soon as possible. Below, we outline three key reasons to support the passage of the AI Regulation in its original form.
Companies shouldn't make the rules themselves
Codes of conduct, even when mandatory, are insufficient and often ineffective. When companies self-regulate, they can prioritize their profits over public safety and ethics. It is also unclear who will oversee the development and implementation of these codes of conduct, how, and with what degree of accountability. This approach rewards companies that take risks by not investing time and resources in strong codes of conduct, to the detriment of those that do.
Self-regulation also harms the AI industry
Self-regulation is also a disservice to the AI industry, as it leaves companies uncertain whether their products and services will be allowed on the market and whether they may face fines once commercialised. Such uncertainties may then have to be remedied by direct rules after the regulation has been adopted, limiting parliamentary debate. Finally, if each company or sector devises its own rules, the result can only be a confusing patchwork, increasing the supervisory burden on regulators while also making it harder for companies to comply, thus hampering both innovation and compliance. This runs counter to one of the fundamental objectives of the AI Regulation, which is to harmonise standards across the EU.
The EU's leadership in AI regulation
The current opposition of France, Italy and Germany to regulating foundational AI models jeopardizes the EU's leadership in AI regulation. The EU has been at the forefront, advocating for the development of regulations that ensure technology is safe and fair for all. But this advantage could be lost if the remaining regulatory challenges are not addressed quickly and successfully. An indecisive EU will lose its competitive edge against countries such as the US or China. European citizens are at risk of using AI products regulated according to values and agendas that are not aligned with European principles.
The Cost of Not Regulating AI
Delaying AI regulation has significant costs. Without common rules, citizens are vulnerable to AI applications that do not serve the public interest, leaving the door open to misuse and abuse of AI technologies. The consequences are serious: privacy violations, bias, discrimination, and threats to national security in critical areas such as healthcare, transportation, and law enforcement. From an economic standpoint, unregulated AI applications can distort competition and market dynamics, creating an uneven playing field in which only powerful, well-funded companies triumph. It is a mistake to think that regulation works against innovation: only through regulation, and therefore fair competition, can innovation flourish, for the benefit of markets, societies, and the environment.
In conclusion, the AI Regulation is more than just a law. It is a statement about what values we, as Europeans, want to promote and what kind of society we want to build. It affirms and reinforces the EU's identity and reputation, and it underscores the EU's credibility and leadership role in the global AI community.
For all these reasons – five years after the publication of AI4People's Ethical Framework for a Good AI Society, which guided the initial work of the European Commission's High Level Group on AI – we urge the EU institutions and Member States to find a compromise that preserves the integrity and ambition of the AI Regulation. Let this legislation be a beacon of responsible and ethical AI governance, serving as a global example for others to follow.
The letter is signed by:
Luciano Floridi, Founding Director of the Center for Digital Ethics at Yale University and President of Atomium-EISMD.
Michelangelo Baracchi Bonvicini, First Chairman of the Scientific Committee of the AI4People Institute and President of the AI4People Institute.
Raja Chatila, Professor Emeritus of Artificial Intelligence, Robotics and Computer Ethics at the Sorbonne University.
Patrice Chazerand, Director of Public Affairs at AI4People Institute and former Director of Digital Public Affairs Europe.
Donald Combs, Vice President and Dean of the School of Health Professions at Eastern Virginia Medical School.
Bianca De Teffé Erb, Director of Data Ethics and AI at Deloitte.
Virginia Dignum, Professor of Responsible Artificial Intelligence, Umeå University and Member of the United Nations High-Level Advisory Council on Artificial Intelligence.
Rónán Kennedy, Associate Professor, Faculty of Law, University of Galway.
Robert Madelin, Chair of the Advisory Board of the AI4People Institute.
Claudio Novelli, Postdoctoral Researcher, Department of Legal Studies at the University of Bologna and International Fellow at the Digital Ethics Center (DEC) at Yale University.
Burkhard Schafer, Professor of Computational Legal Theory, University of Edinburgh.
Afzal Siddiqui, Professor, Department of Computer Science and Systems Science, Stockholm University.
Sarah Spiekermann, President of the Institute for IS and Society at the Vienna University of Economics and Business.
Ugo Pagallo, Professor of Jurisprudence, Legal Theory and Legal Informatics at the Department of Law of the University of Turin.
Cory Robinson, Professor of Communication Design and Information Systems at Linköping University.
Elisabeth Staudegger, Professor of Legal Informatics and IT Law (IT Law), Head of the Legal and IT Department at the Institute of Legal Foundations at the University of Graz.
Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute at the University of Oxford.
Peggy Valcke, Professor of Law and Technology at the Catholic University of Leuven and Vice-Dean for Research at the Faculty of Law and Criminology of Leuven.