
Only a continuous and humanized debate can regulate Artificial Intelligence

2024-03-25T14:14:26.514Z

Highlights: The European Parliament has given firm support to the AI law, marking a milestone in the regulation of this technology. The new standard, based on risk levels, prohibits certain AI applications that violate rights. The EU can learn from experiences elsewhere in the world, adapting its approach according to the lessons learned. Latin America must lead the regional agenda in this matter, without neglecting its own social and economic vulnerabilities. The roadmap towards this law has been a deliberative process to balance innovation with the protection of fundamental rights and security.


The EU has banned certain AI applications that violate rights. For its part, the European Parliament is setting the course for the global debate on the use and application of AI.


The European Parliament has given firm support to the AI law, marking a milestone in the regulation of this technology.

This regulation was approved by a large majority last Wednesday, March 13, in Strasbourg.

The new standard, based on risk levels, prohibits certain AI applications that violate rights, offering protection against invasive and potentially harmful practices.

For example, biometric categorization systems based on sensitive characteristics and the indiscriminate capture of facial images, as well as emotion recognition in the workplace and schools, are prohibited.

These measures ensure the ethical and responsible use of AI through a preventive approach.

Furthermore, the use of biometric identification systems by security forces is regulated cautiously, ensuring their use only in specific situations.

Likewise, clear obligations are established for high-risk AI systems, with the aim of mitigating any negative impact on health, safety, the environment, democracy and fundamental rights.

Transparency and accountability are encouraged, giving citizens the right to file complaints and receive explanations about AI-based decisions that affect their rights.

These systems span a variety of critical sectors: essential infrastructure, education, employment, vital public and private services such as healthcare and the financial system, as well as applications in security forces, migration, customs management, justice and democratic processes.

To mitigate the associated risks, these systems must undergo rigorous evaluations and have adequate human oversight.

General-purpose AI systems and associated models must meet specific transparency requirements, as well as adhere to EU copyright laws.

They are required to disclose detailed summaries of the content used for their training.

For more advanced models, requirements such as incident notification are imposed.

Also, any artificial or manipulated images, audio or video, known as “deepfakes,” must be labeled.

Likewise, small and medium-sized companies must be provided with controlled environments (regulatory sandboxes) for testing and experimenting with these AI systems.

Among the specialists following this debate, there are voices both for and against.

For the former, this regulation is necessary: they consider the law central to addressing the ethical and safety challenges posed by AI and to providing clear guidelines for its responsible development and use.

On this basis, they highlight the balance the law strikes between innovation and protection.

Finally, they point to a new governance model, built around a multidisciplinary advisory panel.

Those who disapprove of this law emphasize the complexity of its implementation.

Some experts warn that the law will be difficult to apply in practice, since doing so will require collaboration between governments, companies and civil society.

There are concerns that regulation will limit innovation, creativity and competitiveness.

However, I must highlight that the EU is once again setting the course of the global debate, this time on the use and application of AI, a role it assumed with the adoption of the General Data Protection Regulation (GDPR) in 2016.

The path to the European Parliament's decision began with the AI White Paper of 2020, which formulated a risk-based management model.

Subsequently, in April 2021, the European Commission proposed the first regulatory framework.

Now, for this standard to be implemented, the law must also be formally adopted by the Council of the European Union.

The roadmap towards this law has been a deliberative process to balance innovation with the protection of fundamental rights and security.

We must consider that AI is a constantly evolving field, with emerging challenges and opportunities.

A long-lasting debate allows regulation to be adapted as new technologies are developed and additional risks are discovered.

Under this context, Latin America must lead the regional agenda in this matter, without neglecting its own social and economic vulnerabilities.

The EU can learn from experiences elsewhere in the world, adapting its approach according to the lessons learned.

Flexibility is essential to ensure that regulation does not become outdated or rigid.

Keeping this issue consistently on the EU agenda has been key to this regulation, allowing for ongoing adaptation and informed decision-making to address the challenges and maximize the benefits of this technology.

Source: clarin
