
Artificial intelligence: what are the 5 dangers that worry experts the most

2024-03-16T10:16:48.317Z

Highlights: Artificial intelligence (AI) has been installed in almost all areas and discussions in the last year and a half. Generative AI, according to ChatGPT (the most popular chatbot in the world), “focuses on creating systems capable of generating new data, images, text or even music that appear to have been created by humans.” “ChatGPT is an example of a program capable of holding diverse dialogues in natural language, trained with techniques where humans participate,” Javier Blanco explains to Clarín.


Copyright, misinformation and privacy emerge as the main conflicts. What measures can be taken to minimize the risks?


Artificial intelligence (AI) has been installed in almost all areas and discussions in the last year and a half. In particular, “generative” AI, which brought to the mass public the possibility of creating, from text instructions, images, writings of all kinds and, now, even videos.

However, cybersecurity specialists warn about the potential dangers that the “wonders” of this technology have brought to the market, even in free versions.

Generative AI, according to ChatGPT (the most popular chatbot in the world), “focuses on creating systems capable of generating new data, images, text or even music that appear to have been created by humans.”

“ChatGPT is an example of a program capable of holding diverse dialogues in natural language, trained with techniques where humans participate, both in accessing conversations from which to learn and in the subsequent process of evaluating and improving interactions,” Javier Blanco, Doctor in Computer Science from the University of Eindhoven, Netherlands, explains to Clarín.

From this concept, various uses began to appear: from writing work emails to university essays and legal documents, everyone began to take advantage of this technology.

With the consequent warnings from the experts.

“These artificial generation systems are based on architectures called generative adversarial networks (GANs). In general, machine learning systems produce classifier or discriminator programs from training with large amounts of data. Facial recognition is carried out by programs of this type,” adds the expert.

This type of system, made possible by the impressive advance of computing power, constitutes a discipline that takes large volumes of already available data to build a program that can recognize common patterns and create new data, based on human training (GPT: Generative Pre-trained Transformer).
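The adversarial dynamic Blanco describes — a generator trying to fool a discriminator — can be sketched with a toy example. The following is a hypothetical, illustrative sketch only (two single-parameter models on 1-D data), not the architecture of ChatGPT or any product mentioned here:

```python
import numpy as np

# Toy adversarial training: a "discriminator" (logistic regression on
# 1-D samples) learns to tell real data (centered at 4.0) from generated
# data; the "generator" (a single shift parameter) is nudged so that its
# samples score as "real".
rng = np.random.default_rng(0)

REAL_MEAN = 4.0
w, b = 0.0, 0.0   # discriminator parameters
shift = 0.0       # generator parameter: its samples are noise + shift
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    real = rng.normal(REAL_MEAN, 1.0, 32)
    fake = rng.normal(0.0, 1.0, 32) + shift

    # Discriminator step: gradient ascent on log-likelihood,
    # labeling real samples 1 and fake samples 0.
    for x, y in [(real, 1.0), (fake, 0.0)]:
        p = sigmoid(w * x + b)
        w += lr * np.mean((y - p) * x)
        b += lr * np.mean(y - p)

    # Generator step: gradient ascent on log D(fake), i.e. move `shift`
    # so that fake samples look real to the current discriminator.
    fake = rng.normal(0.0, 1.0, 32) + shift
    p = sigmoid(w * fake + b)
    shift += lr * np.mean((1.0 - p) * w)

print(f"generator shift: {shift:.2f}")  # drifts toward REAL_MEAN
```

The point of the sketch is the alternation: each side improves against the other, and the generator ends up producing data statistically close to the real distribution — the same principle, at vastly larger scale, behind generative image models.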

The advantages associated with these systems range from time savings to the imitation of capabilities that users do not have, from writing texts to accompanying them with images.

However, in the field of cybersecurity, many experts raise warnings worth taking into account.

What challenges does generative AI present?

Microsoft Copilot is the ChatGPT-based system built into Windows.

Photo Microsoft

“Generative AI presents particular risks that must be carefully addressed to ensure its ethical and safe use. These risks range from moderating the content uploaded by users to handling content that is misleading, biased or harmful, in order to avoid the manipulation of information,” says David González Cuautle, Computer Security Researcher at ESET Latin America.

The cybersecurity company, headquartered in Slovakia, warns of five points that will become more contentious over time — and that already represent a problem today.

Company experts explain:

1. Content moderation:

Some social networks, websites and applications are not legally responsible, or are very ambiguous, regarding the content uploaded by their users — such as ideas or publications made by third parties — and regarding content generated by AI. Even when they have terms of use, community standards and privacy policies that protect copyright, there is a legal loophole that serves as a shield for providers against copyright violations.

Users have easy access to, and an enormous supply of, Generative AI tools whose use and implementation policies are unclear, even contradictory.

2. Infringement of copyright and image rights:

In the United States, in May 2023, the Writers Guild strike began a series of conflicts that the Screen Actors Guild in Hollywood joined in July of the same year.

The main cause of the movement was the request for a salary increase derived from the rise of digital platforms: the demand for and creation of content had increased, and the unions wanted the profits distributed proportionally between scriptwriters/actors and the big companies.

Another reason was the abuse of Generative AI by these companies to produce content using an actor's face and voice for commercial purposes without consent, which is why the unions requested a new contract that would protect them from this technology exploiting their identity and talent without consent or remuneration.

With the arrival of Generative AI, the originality of content, regardless of its source, seems increasingly diffuse.

In this case, many copyrights and image permissions were ignored by large companies, generating enormous annoyance; as a consequence, the United States chose to implement legal measures to protect both parties.

3. Privacy:

For Generative AI models to function correctly, they need large volumes of data for training. But what happens when this volume can be obtained from any public source — videos, audio, images, texts or source code — without the owner's consent?

There are applications to "undress" users in images.

Photo Archive

4. Ethical issues:

In countries with few or no regulations regarding AI, some users exploit the technology for unethical purposes such as identity theft (voice, image or video) to create false profiles, which they use to commit fraud or extortion through a platform, application or social network, or to launch sophisticated phishing campaigns and catfishing scams.

5. Disinformation:

Taking advantage of this type of AI, the practice of disseminating fake news on platforms and social networks becomes more effective.

The viralization of this misleading generated content damages the image of a person, community, country, government or company.

AI: what precautions to take

Clarín has a system that summarizes articles using AI.

Photo Clarín

There are some deeper problems that, strictly speaking, cannot be prevented at the individual level, such as misinformation.

However, in areas where the user has a certain level of control, ESET recommends:

Effective filtering and moderation:

Develop robust filtering and moderation systems that can identify and remove inappropriate, misleading or biased content generated by Generative AI models.

This will help maintain the integrity of the information and prevent potential negative consequences.
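The kind of filtering layer ESET recommends can be sketched in a few lines. This is a deliberately minimal, hypothetical example — the categories and patterns are invented for illustration, and real moderation systems combine ML classifiers, human review and appeal flows rather than keyword rules alone:

```python
import re

# Hypothetical rule set: category name -> regex patterns that flag it.
BLOCKLIST = {
    "scam_patterns": [
        r"\byour account (has been|was) suspended\b",
        r"\bverify your password\b",
    ],
    "impersonation": [
        r"\bofficial support team\b",
    ],
}

def moderate(text: str) -> dict:
    """Return which categories a text matches and an overall flag decision."""
    hits = [
        category
        for category, patterns in BLOCKLIST.items()
        if any(re.search(p, text, re.IGNORECASE) for p in patterns)
    ]
    return {"flagged": bool(hits), "categories": hits}

result = moderate("Hello! Your account has been suspended, verify your password.")
print(result)
```

In a production pipeline this decision would gate whether generated content is published, queued for human review, or rejected — the point being that the check runs before the content reaches other users.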

Training with ethical and diversified data:

Ensure that Generative AI models are trained with ethical, unbiased and representative data sets.

Diversifying data helps reduce bias and improves the model's ability to generate more equitable and accurate content.

Transparency in content regulation:

Establish regulations that allow users to understand how content is generated, how their data is treated, and the implications of non-compliance.

Continuous evaluation of ethics and quality:

Implement continuous evaluation mechanisms to monitor the ethics and quality of the generated content.

These mechanisms may include regular audits, robustness testing, and collaboration with the community to identify and address potential ethical issues.

Education and public awareness:

Promote education and public awareness about the risks associated with Generative AI.

Fostering an understanding of how these models work and what their possible impacts are will allow users to participate actively in the discussion and demand ethical practices from developers and companies.

In any case, generative artificial intelligence, taken to a massive scale, is beginning to generate new problems and issues that figures like Elon Musk have already pointed out.

The way in which they develop will, in the end, be the responsibility not only of big tech but also of the different sectors of society.

SL

Source: clarin
