The Limited Times


GAN Faces: The Dangers of Artificial Intelligence Generating Fictitious Faces

2023-04-27T10:07:43.288Z


The growth of computing power makes it possible to create images that are impossible to distinguish from real ones. What precautions to take.


With the impressive rise of generative artificial intelligence (AI), various uses have become popular.

One of them has to do with the creation of faces, through a type of AI called GAN (Generative Adversarial Network), a family of machine learning algorithms that can do something very striking: "invent" faces that do not exist in the real world.

Although the system that made the most noise in tech in recent months was ChatGPT, a generative text system that lets you converse with a bot in natural language much as a human would, GAN systems stand out for the precision with which they can create non-existent faces.

And not only faces: also audio and even video. That said, with video it is still more evident (at least for now) that the footage was generated by an AI; with still images, it is not obvious at all.

Of course, all of this brings serious problems with it, from the creation of fake news to the growth of cyber-scams. These are GAN faces and the dangers associated with them.

What is a GAN face

GAN face, generated by AI.

Photo: ThisPersonDoesNotExist

The first thing to understand is how the GAN system works. It relies on deep learning over images and was created in 2014 by Ian Goodfellow, an American computer scientist and expert in neural networks.

"Systems for the artificial generation of faces -like text-generation systems such as ChatGPT- are based on architectures called generative adversarial networks (GANs)," Javier Blanco, PhD in Computer Science from the University of Eindhoven, the Netherlands, explains to Clarín.

He continues: “In general, machine learning systems produce classifier or discriminator programs from training with large amounts of data. Facial recognition is performed by programs of this type.”

“In the case of GANs, the training of the classifier is combined with another program that generates images (with a certain randomness) and whose criterion is that they be accepted by the discriminator, itself still under construction. A feedback process is thus built between the two statistical models, which are produced simultaneously: as the classifier improves, so does the generator. These training processes often have human supervision to improve the classification criteria,” adds Blanco, who is also a tenured professor at Famaf, National University of Córdoba.
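The feedback loop Blanco describes can be sketched numerically. The toy example below is a hypothetical illustration, not anything from the article: it trains a two-parameter generator against a logistic discriminator on one-dimensional data standing in for "real faces". Real GANs use deep networks, but the adversarial loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real data": a Gaussian centred at 4.0 stands in for real faces; an
# image GAN works the same way, just with neural networks instead of
# the two-parameter models used here.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b and discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, n = 0.05, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0
    # (gradients of the binary cross-entropy loss).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: push d(fake) -> 1, i.e. fool the discriminator
    # (non-saturating generator loss).
    d_fake = sigmoid(w * fake + c)
    grad_fake = (d_fake - 1) * w
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After the feedback loop, generated samples should cluster near the
# real data's mean (4.0), which the generator never saw directly.
samples = a * rng.normal(0.0, 1.0, 2000) + b
print(samples.mean())
```

As each side improves, its gradient flows through the other's output, which is exactly the "the more the classifier improves, the generator also improves" dynamic of the quote.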

Regarding how it works, it could be said that, like all generative artificial intelligence, it operates with inputs and outputs: input data is processed with high computational power and produces a new generated result that did not exist before.

"You need to enter input data, such as real faces of different people, and the model delivers new faces with realistic-looking features as a result," explains Camilo Gutiérrez Amaya, Head of the ESET Latin America Research Laboratory.

This creates a problem: studies published as early as the end of 2021 reported that AI-created images were becoming more and more convincing, and that there was a 50% chance of confusing a fake face with a real one.

“Although GAN faces are of great help to industries such as video games, for generating characters' faces, or for visual effects later used in movies, they can also be used for malicious purposes,” adds Gutiérrez Amaya.

This has begun to pose serious problems in areas such as journalism and the fight against disinformation, among others.

Creating fake profiles

Fake profiles, a problem that can occur with GAN faces.

Photo: AFP

GAN networks make it possible to create images or even videos of people, known or not, to trick victims into revealing sensitive information such as usernames and passwords, or even credit card numbers.

For example, they can create faces of fictitious people that are later used to build profiles of supposed customer service representatives of a company. These profiles then send phishing emails to that company's customers to trick them into revealing personal information.

Fake news: a problem out of control

Misinformation and fake news, a problem facing newsrooms around the world.

Photo: War On Fakes

Content generation is a huge problem for fake news: considering that ChatGPT can produce a story in just minutes, combined with an image generated by artificial intelligence, the combo can be deadly. With the advance of machine learning technologies, the challenge only seems to grow.

With images, there have already been cases of deepfakes posing as politicians to distribute fake news, as happened with Ukraine's President Volodymyr Zelensky: a fake video, uploaded to compromised websites in that country, showed him calling on Ukrainian soldiers to lay down their weapons.

In other scenarios, by contrast, it is evident that the result is a parody, such as a deepfake of the Argentine Minister of Economy, Sergio Massa, placed into a scene from the TV series The Office.

Another controversial example was caused by an artist who created a portrait with artificial intelligence.

Identity Theft

Creating faces similar to those of public figures, such as celebrities, can facilitate identity theft or phishing scams.

“For example, think of facial recognition as an authentication method, and of the possibilities GAN faces offer as an instrument to circumvent it and access a third party's account. On the other hand, it is also important to mention that companies are aware of the risks and are developing features to detect these fake images,” they explain from ESET.

Identity theft can lead to multiple problems, from account takeover to fraud committed in the name of third parties.

Fraud on dating apps and social networks

Bumble dating app.

Photo: Shutterstock

“With the advance of technologies such as GAN networks, cybercriminals can create fake faces to be used in fake profiles on dating apps and/or social networks as part of their strategy to deceive and then extort money from victims. Companies like Meta have reported an increase in fake profiles that used computer-generated artificial images,” they explain from ESET.

This becomes particularly complex, for example, in dating apps, a gateway to multiple types of scams.

Tips to avoid falling into the trap

ESET shared these tips to help users stay alert:

  • Verify the source:

    Make sure the source of the image is trustworthy and verify the veracity of the image itself.

  • "Not all that glitters is gold":

    be wary of images that seem too perfect.

    Images generated by this type of technology tend to look perfect and flawless, so it is worth being skeptical of them.

    If an image or video looks suspicious, seek more information about it from other reliable sources.

  • Verify the images and/or videos:

    There are online tools, such as Google's reverse image search, that can help verify the authenticity of images and videos.

  • Update security systems:

    Keep security systems up to date to protect against scams and malware.

  • Install reputable antivirus software:

    It will not only help detect malicious code, but also fake or suspicious sites.

  • Do not share confidential information:

    Do not share personal or financial information with anyone you do not know.
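The reverse-image-search tip above works by matching near-duplicate images, and a common building block for that is a perceptual hash. The sketch below is a hypothetical illustration of the simple "average hash" on synthetic arrays; real tools decode actual photos and use far more robust hashes.

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Perceptual 'average hash': downscale to hash_size x hash_size
    blocks and threshold each block at the overall mean. Images that
    look alike yield hashes with a small Hamming distance."""
    h, w = gray.shape
    bh, bw = h // hash_size, w // hash_size
    cropped = gray[:bh * hash_size, :bw * hash_size]
    small = cropped.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

# Demo with synthetic "images": an original, a slightly brightened
# copy (a typical near-duplicate), and an unrelated random image.
rng = np.random.default_rng(1)
original = rng.random((64, 64))
brightened = np.clip(original + 0.05, 0.0, 1.0)
unrelated = rng.random((64, 64))

h0 = average_hash(original)
print(hamming(h0, average_hash(brightened)))  # near-duplicate: small distance
print(hamming(h0, average_hash(unrelated)))   # different image: large distance
```

A small Hamming distance suggests the same underlying picture (perhaps re-compressed or retouched), which is the kind of signal reverse-search services combine with much larger indexes.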

However, the underlying problem remains, and it is likely to get worse.

“Being able to distinguish whether an image, a face, was built with a GAN is not something that can be resolved in general, especially given the rapid evolution of these systems. There are studies, and programs have been created that can determine, with a certain probability, whether some images were created by a GAN. Given the multiplicity and evolution of image-generation programs, it will likely become increasingly difficult to discriminate the origin of a given image,” warns Blanco.

“Simple human inspection of an image can sometimes work for some kinds of programs, and there are certain clues one could look for, but there are no precise or general methods. Everything indicates that distinguishing by human inspection will become more and more difficult, and impossible before long,” he concludes.
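As Blanco notes, detectors work only "with a certain probability". One published line of research looks for excess high-frequency energy left in the Fourier spectrum by the upsampling layers of some generators. The toy sketch below illustrates that idea on synthetic stand-in images; it is a hypothetical demonstration, not a practical detector, which would require trained models and real data.

```python
import numpy as np

def high_freq_energy_ratio(gray):
    """Share of spectral energy in the highest-frequency ring of the
    2D FFT. Upsampling artifacts (e.g. checkerboards) inflate it."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = power.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # distance from spectrum centre
    return float(power[r > 0.75 * (min(h, w) / 2)].sum() / power.sum())

# Smooth synthetic "natural" image: low-frequency content only.
x = np.linspace(0, 2 * np.pi, 64)
natural = np.sin(x)[None, :] * np.cos(x)[:, None]

# The same image plus a checkerboard pattern, mimicking the upsampling
# artifacts reported in some GAN outputs.
checker = np.indices((64, 64)).sum(axis=0) % 2
artifact = natural + 0.3 * checker

# The artifact-laden image carries a larger high-frequency share.
print(high_freq_energy_ratio(natural) < high_freq_energy_ratio(artifact))
```

Because generators evolve quickly, any fixed threshold on such a statistic decays over time, which is precisely the arms race Blanco describes.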

See also

Controversy over a German artist who won a contest with a photo created with Artificial Intelligence and turned down the prize: "I applied as a cheeky monkey"

Bard, under the magnifying glass: Google employees criticize the artificial intelligence system

Source: Clarín

