By Kelvin Chan and Ali Swenson —
The Associated Press
Content falsification through generative artificial intelligence (AI) is quickly becoming one of the biggest problems we face online.
Misleading images, videos and audio are proliferating as a result of the rise and misuse of these tools.
Almost daily we encounter AI deepfakes, as this false content is known, depicting anyone from Taylor Swift to Donald Trump, and it is increasingly difficult to distinguish what is real from what is not.
Video and image generators such as OpenAI's DALL-E and Sora, or Midjourney, make it easy for people without technical expertise to create deepfakes: just type a request and the system will spit out a fake image or video.
Have you seen the social media images of Pope Francis strutting in a huge white puffer jacket with a crucifix on his chest?
They are fake images generated by AI.
Fake image of Pope Francis, generated with artificial intelligence. (Telemundo News)
These fake images may seem harmless.
But they can be used to carry out scams and identity theft or propaganda and electoral manipulation.
Here's how to avoid being fooled by deepfakes.
How can you detect a fake AI image?
In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation.
Fact-checkers have flagged images that had obvious errors, such as hands with six fingers or glasses with differently shaped lenses.
But as artificial intelligence has improved, it has become much more difficult to detect fake images.
Some widely shared advice, such as looking for unnatural blinking patterns among people in fake videos, is no longer valid, explained Henry Ajder, founder of the consulting firm Latent Space Advisory and a leading expert in generative AI.
Still, there are some clues to detecting these images, Ajder said.
Many deepfake photos, especially of people, have an electronic glow, “a kind of aesthetic softening effect” that leaves the skin “looking incredibly polished,” Ajder said.
However, he cautioned that creative prompts can sometimes eliminate this and many other signs of AI manipulation.
One tip is to carefully examine the consistency of shadows and lighting.
Often the main subject is clearly in focus and looks convincingly realistic, but elements in the background of the image may not be as realistic or polished.
Look closely at the faces
Face swapping is one of the most common methods of generating fake images.
Experts advise looking closely at the edges of the face. Does the skin tone of the face match that of the rest of the head or body? Are the edges of the face sharp or blurry?
If you suspect that video of a person speaking has been manipulated, look closely at the mouth.
Do the lip movements match the audio perfectly?
If not, you have a clue that it could be fake content.
Ajder suggests looking at the teeth.
Are they clear, or are they blurry and somehow inconsistent with how teeth look in real life?
Cybersecurity company Norton says the algorithms may not yet be sophisticated enough to generate individual teeth, so the lack of contours between each of the teeth could be another clue.
Think about the reality of the situation
Sometimes context matters.
Take a moment to consider whether what you're seeing is plausible.
Journalism website Poynter warns that if you see a public figure doing something that seems “exaggerated, unrealistic or out of character,” it could be a deepfake.
For example, would the Pope really be wearing a luxury quilted jacket, as shown in the notorious fake photo we mentioned?
If he had, wouldn't additional photos or videos from legitimate sources have been published?
Use artificial intelligence to detect misuse
Another approach is to use AI to fight AI.
Microsoft has developed an authentication tool that can analyze photos or videos to give a confidence score about whether they have been manipulated.
Chipmaker Intel's FakeCatcher uses algorithms to analyze the pixels of an image and determine whether it is real or fake.
There are online tools that promise to detect fakes if you upload a file or paste a link to suspicious material.
But some, like Microsoft's authenticator, are only available to select partners and not to the public.
This is because researchers do not want to tip off bad actors and give them a greater advantage in developing deepfakes.
Open access to these screening tools could also lead people to believe that they are “god-like technologies that can use critical thinking for us,” when instead we should be aware of their limitations, Ajder said.
The obstacles to detecting fakes
Having said all this, artificial intelligence has advanced at dizzying speed and AI models are being trained with data from the Internet to produce content of increasingly higher quality and with fewer errors.
That means there's no guarantee these tips will still be valid even a year from now.
Experts say it could even be dangerous to put the burden of becoming digital investigators on ordinary people, because it could give them a false sense of confidence as it becomes increasingly difficult, even for trained eyes, to detect deepfakes.