Image: the homepage of a website that offers to undress anyone in seconds.
Deepfakes, hyperrealistic reproductions that artificial intelligence has taken to unprecedented levels of accessibility and dissemination, are flooding everything, as demonstrated this week by the fake nude recreations of teenagers in Almendralejo (Badajoz). "We can all be victims. The internet is a jungle. There is no control, and it is not a game: it is a crime," warns Maribel Rengel, president of Freapa, the Extremaduran federation of parents of students. The most cited report, by Deeptrace, tracked up to half a million fake files on the web, with growth of nearly 100% every six months. But that is the work of a single entity; the actual volume is unknown. What is known is that 96% of it is pornography, and that these creations have jumped from images to voice, text, documents and facial recognition, or a combination of them all, to become one of the most worrying emerging fraud vectors of a still-unregulated technology.
The European Telecommunications Standards Institute (ETSI) warns, in one of the most comprehensive and up-to-date reports on the subject, of the dangers of the "increasingly easy" use of artificial intelligence to create fake files. ETSI, made up of more than 900 entities from all sectors, warns: "Due to significant advances in the application of artificial intelligence to the generation and modification of data, new threats have emerged that can lead to substantial risks in various environments, ranging from personal defamation and the opening of bank accounts using false identities (through attacks on biometric authentication procedures) to campaigns to influence public opinion." These are the main targets:
Pornography. Danielle Citron, a law professor at Boston University and author of Hate Crimes in Cyberspace, says: "Deepfake technology is used as a weapon against women by inserting their faces into pornography. It is terrifying, shameful, degrading and silencing. Deepfake sex videos tell people that their bodies are not their own, and they can make it difficult for them to have relationships online, get or keep a job, or feel safe."
Rengel shares that view after the case that has affected several high schools in her region. "The feeling is one of vulnerability, and it is very difficult to control. The damage is irreparable. They can ruin a girl's life," says the Extremaduran representative, who is urgently demanding a state pact to regulate the use of this technology.
These attacks consist of disseminating fake videos, images, audio or text on social networks to ruin victims' reputations or humiliate them. The Deeptrace report put the number of views of this material at more than 134 million. Well-known women have been victims of these practices for years. The singer Rosalía is the latest famous Spaniard on a long international list headed by Emma Watson and Natalie Portman.
But, as Rengel warns, "no one is safe." At the click of a button, on open platforms or in encrypted messaging systems, you can find applications and pages that offer to "undress anyone" in seconds. They warn that they are for people over 18 and that you cannot use photos of a person without their consent, but they do not check: they offload the responsibility onto the user and claim that the goal is "entertainment."
This is the frame leaked by the streamer, where a chrome tab containing an adult site featuring a deepfake video of Pokimane and Maya Higa can be seen pic.twitter.com/IeWwA8BmUP
— Dexerto (@Dexerto) January 30, 2023
Content creator Brandon Ewing, known as Atrioc, was caught live earlier this year with an open page of pornographic deepfakes featuring fake recreations of his colleagues Pokimane, Maya Higa and Blaire (she asks that her last name be withheld to protect her family), known as QTCinderella. Ewing claimed the visit was "accidental," but the victims' reaction was devastating.
Higa said she felt "disgusted, vulnerable and outraged." QTCinderella posted a harrowing video expressing her revulsion: "Although it's not my body, it might as well be. It was the same feeling as a violation. The constant exploitation and objectification of women is exhausting!" Pokimane stressed the need for "consent for certain practices, such as sexualizing women."
The United States Federal Bureau of Investigation (FBI) has warned of the rise in cybercriminals who use images and videos taken from social networks to create deepfakes with which to harass and extort victims.
Hoaxes to influence public opinion. Giorgio Patrini, founder of Deeptrace, says: "Deepfakes are already destabilising political processes. Without defensive countermeasures, the integrity of democracies around the world is at risk."
The ETSI report describes these attacks as fake posts that create the impression that people in influential positions have written, said or done certain things. "It applies to all purposes where the stakes are high and the benefit justifies the effort from an attacker's perspective," the entity warns. They can be used to discredit a public figure, manipulate stock prices, attack competitors, influence public opinion ahead of elections or plebiscites, extend the reach of a disinformation campaign, or serve as propaganda, especially in wartime.
As a matter of principle, I never post or link to fake or false content. But @MikaelThalen has helpfully whacked a label on this Zelensky one, so here goes.
I've seen some well-made deepfakes. This, however, has to rank among the worst of all time. pic.twitter.com/6OTjGxT28a
— Shayan Sardarizadeh (@Shayan86) March 16, 2022
In March 2022, a deepfake video of Ukrainian President Volodymyr Zelensky was published in which he appeared to announce capitulation to the Russian invasion. American politician Nancy Pelosi suffered another in which she seemed drunk, and neither Donald Trump, Barack Obama, the Pope nor Elon Musk has been spared similar creations.
Attacks on authenticity. The ETSI report explicitly addresses remote biometric identification and authentication procedures. These are widely used in many countries to give users access to digital services, such as opening bank accounts, because they reduce costs and make it easier to contract products.
In 2022, the hacker collective Chaos Computer Club carried out successful attacks against video-identification procedures using deepfake methods.
Internet security. Many attacks rely on human error to gain access to corporate systems, as in the recent ransomware attack on the Seville City Council. Hyperrealistic fake files multiply attackers' chances of getting in by supplying false data that is difficult to identify: text written in the style of the supposed sender, plus voice and video of people supposedly communicating with the victims. The goal is to get victims to click on malicious links, which attackers can use to steal login credentials or distribute malware (malicious programs).
In 2020, a bank manager in Hong Kong was tricked by attackers who cloned the voice of a bank director into authorizing a transfer of $35 million, according to Forbes. A year earlier, an energy company in the UK suffered a similar attack that cost it $243,000.
Not gonna lie... I love nerds 👨 🎓 @Harvard @milesfisher #deeptomcruise pic.twitter.com/SpZP3CggGD
— Chris Umé (@vfxchrisume) April 29, 2022
Cinema with fake actors and screenwriters. In the artistic field, this technology can be used to create cinematic content. The SAG-AFTRA union, which represents 160,000 entertainment-industry workers, called a strike in Hollywood in July to demand, in addition to better pay, guarantees of protection against the use of artificial intelligence in productions.
Both actors and writers are demanding regulation of technologies that can write a script or replace actors through deepfakes, which replicate a person's appearance, voice and movements. The generalization of these tools would allow large producers to dispense with human beings in the creation of content.
British actor Stephen Fry, narrator of the UK editions of the Harry Potter audiobooks, has denounced the use of his voice without his consent in a documentary. "They could have me read anything, from a call to storm Parliament to hardcore porn. All without my knowledge and without my permission," he said.
You can follow EL PAÍS Tecnología on Facebook and Twitter or sign up here to receive our weekly newsletter.