The volume of AI-generated misinformation, particularly election-related deepfake images, has increased by an average of 130% per month on X over the past year.
The figures come from a study published by the Center for Countering Digital Hate (CCDH), a British non-profit organization committed to fighting online hate speech.
To measure the growth of the phenomenon - most recently, fake photos of Trump with African Americans, generated by his supporters - the study examined the four most popular image generators: Midjourney, OpenAI's DALL-E 3, Stability AI's DreamStudio, and Microsoft's Image Creator.
All the companies examined have written policies against the creation of misleading content and have joined an agreement among major tech companies to prevent misleading AI content from interfering with the 2024 elections.
The researchers said the AI tools generated misleading images in 41% of their tests and were more susceptible to prompts requesting photos depicting election fraud, such as ballots in the trash, than to requests for images of Biden or Trump.
According to the analysis, ChatGPT Plus and Image Creator blocked all requests for images of the candidates, while Midjourney performed the worst among the tools, generating misleading images in 65% of tests.
“The possibility of AI-generated images serving as 'photographic evidence' could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections,” the researchers write.
Reproduction reserved © Copyright ANSA