
Bullying with AI tools: Any of your photos can be used against you


No, the armed, naked clown you'll soon see on WhatsApp isn't really your kids' teacher. But creating such images is now child's play.

I don't know exactly when tech journalism went from permanent optimism to permanent dystopia.

But the days when we reporters found new technology initially exciting and not immediately scary seem to be over.

"AI image generators can now easily create life-destroying deepfakes," headlined this week's Ars Technica, for example.

»Technology Review« recently ran the headline: »The viral AI avatar app Lensa undressed me – without my consent.«

And TechCrunch asks, "Is ChatGPT 'a virus unleashed on the world'?"

On the one hand, all three articles deal with current manifestations of so-called artificial intelligence (AI).

On the other hand, all three ask why people are already abusing the new technology, which lacks safeguards:


ChatGPT is an interactive text generator that can create SPIEGEL columns, computer code, business plans and homework – and lots of bullshit.

I wrote more about this last week in this newsletter.


Lensa is the most downloaded app of the past week.

In the words of my colleague Jörg Breithut: »A selfie machine that uses AI to create completely exaggerated portrait images« – but one which, due in part to carelessly compiled training material, tends to sexualize women.

The Ars Technica article discusses the text-to-image generator Stable Diffusion, which we've covered extensively here, and an add-on originally conceived by Google researchers called Dreambooth.

It can fine-tune an already trained model such as Stable Diffusion on arbitrary new images.

A handful of images is enough.

Google has kept the code under wraps, but others figured out how it worked and released it.

This sounds abstract at first, but it becomes problematic as soon as images of people come into play.

Because Dreambooth also works with portrait photos.

If you feed the software three to five pictures of yourself, it can then render you into any scene.
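The mechanism behind this can be sketched in miniature. Real Dreambooth fine-tunes the denoising network of a diffusion model on the few subject photos and adds a »prior preservation« term so the model doesn't forget what it already knows. The following toy is purely illustrative (the one-parameter »model« and all numbers are invented): a few subject examples pull the weight one way, while the prior term anchors it near the pretrained state.

```python
# Toy sketch of the Dreambooth-style training objective, NOT the real
# implementation: a single scalar weight stands in for a diffusion model.
# Subject loss: sum of (w - t)^2 over a handful of "subject photos" t.
# Prior-preservation loss: lam * (w - w_frozen)^2, where w_frozen is the
# untouched pretrained weight.

def finetune(subject_targets, prior_target, lam, steps=2000, lr=0.01):
    w = prior_target          # start from the "pretrained" weight
    w_frozen = prior_target   # frozen copy of the original model
    for _ in range(steps):
        # gradient of the subject loss
        grad = sum(2 * (w - t) for t in subject_targets)
        # gradient of the prior-preservation loss
        grad += lam * 2 * (w - w_frozen)
        w -= lr * grad        # plain gradient descent
    return w

# Three "subject photos" pull the weight toward 5.0; the prior anchors it at 0.
pure = finetune([5.0, 5.0, 5.0], prior_target=0.0, lam=0.0)         # ends near 5.0
regularized = finetune([5.0, 5.0, 5.0], prior_target=0.0, lam=3.0)  # ends near 2.5
```

With the prior term switched off, the toy model fits the subject data completely; with it, the result is a compromise between new data and old knowledge – which is why Dreambooth can learn a new face from a handful of photos without wrecking the rest of the model.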

But that's not what malicious people would do.

Instead, they would take photos of other people and show them in extremely unfavorable settings and situations: ex-partners, classmates, hated colleagues, political opponents.

The footage could come from social media profiles or from videos.

The digitally created images aren't perfect: if you look closely, you'll see indications that the material is computer-generated.

But Ars Technica's experiment with a real person was so successful, according to the article, that publishing the images could have ruined the reputation of the volunteer test subject.

So, as a precaution, the editors—again using AI—generated images of a nonexistent person they named John and fed them to Dreambooth.

The journalists were then able to turn »John« into a clown, a member of a paramilitary group, a burglar, a porn star or a drug user in a matter of seconds.

The technology is freely available and can no longer be contained

What can be done with "John" would work with anyone you could find a few training photos of.

Of course, you don't necessarily need AI to fake a photo; image-editing programs such as Photoshop are sufficient.

But AI tools facilitate and accelerate the process to such an extent that the effort is no longer a barrier.

The technology is out in the world, freely accessible, and can no longer be contained.

"Everyone should know that something like this is now possible," writes Ars Technica reporter Benj Edwards.

His first suggestion for a solution is as drastic as it is hopeless: »At the moment you can try to get all your photos off the Internet.

Maybe that's a good idea.«

But that's not realistic, he admits.

And I agree with him there.

I tried this myself because I no longer wanted my face to end up, unasked, in the training databases of facial recognition software.

But even after weeks of effort, I was not able to remove all of my photos from the Internet.

All that remains is a healthy, critical approach to digital images.

Rule of thumb: The armed, naked clown in the classroom that you may soon receive on WhatsApp may look like your children's class teacher.

But he probably isn't.

Our current Netzwelt reading tips

  • »Microsoft's answer to the iPad Pro is a late riser« (six minutes of reading)

    The Surface Pro 9 is available with either an Intel or Microsoft's ARM-based SQ3 processor.

Both variants have their advantages and disadvantages, as Matthias Kremp's test shows – including a Windows emergency mode.

  • "The best new games for the Advent season" (eight minutes of reading)

    From "Sonic" to "Bayonetta" to "God of War", as well as trombones, Pokémon and two medieval games: Matthias Kreienbrink recommends ten video games.

  • "iPhone apps can soon cost up to 12,000 euros" (three minutes of reading)

    Why is Apple changing the rules for prices in the App Store?

    And who is asking four-digit sums for an app now?

    Here are the answers.

External links: Three tips from other media

  • "Musk's ruthless behavior does not follow any master plan" (Podcast, partly in English, 57 minutes)

    Lisa Hegemann and Jens Tönnesmann from Zeit Online and "Zeit" together with marketing professor and Musk critic Scott Galloway calmly take the whole Twitter drama apart.

  • "ChatGPT cheats: Invents sources that don't even exist" (three minutes of reading)

    AI nonsense again: Barbara Wimmer on a scientist's experiences with the "incredibly plausible-sounding hallucinations" of an AI.

  • »The World Cup of Microsoft Excel« (English, four minutes of reading)

    »A small vision of how we want the Internet to be, not how it really is«: Jacob Stern reports on the Microsoft Excel World Cup for »The Atlantic« and the friendly atmosphere of the competition.

I wish you a fake-free week,

Patrick Beuth

Source: spiegel

All tech articles on 2022-12-14

