The warning when a nude photo is detected in the app, here in the English version
First things first: Apple is not suddenly going to start checking all photos and videos shared through the Messages app for nudity.
The feature can now also be used in Germany and France. It is part of Family Sharing in the beta versions of iOS 16, iPadOS 16 and macOS Ventura released on Wednesday.
This means the respective beta version must be installed manually, and the child protection feature must also be enabled manually, before the scanner becomes active.
If the child then receives a photo that the software identifies as containing nudity via the Messages app (Apple's SMS alternative), the image is blurred and help options appear. The child can then block the contact, tell someone else, or view the content after explicit confirmation.
The scanner works in both directions.
Even if the child tries to send such a picture themselves, they are presented with a warning.
Apple wants to prevent anyone from persuading children to send or view nude photos of themselves.
This is intended to prevent sexual abuse and so-called cybergrooming.
No automatic warning to parents or to Apple
According to the company, the images are only checked locally on the iPhone.
This is called client-side scanning.
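To illustrate the principle only: client-side scanning means the classification runs entirely on the device, and only the local user interface reacts to the result. The sketch below is a hypothetical stand-in, not Apple's actual (unpublished) implementation; the classifier, function names and option labels are invented for illustration.

```python
# Hypothetical sketch of client-side scanning: the check runs locally and
# no result is transmitted off the device. The "classifier" here is a
# placeholder heuristic; a real system would run an on-device ML model.

def looks_like_nudity(image_bytes: bytes) -> bool:
    """Stand-in for an on-device image classifier (invented for this sketch)."""
    return b"nude" in image_bytes  # placeholder, not a real detection method

def handle_incoming_image(image_bytes: bytes) -> dict:
    """Decide how the messaging UI should present an incoming image."""
    if looks_like_nudity(image_bytes):
        # The image is blurred and help options are shown; crucially,
        # nothing is reported to the vendor or to the parents.
        return {
            "blurred": True,
            "options": ["block contact", "tell someone", "view after confirmation"],
        }
    return {"blurred": False, "options": []}
```

The relevant design point is that `handle_incoming_image` has no network call at all: the decision and its consequences stay on the device.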
Neither Apple nor the parents are automatically informed when the software believes it has recognized such material. Apple's English-language description does not make clear what is and is not considered "nudity".
In autumn 2021 there was talk of "sexually explicit photos".
When asked, Apple has now stated that visible genitals alone can be enough to trigger the notifications.
According to the company, it collects feedback through various channels in order to improve detection. However, the detection itself can only be updated as part of an operating system update, not continuously.
Apple originally announced a larger package to curb the spread of abuse images through its services and devices, as well as grooming attempts.
But at least one of the company's ideas faced so much backlash from civil rights organizations in 2021 that Apple temporarily shelved the plan.
Apple's goal was to automatically detect and report child sexual abuse images on its customers' devices as soon as they were uploaded to iCloud online storage.
To do this, the photos were to be automatically compared, again locally on the device, with a database of known abuse images provided by child protection organizations.
Apple's idea was picked up by politicians
If Apple's filter identified a certain number of known abuse images, Apple employees would first review the relevant files and, if necessary, report them to the authorities.
(You can read everything you need to know about the technical details here.)

At the time, the criticism was that authoritarian governments could demand that Apple use the new technology to search for other content, and thus for censorship and surveillance.
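The shelved scheme described above can be sketched roughly as follows. This is a hypothetical illustration only: Apple described a perceptual hashing system (NeuralHash), whereas this sketch uses a cryptographic hash, an invented hash database and an invented review threshold purely to show the flow of "local match, then threshold, then human review".

```python
# Hypothetical sketch of the shelved iCloud matching idea: photos are
# hashed on the device and compared against a database of hashes of known
# abuse images. Only past a threshold of matches would human review occur.
# Hashes, database contents and threshold are invented stand-ins.

import hashlib

KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}
REVIEW_THRESHOLD = 2  # invented value for the sketch

def image_hash(image_bytes: bytes) -> str:
    # Stand-in: a real system would use a perceptual hash that tolerates
    # resizing and recompression, not an exact cryptographic hash.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(images: list[bytes]) -> int:
    """Count uploads whose hash appears in the known-image database."""
    return sum(image_hash(img) in KNOWN_HASHES for img in images)

def needs_human_review(images: list[bytes]) -> bool:
    """Only past the threshold would employees view the flagged files."""
    return count_matches(images) >= REVIEW_THRESHOLD
```

The threshold is what the critics' argument hinged on: whoever controls the hash database controls what the filter searches for, which is why civil rights organizations warned it could be repurposed.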
Although Apple rejected this fear, the debate was not without consequences.
To date, the company has not rolled out the feature anywhere.
But at the political level, for example in the EU Commission's proposal to combat sexual abuse, which critics call "chat control," client-side scanning is now a regularly proposed instrument.
It cannot be ruled out that Apple will one day be legally forced to activate the technology.