Clearview AI now offers facial recognition to schools

2022-05-25T13:36:13.933Z


The New York company is monetizing its illegally assembled biometrics database with a new product. After the mass murder in Uvalde, anything that sounds like more security for schools is likely to sell.


What is perhaps the sleaziest biometrics shack on the planet no longer wants to make money only from facial recognition for law enforcement, but also from facial recognition for schools.

I'm talking about Clearview AI, my favorite example of questionable AI applications.

I'm not alone in my opinion: this week, the UK data protection regulator ICO ordered the New York firm to pay a £7.5 million fine and to delete all data on UK residents.

That is because collecting facial photos from the internet without consent to build a gigantic, commercially marketed biometrics database is, on the one hand, Clearview's specialty and, on the other, a violation of data protection law.

And not only in Great Britain: the company has already received similar rebukes in France, Canada, Australia and Germany.

In Italy, it was even fined 20 million euros.

(Read more about how to get out of the Clearview database as an internet user here.)

A few days before the ICO decision, Clearview had also agreed to a settlement with the civil rights organization ACLU to end a lawsuit in the US state of Illinois.

Among other things, the agreement stipulates that Clearview may not make its face database available to most private entities in the US, neither for free nor for payment.

So the company is now trying to sell another product: an access system based on voluntary facial recognition.

As the Reuters news agency reported on Wednesday, users are supposed to upload photos of themselves.

When they enter a physical or digital space, a camera image is then compared with the previously uploaded photo.
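For illustration only: what Reuters describes is a one-to-one check (is this the person who enrolled?), not a search of a large database. A minimal sketch of such a comparison, assuming a hypothetical embed() function in place of a real face-embedding model, might look like this; the function names, threshold and image sizes are my assumptions, not Clearview's actual system.

```python
# Minimal sketch of 1:1 face verification: a stored enrollment photo is compared
# with a live camera image. embed(), the threshold and the sizes are illustrative
# assumptions, not Clearview's actual system.
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model that maps a face image to a vector."""
    # A real system would run a neural network here; this placeholder just
    # flattens and normalizes the pixels so the example is runnable.
    v = image_pixels.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def verify(enrolled_photo: np.ndarray, camera_frame: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Return True if the camera frame likely shows the enrolled person."""
    similarity = float(np.dot(embed(enrolled_photo), embed(camera_frame)))
    return similarity >= threshold

# Example usage with random "images" standing in for real photos:
rng = np.random.default_rng(0)
enrolled = rng.random((112, 112))
frame = enrolled + rng.normal(scale=0.01, size=enrolled.shape)  # slightly different shot
print(verify(enrolled, frame))  # True for near-identical inputs
```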

According to Clearview CEO Hoan Ton-That, one of the first customers is a US company that sells access systems to schools.

For several years, the use of such technologies has been part of attempts to better protect US schools from armed attackers.

It is foreseeable that, after terrible events like the one in Uvalde, Texas, Clearview will hardly be criticized for this deal, if it really comes about.

Equally foreseeable is that facial recognition may not be the right answer to Uvalde.

According to Reuters, Hoan Ton-That emphasized that the new technology will not be used to collect additional photos for the large Clearview database.

However, the access system was trained on facial photos that were collected without consent and, at least in several countries, illegally.

So Clearview is happily monetizing its past data breaches.

Our current Netzwelt reading tips for SPIEGEL.de

  • "How We Checked the Xinjiang Police Files" (seven minutes read)


    The documents and photos from China's gulags are harrowing.

    Particularly interesting from Netzwelt's point of view: What means can be used to check whether they are authentic?

    Alexander Epp and Roman Höfner on metadata, satellite images and – see above – facial recognition.

  • "Authorities record large wave of fake Europol calls" (three minutes of reading)


    "This is a message from the Federal Police Department": Max Hoppenstedt has received several such spam calls, and he is not alone.

    In the article he explains how many complaints there have been and why the police can do little against the "Europol scam".

  • "Space battle under the cold buffet" (seven minutes of reading)


    Did someone say game night?

    And why play on the boring dining table when you can buy a special table for several thousand euros called "Zeus, King of Gods"?

External links: three tips from other media

  • »Darknet Diaries Episode 114: HD« (Podcast, English, 78 minutes)


    The story of the notorious hacking tool Metasploit, told by its inventor HD Moore: nerdy as usual, but also extremely interesting insights from the »Darknet Diaries«.

  • "All these images were generated by Google's latest text-to-image AI"


    Text-to-image generators are apparently the next big AI thing.

    You type in »raccoon in an astronaut helmet looks out the window at night« and the software creates the image.

    OpenAI's is called "DALL-E"; now Google says it is gradually catching up with "Imagen".

    »The Verge« shows an impressive selection of images and explains why the technology is problematic.

  • "Protection of minors means data protection" (three minutes of reading)


    What does a 16-year-old think of the EU Commission's plan to force messenger operators to screen encrypted communications for grooming attempts?

    Not much, despite negative experiences online, as Carla Siepmann writes on Netzpolitik.org.

I wish you a sunny holiday,

Patrick Beuth

Source: SPIEGEL
