
How to prevent facial recognition systems from deciphering the photos on your social networks

2021-01-28T17:13:37.808Z


A team of researchers has created a tool that modifies photographs so they cannot be processed by machine learning systems



In the 21st century, anonymity is not an easy thing.

Not with our lives portrayed on our social networks, and not with companies like the American firm Clearview collecting billions of photos of citizens around the world to develop and offer facial recognition services.

“What companies like this do is sift through tons of images from social media and create a huge database with many photos for each person. The one on your LinkedIn profile, the one on Facebook…”, explains Micah Goldblum, a researcher specializing in machine learning at the University of Maryland.

To keep our photos from ending up in those sprawling galleries, we have two options: not uploading them to the internet, or making them unrecognizable.

LowKey bets on the second option.

This tool modifies images with a twofold objective: the people portrayed should remain recognizable to the human eye, but become indecipherable to facial recognition systems.

Once images are processed by the platform, the accuracy with which facial recognition systems identify them falls below 1%.

The application is the work of a team of researchers that includes Goldblum along with Valeria Cherepanova, Tom Goldstein, Shiyuan Duan, John Dickerson, Gavin Taylor and Harrison Foley.

LowKey's technology uses so-called adversarial attacks, which are designed to deceive machine learning systems.
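
As an illustration of the general technique (not LowKey's own algorithm), here is a minimal sketch in the style of the fast gradient sign method, one of the best-known adversarial attacks; the tiny untrained classifier and random image are stand-ins for a real model and photo.

```python
# FGSM-style adversarial attack on a toy classifier: one small signed-gradient
# step often changes the prediction while barely changing the pixels.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)        # stand-in for a real photo
label = model(image).argmax(dim=1)      # the model's current prediction

# Take one step in the direction that most increases the model's loss.
image.requires_grad_(True)
F.cross_entropy(model(image), label).backward()
epsilon = 0.03                          # per-pixel perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", label.item(), "after:", model(adversarial).argmax(dim=1).item())
```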

In research, these attacks are often used to set up a kind of cat-and-mouse game in which advances in one model drive improvements in the other, but they are of little practical use to the ordinary citizen.

"A lot of what has been done in this field focuses on problems that are only of interest to researchers," says Goldblum.

In contrast, LowKey is already available to anyone on a website that lets you upload original images, adjust the intensity of the attack, and download the poisoned versions.

How does it do it?

The tool retouches how photos appear from the machine's point of view.

"These neural networks take your face and transform it into a series of numbers that describe its properties," says Goldblum.

This translation of the face into numbers is unreadable to us: it does not represent the width of the nose, nor is it associated with hair color.

It is a representation of the way the computer thinks, so fooling the system requires no makeup or wigs, only modifications that result in a different numerical series.
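
A hedged sketch of that idea, with a toy network standing in as the face encoder: nudge the pixels, within a small budget, until the "series of numbers" the encoder produces drifts away from the original. The encoder, budget and step size are illustrative assumptions, not LowKey's published settings.

```python
# Projected gradient ascent: push the image's feature vector away from the
# original while keeping every pixel within a small budget of its old value.
import torch

torch.manual_seed(0)
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
original = torch.rand(1, 3, 64, 64)
identity = encoder(original).detach()   # the numbers that identify the face

perturbed = original.clone()
epsilon, step = 0.05, 0.01              # pixel budget and step size
for _ in range(20):
    perturbed.requires_grad_(True)
    torch.norm(encoder(perturbed) - identity).backward()
    with torch.no_grad():
        perturbed = perturbed + step * perturbed.grad.sign()
        # Project back into the pixel budget and the valid pixel range.
        perturbed = original + (perturbed - original).clamp(-epsilon, epsilon)
        perturbed = perturbed.clamp(0, 1)

print("feature drift:", torch.norm(encoder(perturbed) - identity).item())
```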

"You can always modify an image successfully," emphasizes Goldstein.

The real challenge for LowKey is ensuring that, to the human eye, the modified image still bears an acceptable resemblance to the original.

"In some cases, we achieve the objective with very small disturbances, but in others the changes are very large," admits the expert.

Black boxes


The inner workings of facial recognition systems such as Amazon Rekognition or the Microsoft Azure Face Recognition API, against which LowKey has been tested, are not accessible to the general public, since they are the private property of the companies that developed them.

This has forced the researchers to develop their adversarial technology blindly, basing their attacks on publicly available information about the latest tools used to identify individuals.

This is also the reason for the variability in the results obtained, but the team continues to work on improving this front.
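
Attacking a system you cannot inspect usually takes the form of a transfer attack: perturb the image against an ensemble of surrogate models you do have access to, and count on the effect carrying over to the closed commercial system. Below is a minimal sketch with toy surrogates; a real attack would use publicly available face recognition networks.

```python
# Ensemble transfer attack: perturb against several surrogate encoders at once
# so the change is more likely to fool an unseen, closed system.
import torch

torch.manual_seed(0)
surrogates = [
    torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
    for _ in range(3)
]
original = torch.rand(1, 3, 64, 64)
identities = [m(original).detach() for m in surrogates]

perturbed = original.clone()
epsilon, step = 0.05, 0.01
for _ in range(20):
    perturbed.requires_grad_(True)
    # Push the features away from the original under every surrogate at once.
    loss = sum(torch.norm(m(perturbed) - i) for m, i in zip(surrogates, identities))
    loss.backward()
    with torch.no_grad():
        perturbed = perturbed + step * perturbed.grad.sign()
        perturbed = original + (perturbed - original).clamp(-epsilon, epsilon)
        perturbed = perturbed.clamp(0, 1)
```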

"For now, LowKey works with individual photos and achieves less visible modifications with small, low-resolution images," says Valeria Cherepanova.

Can we expect the tool to keep working as these companies modify their facial recognition systems?

One happy consequence of having designed LowKey blindly, Goldblum explains, is that its effects are quite general: "As neural networks have become more complex, they have also become easier to fool. If companies are not specifically trying to defend themselves against LowKey, but simply to build more and more advanced recognition systems, I see no reason to think they will become less vulnerable to deception."

In addition to making an easy-to-use tool for protecting your photos available to the public, LowKey pursues the goal of raising awareness of how vulnerable the content we upload to the internet is.

If we use this tool, we protect new content, but we cannot take back the information that has already been harvested from previous images.

"A lot of people don't have good practices when it comes to social media and privacy. They just upload lots of images of themselves in different places. And as a result, they offer a lot of information that can be used to identify them," Goldstein reasons.

In this sense, a safer approach would be to limit the number of photos we use as much as possible: if we have three accounts on different platforms, use the same photo on all three.

"The best we can get out of LowKey is a better understanding of the issue and how personal information is being used."

MORE INFORMATION

  • How can facial recognition systems identify me?

  • The silent invasion of facial recognition


Source: elparis
