A masked man at a January 2020 protest in Cardiff against the use of facial recognition cameras by police. Matthew Horwood / Getty Images
"The police will have no choice but to use facial recognition in conjunction with whatever other technology is at their disposal." That is the opinion of Fraser Sampson, the new commissioner responsible for overseeing the use of surveillance cameras and other biometric monitoring measures in the United Kingdom. If criminals turn to increasingly sophisticated technologies, the security forces must not be left behind; artificial intelligence will be "increasingly necessary in police work." Sampson's position, reported by the
, contrasts with that of his predecessor, Paul Wiles, who was far more skeptical of these systems, and clashes head-on with the vision of the European Union, which considers facial recognition (which cross-references camera footage with databases of suspects) a high-risk technology that may therefore be used only in a handful of exceptional cases.
The draft European regulation on artificial intelligence, presented by the Commission on April 21 and pending ratification by the European Parliament and the Member States, restricts the use of "remote biometric identification systems" to cases expressly authorized by the EU or the Member States: when they are used for "the prevention, arrest or investigation of serious crimes or terrorism," or when their application is limited to a certain period and the data is then deleted.
Sampson, appointed in March, is the first commissioner named in the Brexit era. According to the official website, the job of the independent Commissioner for Biometrics and Video Surveillance is to monitor police use of DNA samples, fingerprints and records and to promote the appropriate use of video surveillance systems. In a report presented in the summer of 2019, Wiles, who held the position from 2016 until two months ago, pointed out that the lack of specific regulation for this technology left it to the police to decide at their discretion when the public benefit outweighs the "significant intrusion into individual privacy" involved in being scanned and recorded.
The current commissioner is more in favor of giving the initiative to the police. In his opinion, the legal framework to be developed in the future should "allow public bodies to reasonably use all available means to relieve them of regulatory responsibility," he told the
The UK is one of the most surveilled countries in the world. There are no official figures on how many surveillance cameras dot the streets of major British cities; some studies put the number at around four million. In London alone there are an estimated 500,000 cameras, according to data provided by the Metropolitan Police (other sources put the figure even higher), placing the capital among the world's top 20 most-watched cities. The rest of the cities on that list, with the exception of Hyderabad in India, are Chinese.
The London Metropolitan Police use facial recognition on city streets to scan pedestrians for suspects in serious crimes such as knife attacks or sexual exploitation, according to the
This technology has also been tested in Cardiff by the South Wales Police, although a Cardiff University study found that of the 2,900 possible suspects identified by the system, 2,755 were false positives.
An activist wearing makeup designed to foil facial recognition by Moscow's cameras is arrested during a protest against the video surveillance system, on February 9, 2020. IVAN KRASNOV (RTVI)
A controversial technology
The dilemma of giving up individual privacy to gain collective security has been a classic of political science since the time of Thomas Hobbes. The digital age, whose advances are outpacing legislators, is tipping the balance in favor of those who advocate hypersurveillance. Under what circumstances may the police resort to images of citizens recorded on the street? How is the confidentiality of this information guaranteed? How long should the recordings be kept? What happens if the algorithm fails?
"There is a risk of erroneous arrests," Wiles told
in 2019 after publishing a report on the performance of these systems. "What people have to remember is that if someone is arrested and it is later proven that it was the wrong person, the arrest remains in the system." For these and other reasons, Wiles concluded in his report that the deployment of this technology in the United Kingdom was chaotic and taking place without any regulation to curb its misuse.
The United States, a country where virtually any adult with no prior record can buy a firearm, is far more tolerant of facial recognition than Europe. While a cautious approach prevails in the EU for the moment, in the US this technology has been part of the police arsenal for years. Civil rights activists and organizations are working to ban the use of systems that have been shown to be racially biased. Robert Williams, the first known case of a wrongful arrest caused by an algorithm, has launched a legal battle against the Detroit police that could precipitate changes in how law enforcement applies the technology.
China, the paradigm of Big Brother
At the forefront of artificial intelligence, China is a gigantic laboratory for testing the effects of applying facial recognition without reservation. Dragonfly Eye, the system developed by the company Yitu and used for years by cities such as Shanghai, can easily recognize anyone in its database of 1.7 billion people. That figure includes the entire population of the country plus 320 million foreigners, whose biometric data are recorded as they pass through Chinese airports.
The army of cameras (hundreds of millions, according to some sources) scattered throughout large Chinese cities allows the authorities to identify suspects quickly. In 2017, a BBC reporter tested the facial recognition system in Guiyang, a southern city of four million people; the system took just seven minutes to find him.