
To detect terrorist content: Facebook wants to analyze police bodycam footage

2019-09-18T10:28:32.785Z


Facebook struggles to recognize violence in live video. The company now wants to train its AI software with footage from police body cameras in order to detect gun violence faster.



In a blog post, Facebook has announced a number of measures intended to counter violence and terrorism. Among other things, the company plans to analyze footage from police bodycams in order to better recognize livestreams of terrorist attacks in the future. The announcement came shortly before a US congressional hearing on Wednesday at which Facebook was to testify on its handling of "mass violence, extremism and digital responsibility".

Starting in October, police units such as the London Metropolitan Police and US police forces will provide Facebook with footage from their firearms training exercises, which will then be used to train AI software. The aim is for violence and attacks to be detected automatically in livestreams - during the terrorist attack in Christchurch, New Zealand, the attacker's livestream was not discovered in time.
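Facebook has not published details of its model or training pipeline, so the following is only a rough illustration of the kind of supervised-learning step such footage enables: a minimal sketch that fine-tunes a generic image classifier on labeled frames, with random tensors standing in for bodycam video. All names, sizes and parameters here are illustrative assumptions, not Facebook's actual system.

```python
# Hypothetical sketch only: fine-tune an off-the-shelf image backbone to flag
# "violent" vs. "benign" frames. Random tensors stand in for labeled frames
# extracted from training footage.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Placeholder dataset: 64 fake RGB frames (224x224) with binary labels.
frames = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))  # 0 = benign, 1 = violent (placeholder)
loader = DataLoader(TensorDataset(frames, labels), batch_size=16, shuffle=True)

# Generic image backbone with a new two-class head.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):  # tiny demo run
    for batch, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch), target)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

A production system for livestreams would involve temporal models, audio signals and vastly more data; this only shows the basic training step that labeled first-person footage makes possible.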

"With this initiative, we want to improve the detection of real, personal footage of violent events," says Facebook's blog post, "and avoid misidentifying other types of footage, such as fictional content from movies or video games."

During the Christchurch attack, the attacker documented in a 17-minute livestream how he drove with his weapons to a mosque, stormed it and shot people inside and around the building. Some 200 users watched the attack on Facebook in real time - yet the video was first reported by users only twelve minutes after the stream had ended.

Variants difficult to detect

Facebook removed around 1.5 million videos of the attack within the first 24 hours. According to the company, more than 1.2 million of those clips were blocked directly at upload. "We removed and hashed the original Facebook Live video so that other shares that are visually similar to the video are recognized and automatically removed from Facebook and Instagram," Facebook explained. "Some variants, such as screenshots, were harder to detect, so we also used additional recognition systems, including audio."
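Facebook does not say which hashing technique it uses; a common family for catching visually similar copies is perceptual hashing. As a minimal sketch, assuming Pillow is installed, here is a toy "average hash": a frame is shrunk to an 8x8 grayscale thumbnail and each pixel becomes one bit depending on whether it is brighter than the mean, so near-duplicate frames differ in only a few bits.

```python
# Toy perceptual hash - illustrative only, not Facebook's actual method.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit 'average hash' of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        # One bit per pixel: 1 if brighter than the frame's mean brightness.
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Hypothetical usage with two frame files:
# if hamming_distance(average_hash("a.png"), average_hash("b.png")) <= 10:
#     ...  # treat as a visually similar copy
```

Re-encodes and minor crops flip only a few bits and are still caught; screenshots with added interface elements shift many more, which fits the post's point that such variants were harder to detect and required additional signals such as audio.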

Police, for their part, hope that analysis of the bodycam data could reveal terrorist attacks, or preparations for them, while they are still underway: "The technology that Facebook is seeking to develop could help identify firearms attacks in their early stages and potentially assist police in their response to such incidents around the world," said Neil Basu, head of the Metropolitan Police's counter-terrorism unit.

Members of the Firearms Command regularly train to respond to a range of scenarios, from terrorist attacks to hostage situations, on land, on public transport and on water, according to a police press release. The footage they provide therefore shows a wide variety of shooter perspectives.

The Metropolitan Police's bodycam recordings are also to be made available to the British Home Office, which will in turn share them with other technology companies working on software to curb livestreams of firearms attacks.

Expanded definition of terrorism

Facebook also expanded the definition of terrorism in its guidelines this week. It is no longer limited to attacks carried out for political or ideological reasons, but now also covers acts of violence directed in particular against civilians with the intent to intimidate and threaten them.

For a long time, domestic terrorism and right-wing extremist content from racist groups such as white supremacists was not a priority for Facebook. The company focused primarily on international terrorist organizations such as the Islamic State (IS) and al-Qaida. According to Facebook, more than 26 million pieces of such content have been removed over the past two years.

In March, after the terrorist attack in New Zealand and amid public and political criticism, Facebook explicitly banned content from right-wing extremist racist groups and began removing it from the platform.

"We have banished more than 200 White Supremacists organizations from our platform, and we use a combination of AI and human expertise to remove content that these organizations praise or support," said Facebook. Since mid-2018, according to the Group also automatic filter software is also used against such content.

Source: Spiegel
