Social media platforms pledged to curb extremism. The Buffalo massacre puts them to the test

2022-05-17T19:14:45.092Z


Following the shooting in Buffalo, social media scrambled to stop a video from spreading. Will they be able to curb extremism online?


New York (CNN Business) --

In the aftermath of Saturday's mass shooting in Buffalo, New York, Big Tech platforms scrambled to stop the spread of a video of the attack filmed by the suspect and a document allegedly also produced by him and where he outlines his beliefs.

Major social media platforms have tried to improve how they respond when this type of content is shared since the 2019 mass shooting in Christchurch, New Zealand, which was broadcast live on Facebook.

In the 24 hours after that attack, Facebook said it had removed 1.5 million copies of the video.

Experts on online extremism say such content can act as far-right terrorist propaganda and inspire others to carry out similar attacks.

The Buffalo attacker was directly influenced by the Christchurch attack, according to the document he allegedly shared.

  • Online Posts Reveal Alleged Buffalo Shooter Spent Months Planning Racist Supermarket Attack

The stakes in addressing the spread of such content quickly are significant.

"This fits a pattern we've seen over and over again," said Ben Decker, CEO of digital research consultancy Memetica and an expert on radicalization and extremism online.

"At this point we know that consumption of these videos leads to copycat mass shootings."

Still, social media companies face challenges in responding to users who post a barrage of copies of the Buffalo shooting video and document.

The big tech response

Saturday's attack was broadcast live on Twitch, an Amazon-owned video streaming service that's especially popular with gamers.

Twitch said it removed the video two minutes after the violence began, before it could be widely viewed, but not before other users downloaded it.

Since then, the video has been shared hundreds of thousands of times on major social media platforms and has also been posted on lesser-known video hosting sites.

Spokespeople for Facebook, Twitter, YouTube and Reddit told CNN they have banned sharing of the video on their sites and are working to identify and remove copies of it.

(TikTok did not respond to requests for comment on its response.)

But companies appear to be struggling to contain the spread and manage users who are looking for loopholes in their content moderation practices.

CNN spotted a link to a copy of the video circulating on Facebook late Sunday.

Facebook included a warning that the link violated its Community Standards, but still allowed users to click and view the video.

Facebook's parent company, Meta, said it had removed the link after CNN inquired about it.

  • The Buffalo massacre puts the focus on a web full of hate messages

Meta on Saturday called the event a "terrorist attack," prompting internal company teams to identify and delete the suspect's account, as well as begin removing copies of the video and document and links to them from other sites, according to a company spokesman.

The company added the video and document to an internal database that helps automatically detect and remove copies if they are re-uploaded.

Meta has also banned content that praises or supports the attacker, the spokesperson said.

The video was also hosted on a lesser-known video service called Streamable, where it was removed only after it had reportedly been viewed more than three million times and its link had been shared on Facebook and Twitter, according to The New York Times.

A Streamable spokesperson told CNN the company was "working diligently" to remove copies of the video "promptly."

The spokesman did not respond when asked how a video had reached millions of views before it was taken down.

Copies of the document allegedly written by the attacker were uploaded to Google Drive and other smaller online storage sites and shared over the weekend via links to those platforms.

Google did not respond to requests for comment about the use of Drive to disseminate the document.

Challenges in dealing with extremism on social media

In some cases, the big platforms appeared to make common moderation mistakes, such as removing English-language uploads of the video faster than uploads in other languages, according to Tim Squirrell, head of communications at the Institute for Strategic Dialogue, a think tank dedicated to tackling extremism.


But the big tech companies also have to deal with the fact that not all Internet platforms want to take action against this type of content.

In 2017, Facebook, Microsoft, YouTube and Twitter founded the Global Internet Forum to Counter Terrorism, an organization designed to promote collaboration in preventing terrorists and violent extremists from exploiting their platforms; it has since grown to include more than a dozen companies.

Following the Christchurch attack in 2019, the group pledged to prevent live streaming of attacks on its platforms and to coordinate to tackle violent and extremist content.

"Now, technically, that failed. It was on Twitch. Then it started being posted out there in the first 24 hours," Decker said, adding that the platforms have more work to do in terms of effectively coordinating to remove harmful content during crisis situations.

Still, the work done by the major platforms since Christchurch meant their response to Saturday's attack was quicker and more robust than the reaction three years ago.

But elsewhere on the Internet, smaller sites like 4chan and the Telegram messaging platform provided a place where users could congregate and coordinate to repeatedly re-upload the video and document, according to Squirrell.

(For its part, Telegram says it "expressly prohibits" violence and is working to remove footage of the Buffalo shooting.)

"A lot of the threads on the 4chan message board were just people asking for the stream over and over again, and once they got a seven-minute version, they'd repost it over and over again" on larger platforms, Squirrell said.

As with other content on the Internet, videos like the one of Saturday's shooting are often quickly manipulated by extremist online communities and incorporated into memes and other content that may be more difficult for mainstream platforms to identify and remove.

Like Facebook, YouTube, and Twitter, platforms like 4chan rely on user-generated content, and are legally shielded from liability for much of what users post (at least in the US) by a law called Section 230. But while the major Big Tech platforms are incentivized by advertisers, social pressures, and users to stand up to harmful content, the smaller, fringe platforms are not motivated by a desire to protect ad revenue or attract a broad user base.

In some cases, they want to be online homes for speech that would be moderated elsewhere.

"The consequence of this is that you can never finish the game of whack-a-mole," Squirrell said.

"There's always going to be somewhere, somebody circulating a Google Drive link or a Samsung cloud link or whatever else that allows people to access this (...). Once it's in the ether, it is impossible to remove it all."

Source: cnnespanol
