Kicking toxic leaders off social media reduces the spread of hate online

2023-06-05T20:01:02.188Z



A Facebook study shows that deleting a hundred 'hater' accounts had a positive impact on their audience


Controlling hate speech on the internet is one of the biggest challenges of the information age. No one denies this, but nor is it clear how to do it effectively. Removing those who spread toxic content is one of the options chosen by some platforms. Now, a study conducted internally by Facebook, covering 26,000 of its users, shows that excluding the leaders of extremist communities is an efficient way to dismantle hate speech on social networks, especially in the long term. The removal of a hundred such accounts had a large impact, because denying a megaphone to the key members leads to an improvement of the network as a whole.

Some previous studies had pointed out that excluding these harmful profiles on platforms such as Twitter, Reddit or Telegram helped reduce unwanted activity, including such speech in general. But they did not demonstrate the cause-and-effect relationship shown by this study, carried out by researchers at Meta, Facebook's parent company, and published today in PNAS.

To reach these conclusions, Daniel Robert Thomas and Laila A. Wahedi analyzed the effects on the audiences of six communities whose most active representatives were expelled from the platform. Specifically, the Meta researchers wanted to understand to what extent these audiences continued to view, publish and share hateful content, or to interact with other profiles, after their leading figures ceased to exist. The results show that, on average, all of these factors decreased. "They reduce their consumption and production of hateful content, and engage less with other audience members," the authors write in the study.

Following the profile exclusions, users went on to see almost half as much hateful content daily. This means that, on average, those who had seen around five posts with toxic content came to see fewer than three. In addition, by ceasing to interact with members of the same toxic community, users were exposed to other types of content, groups or communities that were not violent in nature. None of the study's data can be linked to the original user accounts, owing to Facebook's privacy protection terms.

The audience most loyal to the organizations that spread hate may look for other sources after the expulsion of the professional haters, but this reaction is short-lived and fades within just two months. The audience furthest from those leaders reduces its interaction with this content from the start. According to the study, this is positive, because that is the group most at risk of being drawn in by toxic communities.

Overall, the results suggest that selective deletion can lead to "healthier" social networks. "Removal of leaders and network degradation efforts can reduce the ability of hate organizations to operate successfully on the internet," they explain.


It is not easy, in any case. After being excluded from popular platforms, those profiles could simply create new ones and try to rebuild their network. They could also migrate to other platforms. In addition, other toxic organizations still in place could take over their position and co-opt their supporters, who would continue to be exposed to harmful content. To make the removal strategy more effective, the authors propose that several profiles be deleted at once, because this "prevents organizations from rebuilding their networks," making it difficult for members to find each other again, since no remaining accounts are left to coordinate those returning to the platform.

Hate speech and toxic speech

If this decision is left to the platforms, will they really want to carry it out? Sílvia Majó-Vázquez, associate researcher at the Reuters Institute for the Study of Journalism at the University of Oxford and professor at Vrije Universiteit Amsterdam, explains that content moderation on social networks must be "done seeking a balance between freedom of expression and the preservation of other rights that may be damaged," so it is essential to differentiate between hate speech, toxic speech and incivility.

In conceptual terms, as Majó-Vázquez explains, incivility is the mildest level, covering informal language that includes disrespect or sarcasm. When the manifestation becomes more extreme and "others are scared away from participating in a conversation," toxic speech emerges, which can turn violent. "From a democratic point of view, these are very harmful, because they prevent the democratic ideal of public deliberation," she says by email.

According to this expert, the suspension of profiles should take these conceptual dimensions into account and rely on manual mechanisms "that can guarantee that freedom of expression is being preserved." This criterion must also apply to political figures. "We must carry out an exercise like the one we would do outside the networks, in which the right to freedom of expression of whoever sends the message is balanced against the preservation of the other fundamental rights of the audience. The automated mechanisms for deleting messages and suspending accounts must be continuously reviewed, and evaluation of those messages by experts must be prioritized, as some platforms already do with external advisory boards for the most relevant cases," she stresses.

One of her studies, carried out at the Reuters Institute in seven countries, has shown that the relationship between toxicity and engagement is not always positive and that each case is different: it depends on the topic of each discussion and how severe the content is. In the context of the pandemic, and analyzing Twitter, the results showed that the toxicity and the popularity of toxic content do not go hand in hand. "In fact, we see the most toxic tweets losing popularity with the audience, whereas messages with low levels of toxicity see their popularity grow," says Majó-Vázquez. It is therefore not possible to say whether this relationship is the result of a decision by the audience "not to reward toxicity" or of the moderation carried out by the platform. "It's something we can't answer with the data from our work, but this result challenges the belief that toxic content is always the most popular."


Source: El País
