The Limited Times


No matter the platform or the algorithm, it is humans who make social networks toxic

2024-03-20T21:02:45.676Z

Highlights: A new study analyzes 500 million messages spanning more than three decades to better understand bad behavior on the internet. The platforms analyzed, all with messages in English, are Facebook, Gab, Reddit, Telegram, Twitter, YouTube, Usenet (a forum system created in 1979) and Voat (an American news aggregator). The authors define toxicity as “a rude, disrespectful, or unreasonable comment that could cause someone to leave a discussion.” Toxicity does not scare users away from a network.


A new study published in 'Nature' analyzes 500 million messages over three decades to better understand bad behavior on the internet


Social media changes over the years, but toxic human behavior persists.

A persistent debate in academia today is how to define the impact of social networks on our lives and democracies, and in particular whether they have contributed to making public debate more toxic.

A new study published in Nature isolates various behaviors to try to better understand where online toxicity begins and ends. It analyzes more than 500 million threads, messages and conversations across eight platforms over 34 years.

The result is that toxicity is much more closely linked to humans than to any specific platform, and has not emerged only now as a result of social networks: “The study indicates that despite changes in networks and social norms over time, certain human behaviors, including toxicity, persist,” says Walter Quattrociocchi, professor at Sapienza University (Rome) and co-author with other academics from his university and from City University and the Alan Turing Institute in London.

“This implies that toxicity is a natural result of online discussions, regardless of the platform.”

The platforms analyzed, all with messages in English, are Facebook, Gab, Reddit, Telegram, Twitter, YouTube, Usenet (a forum system created in 1979) and Voat (an American news aggregator).

The authors have defined toxicity as “a rude, disrespectful, or unreasonable comment that could cause someone to leave a discussion.”

Toxicity does not scare users away

Another finding of the study, which runs counter to what has usually been assumed about social networks, is that toxicity does not drive users away from a platform.

Being a human reflex, it appears to be taken as normal in an environment where users cannot pick up other cues of attitude, such as gestures or tone of voice.

“The study's findings challenge the common belief that toxicity diminishes a platform's appeal,” says Quattrociocchi.

“By showing that user behavior in toxic and non-toxic conversations follows almost identical participation patterns, it suggests that the presence of toxicity may not deter participation as commonly assumed.”

Academic research on online behavior faces the difficulty of finding good data with which to distinguish which behavior is genuinely human and which is caused by the design of the network and its famous algorithms.

This work on toxicity attempts to partially unravel that difference.

The result is that toxicity on social networks is a product more of human nature than of technology: “Toxicity in online conversations does not necessarily stop people from participating or promote interaction. It is more a reflection of human behavior itself, seen across platforms and contexts,” says Quattrociocchi.

The study also found that polarization and diversity of opinions may contribute more to hostile online discussions than toxicity itself.

Users may end up prolonging a conversation and disrespecting a political rival because of conflicting opinions rather than because they have read rude or hostile comments.

“It can be concluded that polarization, by promoting debates between users with differing opinions, tends to reinforce participation on platforms,” says Quattrociocchi.

“These interactions generated by controversy and debate may have a greater impact on maintaining user activity than toxicity,” he adds.

This finding may help platforms approach content moderation differently and better filter out toxic content: “Systems could be designed that encourage healthy debates without falling into toxicity, and moderation could be sensitive to the complexities of human behavior,” explains the Italian researcher.

Although the study points out that a certain amount of toxicity is linked to human behavior on social networks, this does not imply that all online interactions are doomed to be toxic or that efforts to mitigate toxicity are useless.

“The most effective way to reduce online toxicity is to make people aware of the behavior we have online, and for that we need cognitive media training above all,” says Quattrociocchi.

You can follow EL PAÍS Tecnología on Facebook and X or sign up here to receive our weekly newsletter.

Source: elpais
