The Limited Times


Artificial intelligence: Facebook's researchers doubt its hate detection capabilities

2021-10-18T14:18:27.949Z


Facebook's AI deletes only a fraction of all hate posts and confuses footage of a car wash with a shooting, according to a media report. The company says this interpretation falls short.



Facebook service provider for content moderation in Germany: »Reduce the distribution of content« (archive image)

Photo: Soeren Stache / dpa

The "Wall Street Journal" has continued its series of articles on the "Facebook files" and has questioned the efficiency of the group's automated hatspeech recognition systems. Citing, among other things, internal company documents handed over by whistleblower Frances Haugen, the newspaper writes: "Artificial intelligence has only minimal success in removing hate speech, depictions of violence and other problematic content."

According to the report, documents from 2019 contain examples of specific difficulties.

These include problems the detection systems had in identifying recordings of bloody cockfights and car accidents that violate Facebook's community standards.

Videos of shootings were also not always recognized, while harmless clips from a car wash were incorrectly flagged as a shooting.

Above all, the documents contain figures that at first glance seem to contradict Facebook's account of its now largely automated clean-up work.

In 2019, according to the report, a Facebook researcher estimated that AI deleted only two percent of all hateful content viewed by anyone on Facebook.

In the summer of this year, another Facebook team put the figure at between three and five percent, and at only 0.6 percent of all content that violates Facebook's guidelines on violence and incitement to violence.

Deleted content is the wrong yardstick, says Facebook

But Facebook rejects the newspaper's account. What matters, the company argues, is not how much the AI deletes, but what Facebook calls »prevalence«: how much problematic content users actually get to see. That figure is 0.05 percent, meaning that out of 10,000 views, only five involve hate speech. AI plays a large part in this result because it helps identify prohibited content before users report it. But only clear-cut cases are deleted automatically, writes Guy Rosen, Facebook's Vice President of Integrity, in a blog post.
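The arithmetic behind the prevalence metric described above can be sketched in a few lines (the function name is illustrative; the figures are the ones cited in the article):

```python
def prevalence(hate_views: int, total_views: int) -> float:
    """Share of all content views that involve hate speech.

    This is the metric Facebook argues for: it counts what users
    actually see, not what the AI deletes.
    """
    return hate_views / total_views

# The article's figure: 5 hate-speech views per 10,000 views.
print(f"{prevalence(5, 10_000):.2%}")  # → 0.05%
```

Note that prevalence says nothing about the deletion rates quoted earlier (two to five percent): a low prevalence can come from deletion, from reduced distribution, or simply from hateful posts being seen rarely.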

"We have to be sure that it is hatespeech before we remove anything," it says. “If something could be hatspeech but we're not sure enough, our technology can reduce the spread of the content. Or it no longer recommends groups, pages, and users who regularly post such content. We also use technology to mark content for manual review. ”In short:“ Focusing only on deleted content is the wrong way to evaluate our fight against hatspeech. ”

The newspaper describes Facebook's approach of making dubious content less visible, but writes: "The accounts that post this material get away with it." It also accuses Facebook of embellishing its figures. In 2019, the company not only reduced the number of hours its external moderation teams worked, but also "started using an algorithm that resulted in a greater proportion of user reports being ignored because the system considered a violation to be unlikely." In addition, Facebook made the complaint process more cumbersome, which significantly reduced the number of reported items.

A Facebook spokesman told the Wall Street Journal that all of this was intended to increase the system's efficiency.

The restructuring of the complaint process has since been partially reversed.

pbe

Source: spiegel

