Facebook denies weak performance on hateful content

Facebook has denied allegations that its algorithms only remove a small number of posts containing hate speech.

The company uses automated systems, alongside other methods, to identify and take down such posts.

The Wall Street Journal (WSJ) reported that leaked documents suggest only a small percentage of offending content is actually removed by the technology.

Facebook, however, insisted it has seen recent success in reducing hate speech on its platform.

The leaked internal documents seen by the WSJ include findings from a team of Facebook employees who allegedly concluded that the technology successfully removed only 1% of posts that break the company's own rules.

In March 2021, an internal assessment allegedly found that Facebook's automated takedowns were removing posts that generated only an estimated 3 to 5% of total views of hate speech.

Facebook is also alleged to have cut the amount of time that human reviewers spend on checking hate speech complaints made by users.

This change, reported to have occurred two years ago, "made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics", the WSJ alleged.

Facebook firmly denied that it is failing on hate speech.

Guy Rosen, Facebook's vice-president of integrity, wrote in a blog post that a different metric should be used to evaluate Facebook's progress in this area.

Mr Rosen pointed out that the prevalence of hate speech on Facebook - the amount of such material viewed on the site - has fallen as a percentage of all content viewed by users.

Hate speech currently accounts for 0.05% of content viewed, or five views in every 10,000, and has fallen by 50% in the last nine months, he said.

"Prevalence is how we measure our work internally, and that's why we share the same metric externally," he added.

Mr Rosen also noted that more than 97% of removed content is proactively detected by Facebook's algorithms - before it is reported by users who have seen it.
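As a rough illustration of how these two figures are computed, here is a minimal sketch in Python. The function names and sample numbers are assumptions for illustration only, not Facebook's internal code or data.

# Illustrative only: made-up numbers, not Facebook data.

def prevalence_per_10k(hate_views, total_views):
    # Views of hate speech per 10,000 content views ("prevalence").
    return hate_views / total_views * 10_000

def proactive_rate(auto_flagged, total_removed):
    # Share of removed posts flagged by automated systems
    # before any user reported them.
    return auto_flagged / total_removed * 100

# 500 hate-speech views out of 1,000,000 total views:
print(prevalence_per_10k(500, 1_000_000))  # 5.0 per 10,000, i.e. 0.05%

# 97 of every 100 removals detected automatically first:
print(proactive_rate(97, 100))  # 97.0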

The latest story about hate speech is just one in a series of similar articles about Facebook published by the WSJ in recent weeks.

Frances Haugen identified herself as the source of several leaks

The stories are largely based on leaked internal documents provided to the newspaper by former Facebook employee Frances Haugen. They refer to a series of content moderation difficulties, from anti-vaccine misinformation to graphic videos, as well as the experiences of younger users on Instagram, which is owned by Facebook.

On Monday, Nick Clegg, Facebook's vice-president of global affairs and the former UK deputy prime minister, added his voice to the company's pushback.

In a blog post, he argued that "these stories have contained deliberate mischaracterisations of what we are trying to do, and conferred egregiously false motives to Facebook's leadership and employees".

A WSJ spokesman told the BBC: "None of Facebook's defences have cited a single factual error in our reporting.

"Instead of attempting to aggressively spin, the company should address the troubling issues directly, and publicly release all the internal research we based our reporting from, that they claim we misrepresented."
