Facebook struggles to get machines to stamp out hate speech

Protesters clash over anti-Islamophobia bill in Montreal, March 4, 2017. Photo by the Canadian Press

Facebook is struggling to catch much of the hateful content posted on its platform because the computer algorithms it uses to track it down still require human assistance to judge context, the company said Tuesday.

While artificial intelligence is able to sort through almost all spam and content glorifying al-Qaeda and ISIS and most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes like race, ethnicity, religion, or sexual and gender identity, the company said in its first ever Community Standards Enforcement Report.

The company defines hate speech as a direct attack using “violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”

Facebook has faced a storm of criticism over what critics call its failure to stop the spread of misleading or inflammatory information on its platform ahead of the U.S. presidential election and the Brexit vote to leave the European Union, both in 2016. It has made a series of changes, including new policies limiting political advertisements.

In Canada, it announced it was suspending Victoria-based tech firm AggregateIQ, which was involved in the Brexit campaign. Yesterday, it disclosed that it had suspended roughly 200 apps out of thousands it has investigated, as it continues to try to contain the damage stemming from a period before 2014 when apps could suck up large amounts of personal information.

On Tuesday, Facebook said it took action on some 2.5 million pieces of hateful content in the first three months of 2018, up from 1.6 million in the last three months of 2017. But of the more recent total, only 38 per cent was flagged by Facebook before users reported it (an improvement on the 23.6 per cent in the prior three months).
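For readers who want the arithmetic behind that figure, the short Python sketch below shows how a "proactive rate" of this kind can be derived from the totals cited above; the function name and the calculation are illustrative only and are not drawn from Facebook's own methodology.

```python
# A minimal sketch (not Facebook's actual methodology) of the "proactive rate":
# the share of actioned content that Facebook flagged itself before any user
# reported it.

def proactive_rate(flagged_by_facebook: int, total_actioned: int) -> float:
    """Fraction of actioned content found by Facebook before user reports."""
    return flagged_by_facebook / total_actioned

# Figures from the article: roughly 2.5 million pieces of hate speech actioned
# in the first quarter of 2018, of which 38 per cent were flagged proactively.
total_q1_2018 = 2_500_000
found_proactively = round(total_q1_2018 * 0.38)   # roughly 950,000 pieces

print(f"Proactive rate: {proactive_rate(found_proactively, total_q1_2018):.0%}")
# -> Proactive rate: 38%
```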

"For hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams," said Guy Rosen, the company’s vice-president of product management, in a statement posted online announcing the release of the report.

By comparison, the company was first to spot more than 85 per cent of the graphically violent content it took action on, and almost 96 per cent of the nudity and sexual content.

“Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue,” said Rosen.

“Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards,” the company added in the report. “We tend to find and flag less of it, and rely more on user reports, than with some other violation types.”

The report did not directly cover the spread of false news, which Facebook has previously said it is trying to stamp out by increasing transparency around who buys political ads, strengthening enforcement and making it harder for so-called “clickbait” to show up in users’ feeds.

The report covers the six months from October 2017 to March 2018 and also addresses graphic violence, nudity and sex, terrorist propaganda, spam and fake accounts.

The social media giant promised the report will be the first in a series seeking to measure how prevalent violations of its content rules are, how much content it removes or otherwise takes action on, how much of that content it finds before users flag it, and how quickly it takes action on violations.

The report did not include details on that last metric; Facebook said it will appear in future versions once its methodology is finalized.

The new report was released in conjunction with Facebook’s latest Transparency Report, which said that government requests for account data worldwide increased by four per cent in the second half of 2017 compared with the first half.
