Facebook said that hate speech violations in the second quarter surged.
Facebook and Instagram users are increasingly posting hateful content despite backlash from advertisers.
Facebook removed 22.5 million posts during the second quarter that violated its rules against hate speech, more than double the number from the first quarter and nearly quadruple the total from the same period in 2019.
Meanwhile on Instagram, the number of hateful posts removed more than quadrupled to 3.3 million in the second quarter from 800,000 during the preceding one. Facebook didn’t report the number of hate speech violations on Instagram during the second quarter of last year.
“This change is largely driven by an increase in proactive technology detection,” said Guy Rosen, Facebook’s vice president of integrity, attributing the rise to better technology for detecting and removing posts, including in more languages.
The news comes as Facebook faces increasing scrutiny over the hateful and discriminatory content that users post. In July, more than 1,000 advertisers, including large companies like Unilever and Coca-Cola, stopped buying ads on Facebook in an effort to force the company to do more to police its service. Facebook has also been criticized for allowing inflammatory posts by President Trump to remain on its service. Since those complaints, Facebook has tweaked its rules for politicians: it now flags violating posts with a label saying the post is considered newsworthy but breaks its policies.
In its ongoing efforts to show that it aggressively fights hate, Facebook said that its algorithms are getting better at identifying violations before users complain. The company claimed that its technology flagged nearly 95% of hateful posts on Facebook before users reported them, up from 89% the previous quarter, and that it proactively identified 84% of them on Instagram, up from 45%.
Facebook also announced a new policy against users posting images of people in blackface, though enforcement of the rule has not yet begun.
Instagram also saw a rise in adult nudity and sexual activity. The service removed 12.4 million such posts during the second quarter, up from 8.1 million in the prior quarter. Posts involving bullying and harassment, an area that Facebook and Instagram still have trouble policing with technology, rose to 2.3 million from 1.5 million in the first quarter. However, posts that exploit children declined to 479,400 during the second quarter from 1 million in the preceding period, a decrease likely caused, at least in part, by fewer human reviewers focusing on the problem.
Because of the coronavirus pandemic, many of Facebook’s content reviewers started working from home. In response, the company shifted some of the kinds of content they reviewed and increased the use of artificial intelligence to police its service.
As for misinformation about the coronavirus, Facebook said it removed more than 7 million posts across Facebook and Instagram. It also added misinformation labels to 98 million Facebook posts that made false claims about the coronavirus but did not remove them entirely.
Facebook also said that it had removed 1.5 billion fake accounts during the second quarter, bringing its total for the first half of the year to 3.2 billion fake accounts, far more than its base of 2.7 billion monthly active users.