Facebook Says Covid-19 Shutdowns Hurt Its Ability to Fight Suicide, Self-Injury, Child Exploitation Content

Photo: Olivier Douliery (Getty Images)

Facebook said Tuesday it can’t moderate its own site or its subsidiary Instagram as effectively as it normally would for certain categories of rule violations during the novel coronavirus pandemic, and almost nobody got the chance to appeal its moderators’ decisions in the second quarter of 2020.


Per the latest version of its Community Standards Enforcement Report, which covers the Q2 period of April 2020 to June 2020, Facebook took action on 1.7 million pieces of content that violated its policies on suicide and self-injury in Q1, but just 911,000 in Q2. (The Q1 figure was itself down from 5 million in Q4 2019.) While enforcement against content violating Facebook’s policies on child nudity and sexual exploitation rose from 8.6 million to 9.5 million pieces, it fell sharply on Instagram, from around 1 million to just over 479,000. Enforcement of rules prohibiting suicide and self-injury content on Instagram also plummeted, from 1.3 million actions in Q1 to 275,000 in Q2. Instagram increased enforcement against graphic and violent content, but on Facebook that category fell from 25.4 million actions in Q1 to 15.1 million in Q2.

Facebook Vice President of Integrity Guy Rosen wrote in a blog post that the lower number of actions taken was the direct result of the coronavirus, as enforcing rules in those categories requires increased human oversight. The company’s long-suffering force of content moderators, many of whom are contractors, can’t do their job well or at all from home, according to Rosen:

With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram. Despite these decreases, we prioritized and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.


The report didn’t offer estimates of the prevalence of violent, graphic, or adult nudity and sexual activity on Facebook or Instagram, with Rosen claiming that the company “prioritized removing harmful content over measuring certain efforts.”

The Facebook appeals process, by which users can challenge a moderation decision, has also flatlined to near zero in every category. The company announced in July that, with moderators out of the office, it would give users who want to appeal “the option to tell us that they disagree with our decision and we’ll monitor that feedback to improve our accuracy, but we likely won’t review content a second time.”

Facebook took action on a far larger number of posts for violating rules against hate speech in Q2 (22.5 million, up from 9.6 million in Q1). It wrote in the report that automated machine learning tools now find 94.5 percent of the hate speech the company ends up taking down, something it attributed to support for more languages (English, Spanish, and Burmese). Enforcement against organized hate group content fell (4.7 million to 4 million) while that against terrorism content rose (6.3 million to 8.7 million).

Curiously, the amount of content restored without an appeal after being removed under the rules against organized hate and terrorism skyrocketed in Q2: Facebook restored 135,000 posts in the first category and 533,000 in the second. It doesn’t appear that Facebook processed a single appeal in either category in Q2, suggesting the company’s human moderators have their eyes turned elsewhere. Facebook does not release the internal data that would show how prevalent hate speech or organized hate groups are on the site.


Keep in mind that this is all according to Facebook, which has recently faced accusations that it turns a blind eye to politically inconvenient rule violations, as well as an employee walkout and an advertiser boycott pressuring it to do more about hate speech and misinformation. By definition, the report only shows the prohibited content that Facebook is already aware of. Independent assessments of the company’s handling of issues like hate speech haven’t always reflected Facebook’s insistence that progress is being made.

A civil rights audit released in July 2020 found that the company failed to build a civil rights infrastructure and made “vexing and heartbreaking” choices that have actively caused “significant setbacks for civil rights.” A 2019 United Nations report assessed Facebook’s reaction to accusations of complicity in the genocide of the Rohingya people in Myanmar as slow and subpar, in particular calling out the company for not doing enough to remove racist content from the site quickly or prevent it from being uploaded in the first place. (It’s possible that some of the surge in hate speech enforcement on Facebook is due to the introduction of more tools to detect it in Burmese, the majority language of Myanmar.)


It remains broadly unclear just how well Facebook’s AI tools are doing their job. Seattle University associate professor Caitlin Carlson told Wired that hate speech is “not hard to find” on the site and that the 9.6 million posts Facebook said it removed for hate speech in Q1 2020 seemed very low. In January 2020, Carlson and a colleague published the results of an experiment in which they assembled 300 posts they believed violated company standards and reported them to Facebook moderators, who took only about half of them down. Groups dedicated to conspiracy theories, such as the far-right QAnon movement, continue to run rampant on the site, with millions of members. Facebook, along with other social media sites, also played a major role in the spread of coronavirus misinformation this year.

Many of Facebook’s moderators have begun to return to work. According to VentureBeat, Facebook said it is working to determine how its metrics can be audited “most effectively” and is calling for an external, independent audit of its Community Standards Enforcement Report data, which it expects to begin in 2021.
