A Facebook logo on Station F in Paris, 2017.
Photo: Thibault Camus (AP)

France has approved a new law that will force tech giants to delete terrorism- and pedophilia-related content from their sites in as little as an hour or face fines of up to four percent of global revenue, Reuters reported on Wednesday.

Under the law, which also sets up a government hate speech monitoring office and a special prosecutor to pursue violators, platforms will have 24 hours to purge other forms of prohibited, “manifestly illicit” content before they risk a massive fine. It also establishes a process for French authorities to audit content moderation systems, per the Wall Street Journal. According to Reuters, France’s Justice Minister Nicole Belloubet said she believes “People will think twice before crossing the red line if they know that there is a high likelihood that they will be held to account.”

A French civil liberties group, La Quadrature du Net, protested the new rules as setting an unrealistic standard. The organization told Reuters in a statement that it makes no allowance for scheduling outside of traditional work hours and allows the police to hound platforms off the web for failing to act within an arbitrary timeline: “If the site does not censure the content (for instance because the complaint was sent during the weekend or at night), then police can force Internet service providers to block the site everywhere in France.”

Facebook recently announced a machine learning-assisted tool to identify and take down hateful memes, and said in a recent report that its automated systems removed 88.8 percent of hate speech in Q1 2020, up from 80.2 percent the prior quarter. The company also recently settled, for $52 million, a lawsuit brought by more than 11,000 current and former human moderators who said they experienced mental health problems as a result of continual exposure to hateful, violent, and misogynistic content on the job. Facebook additionally said that warning labels it implemented for misinformation related to the ongoing coronavirus pandemic reduced clickthroughs to the underlying content by 95 percent.

Twitter France public affairs chief Audrey Herblin-Stoop told Reuters that the company is implementing more AI tools as well, with one in two tweets removed by Twitter’s moderators now flagged by AI, up from one in five in 2018.

The inherent scale of platforms like Facebook, however, means that the use of AI-powered tools for moderation is an extremely complicated problem with no clear resolution—and commercial companies obsessed with growth at all costs have continually implemented these tools in a manner that fails to keep up with the deluge of prohibited content. Far from a sanitized internet, Facebook, Twitter, and Google’s YouTube all remain riddled with extremist content.

As the Journal noted, a similar German law that took effect in 2018 resulted in Facebook receiving a roughly $2.16 million fine, while the European Union’s executive branch is widely believed to be planning a Digital Services Act that could expose tech companies to far greater liability for user-generated content.

“We can no longer afford to rely on the goodwill of platforms and their voluntary promises,” French junior minister for digital affairs Cédric O told the country’s parliament on Wednesday, per the Journal. “This is the first brick in a new paradigm of platform regulation.”