On Tuesday, Twitter placed fact-checking labels on a pair of tweets by President Donald Trump, in which he made baseless claims that the use of mail-in ballots in the 2020 general election will enable large-scale fraud. Rebukes from the Trump administration swiftly followed, and Trump took to Twitter with a warning: “Republicans feel that Social Media Platforms totally silence conservatives voices. We will strongly regulate, or close them down, before we can ever allow this to happen.”

Two days later, the promised retribution arrived: an executive order taking aim at key protections for online platforms provided under Section 230 of the Communications Decency Act. Enacted in 1996, CDA 230 protects platforms like Twitter in two ways: it immunizes them against liability arising from most content posted by users, and against liability arising from their “good faith” efforts to remove or restrict access to “objectionable” content. The law distinguishes internet platforms from print publishers like newspapers or magazines, which can be held liable when they print legally actionable speech like defamation—even when it is sourced from others, as in letters to the editor. In the absence of CDA 230’s protections, social media platforms would, by their nature, be forced either to accept liability on a crushing scale (Twitter would be responsible for the hundreds of millions of tweets its users produce every day) or to take on the Sisyphean task of weeding out any user-generated content that might be actionable—a predicament recognized by the lawmakers and courts responsible for CDA 230’s development.

Trump’s executive order attempts, probably futilely, to attach significant new contingencies to these foundational protections. (It’s highly doubtful that the president is in a position to singlehandedly recast the decades of law and jurisprudence that form 230’s foundation.) The order contains a handful of provisions, but the most essential of them attempts to revisit and expand the notion of “good faith” content moderation practices as a precondition for legal immunity. It states that platforms undertaking “deceptive or pretextual” practices—like imposing restrictions on content for reasons beyond those laid out in their public terms of service—are not acting “in good faith,” and it tries to deny them CDA 230 protections broadly by establishing that a lack of good faith undermines all of the immunities the law offers. Through a chain of administrative steps, the order then suggests that the FCC should clarify the conditions for “good faith” accordingly. Likely anticipating the practically and constitutionally fraught quagmire such an approach would unleash, the order also throws in more oblique measures, including one attempting to dissuade federal agencies from purchasing ads on sites with “viewpoint-based speech restrictions.”

The administration’s intentions here are clear. Under the executive order’s preferred interpretation, a platform’s undeclared tendency towards “selective censorship” of conservative political speech could result in the catastrophic loss of its CDA 230 protections. But while the order is sweeping and vigorously partisan in its intentions, it reflects a critical orientation towards CDA 230 that has found champions on both sides of the aisle. The law has become a favorite target of policymakers wary of big tech’s expanding influence. Republicans like senators Ted Cruz and Josh Hawley have, in keeping with the executive order, castigated CDA 230 for supposedly giving left-leaning internet platforms free rein in what they say is the censoring of conservative speech. Democrats, including presumptive presidential nominee Joe Biden, have tended to take a different tack, arguing that the law’s protections limit platforms’ incentives to address the circulation and amplification of harmful content, resulting in too few takedowns.

We’ve seen the latter kind of concern raised in controversial but still bipartisan legislative efforts. April 2018 saw the passage of FOSTA-SESTA, which, with the support of an overwhelming bipartisan majority, amended CDA 230 to strip platforms of immunity for material violating sex trafficking laws. This past March, a bipartisan group of senators introduced the EARN IT Act, which proposes a second domain-specific exception to CDA 230 immunities, this time applying to child exploitation content. Under the act, platforms would lose CDA 230 immunity for user-generated content that runs afoul of child protection laws unless they can certify that they adhere to a set of preventative “best practices” to be developed by a government commission. Both measures drew significant criticism from industry and civil liberties communities, rooted largely in concerns that they might contribute to the erosion of CDA 230’s protections, and, with them, the prospects for free speech online.
