Former President Donald Trump fiercely advocated for getting rid of Section 230 in his final months in office. Now that Democrats control the White House and Congress, a few of them have proposed their own reforms to the landmark internet speech law.

“Section 230” is a shorthand reference to one piece of the 1996 Communications Decency Act that has come to be a key protection for online platforms in the 21st century. The most noteworthy snippet of Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In other words, internet service providers like Comcast, as well as platforms like Twitter or Facebook (each a “provider…of an interactive computer service” in the law’s terms), aren’t legally responsible for posts made or shared by their users. While there’s been plenty of disagreement over how to handle Section 230, most officials (and people impacted by bad behavior online) agree that the 25-year-old law’s language is outdated and needs to change.

Senators Amy Klobuchar (D-MN), Mark Warner (D-VA), and Mazie Hirono (D-HI) introduced a bill called the SAFE TECH Act on Friday, which would alter Section 230 of the 1996 Communications Decency Act to give tech platforms like Facebook and Twitter more legal responsibility for the content posted by their users. It’s intended to curb things like “cyber-stalking, targeted harassment, and discrimination,” but as with anything involving Section 230, there are major free speech implications as well.

Broadly speaking, the SAFE TECH Act would open up more legal avenues to hold online platforms accountable for harmful content created by their users. Rather than making a single change, the proposed legislation is designed to address a broad range of legal issues.

One piece addresses paid speech: Facebook, or any other site that sells digital ad space, could be held liable for false or malicious ads someone paid to run on the site. Another gives victims of cyber-stalking or online harassment legal grounds to sue the platforms that enabled those abuses, which is not currently possible under Section 230.

The legislation would also make clear that Section 230 protections can’t get in the way of enforcing civil rights laws, which could lessen online discrimination. Victims of situations where online content contributed to “irreparable harm” could seek relief, as could the families of those who suffered wrongful deaths as a result of online activity. Even people who suffer human rights violations overseas could sue in U.S. courts if an American online platform is involved; the Rohingya genocide is cited as an example.

All of that may sound great on paper, but as internet policy advocates pointed out to us in January and to TechCrunch in response to the SAFE TECH Act’s introduction this week, any reform to such a key piece of legislation could have disastrous effects down the road. 

Holding platforms accountable for paid speech might slow down false ads, but it could also hurt those who make an honest living off Patreon or Substack. The same goes for liability in harassment or discrimination cases: it could be a boon to victims, but it could also make sites like Twitter significantly less open to good-faith expression, since their parent companies will want to avoid even the possibility of getting sued.

That’s not to mention smaller sites, whose content would also fall under a reformed Section 230 and which may not have the money or resources to defend themselves in court if it came to that. To be clear, the SAFE TECH Act is nowhere near guaranteed to become law anytime soon, especially given the glacial pace at which Congress does anything. But it’s worth keeping in mind that even reforms with better intentions than some of those proposed by Republicans could ultimately make the internet a worse place.