Social media companies have always struggled to find the right balance between enabling free speech and moderating inappropriate content. But since Donald Trump entered the White House, this debate has taken on a complicated new layer: Should private companies censor the President of the United States, even when he violates the company’s posting guidelines?

After years of waffling, Twitter finally began taking a stand this year, placing warning labels on misinformation and hiding (but not removing) calls for violence. Today, Mark Zuckerberg announced Facebook would follow a similar set of rules, placing a label on content it would otherwise remove were it not from a politician or government official.

The change comes as part of a broader set of initiatives to fight misinformation during the election season.

Specifically, Zuckerberg says:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case. We’ll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what’s acceptable in our society — but we’ll add a prompt to tell people that the content they’re sharing may violate our policies.

Moreover, the Facebook CEO clarifies that “there is no newsworthiness exemption to content that incites violence or suppresses voting. Even if a politician or government official says it, if we determine that content may lead to violence or deprive people of their right to vote, we will take that content down.”

So maybe Facebook will actually do something about the president’s propensity for misinformation now.

Other changes announced today include:

  • Providing authoritative information on voting during the pandemic: Facebook is creating a Voting Information Center to help ensure people can locate factual information about how and when they can vote. A link to the Voting Information Center will show up at the top of Facebook and Instagram over the coming months.
  • Fighting voter suppression: Facebook will label posts that discuss voting with a link to its Voting Information Center. Within the 72 hours leading into election day, the company will use its “Elections Operations Center to quickly respond and remove false claims about polling conditions.” It’ll also ban posts that intimidate voters from showing up at polling places.
  • Regulating hateful content in ads: Facebook says it’s tightening up rules around divisive content in ads, specifically expanding its policy to “prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others. We’re also expanding our policies to better protect immigrants, migrants, refugees, and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal, or disgust directed at them.”

Facebook has also recently come under fire for its handling of ads showing up alongside hateful content, prompting many advertisers to boycott the network. Zuckerberg did not go into detail on how he plans to address these specific concerns.
