Meta announced in a blog post update yesterday that it will apply an obscure Instagram setting to Threads that lets users control how much fact-checked content they see in their feed. Meta says its fact-checking is intended to address misinformation, so effectively, users will be able to decide how much controversial content they want to see on the site.
The controls have three levels: “Don’t reduce,” “Reduce,” and “Reduce more.” While none of the choices can hide content entirely, they will affect the ranking of posts that are “found to contain false or partly false information, altered content, or missing context.”
To get to the setting from Threads, users will need to tap the two lines in the upper-right corner from the profile tab, then Account > Other account settings (which takes you to Instagram) > Content preferences > Reduced by fact-checking.
The concept, on its face, feels really compelling. It could essentially be a “drama” filter, and who hasn’t wanted that in some facet of their life? Meta said in a statement to NBC News that the options are intended to give users “more power to control the algorithm that ranks posts in their feed,” adding that it’s responding to users’ demands for “a greater ability to decide what they see on our apps.”
NBC News pointed to a post with thousands of likes saying the change is intended to censor content related to the Israel-Hamas War. Whether that’s true or not, there’s clearly plenty of room for censorship with a tool that invites users to be complicit.
Meta uses third-party fact-checkers to rate content on Instagram and Facebook as factual or not, and what they determine now applies indirectly to Threads content. The company says that although fact-checkers can’t directly rate Threads content, Meta will transfer ratings from Instagram and Facebook to “near-identical content on Threads.”
Meta says Instagram has had the fact-check ranking options for years, but doesn’t seem to have ever properly announced the feature. According to The Economic Times, Meta added it to Facebook in May, with a Meta spokesperson saying the change was intended “to make user controls on Facebook more consistent with the ones that already exist on Instagram.”
Moderation hasn’t scaled gracefully as online communication has expanded beyond the tiny pockets of the early web’s forums. No massive social network has found a silver bullet for the problem, and in some cases, its efforts have only stirred up anger and suspicion about its motives or raised questions about the involvement of the federal government.
But Meta has to moderate its platform, and not only because of laws requiring it in the European Union or the US’ own continuing regulatory efforts. Advertisers are a big part of the equation, and the company has a perfect example of how giving up on moderation affects a platform in X (formerly Twitter), where revenue has reportedly tanked as increasingly charged, unmoderated rhetoric has driven an ongoing hemorrhage of advertisers.