Meta, Snapchat and TikTok are finally banding together to do something about the harmful effects of some of the content hosted on their platforms – and it’s about time.  

In partnership with the Mental Health Coalition, the three companies are participating in a program called Thrive, which is designed to flag and securely share signals about harmful content – specifically content relating to suicide and self-harm.

A Meta blog post reads: “Like many other types of potentially problematic content, suicide and self-harm content is not limited to any one platform… That’s why we’ve worked with the Mental Health Coalition to establish Thrive, the first signal-sharing program to share signals about violating suicide and self-harm content. 

“Through Thrive, participating tech companies will be able to share signals about violating suicide or self-harm content so that other companies can investigate and take action if the same or similar content is being shared on their platforms. Meta is providing the technical infrastructure that underpins Thrive… which enables signals to be shared securely.”

When a participating company like Meta discovers harmful content on its app, it shares hashes – anonymized digital fingerprints of the offending images or videos – with the other tech companies, so they can check their own databases for the same content, which tends to spread across platforms.
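Meta hasn’t published Thrive’s internals, but the basic mechanics of hash-sharing are easy to sketch. Below is a minimal, hypothetical illustration in Python, assuming a plain cryptographic hash (SHA-256): one platform submits the fingerprint of a flagged file, and another checks its own uploads against the shared set. The SharedSignalBank class and the sample bytes are invented for illustration only.

```python
import hashlib


class SharedSignalBank:
    """Hypothetical shared store of content hashes flagged by
    participating platforms. Not Meta's actual Thrive API."""

    def __init__(self):
        self._hashes = set()

    def submit(self, content: bytes) -> str:
        # Platform A flags a file and shares only its fingerprint,
        # never the content itself.
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def matches(self, content: bytes) -> bool:
        # Platform B checks an upload against the shared signals.
        return hashlib.sha256(content).hexdigest() in self._hashes


bank = SharedSignalBank()
bank.submit(b"...bytes of a flagged video...")

print(bank.matches(b"...bytes of a flagged video..."))  # True: known content
print(bank.matches(b"...an unrelated upload..."))       # False
```

Note that an exact hash like SHA-256 only catches byte-identical copies; a production system would more likely use a perceptual hash (Meta has open-sourced PDQ for industry hash-sharing of this kind), so that re-encoded or lightly edited versions of the same image still match.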

Analysis: A good start


As long as there are platforms that rely on user-uploaded content, there will be users who violate the rules and spread harmful material online – grifters selling bogus courses, inappropriate content on channels aimed at kids, and posts relating to suicide or self-harm. Accounts that post this kind of content are generally adept at skirting the rules and flying under the radar to reach their target audience, and the content is often taken down too late.

It’s good to see social media platforms – which use sophisticated recommendation algorithms and casino-like design to keep users hooked and automatically serve up content they’ll engage with – actually taking some responsibility and working together. This sort of ethical cooperation between the most popular social media apps is sorely needed, but it should be only the first step.

The problem with user-generated content is that it needs constant policing. Artificial intelligence can certainly help to flag harmful content automatically, but some will still slip through: much of this material is nuanced, relying on subtext that a human moderator somewhere in the chain will need to review and flag. I’ll certainly be keeping an eye on Meta, TikTok and the other participating companies as their policies on harmful content evolve.
