“Our hate speech policies make clear that we prohibit content promoting violence or hatred against individuals or groups based on attributes like gender identity and expression,” Javier Hernandez, YouTube spokesperson, told WIRED. “We’re currently reviewing content provided by WIRED, and remain committed to removing any material that violates our Community Guidelines.”

Barrett also says that some of YouTube’s issues likely reflect a lack of company investment in non-Western countries and non-English languages.

“One can understand their desire to keep costs to a minimum,” he says. But choosing to cut costs via an outsourcer means that “moderation, in all likelihood, is going to be inadequate because the hiring, training, and supervision of the people doing the job is being pushed onto the shoulders of a vendor whose sole purpose is to keep costs low.”

Koo co-founder Mayank Bidawatka told WIRED that Koo uses a combination of freelance and staff moderators to police the platform in English, Hindi, and Portuguese.

And while Waghre acknowledges that the platforms operate in a very complicated information environment within India, “the responsibility is still on them to take action, especially if it’s defined by their own policies,” he says. “Especially when it comes to things around hateful conduct and hate speech, especially in a gender context.”

Koo has a much smaller reach—only about 3.2 million users—but has been a favorite of the BJP and its supporters. Posts flagged by Global Witness and the IFF on Koo promoted the Islamophobic “love jihad” conspiracy theory, which holds that Muslim men are trying to marry, seduce, or kidnap Hindu women in order to force a demographic change in Hindu-majority India, similar to the Great Replacement conspiracy theory in the US. Koo’s terms of service prohibit “hateful or discriminatory speech,” including “comments which encourage violence; are racially or ethnically objectionable; attempts to disparage anyone based on their nationality; sex/gender; sexual orientation; religious affiliation; political affiliation; any disability; or any disease they might be suffering from.”

In response to the report, Koo told Global Witness in an email that it screens content algorithmically, and then manually for sensitive topics. “Koos that do not target individuals and do not contain explicit profanity are typically not deleted,” but may be deprioritized, the email says.

“In line with our guidelines, action was taken against most of the Koos flagged by Global Witness,” Bidawatka told WIRED in response to a request for comment. “Out of the 23 Koos, we have deleted 10 which violated our guidelines and taken action against the remaining. Actions taken ranged from reduced visibility of the post [or] deletion, to account level actions such as blacklisting for an account exhibiting repeated problematic behavior.”

With major elections around the corner, it’s likely that platforms’ systems will be placed under even more strain.

“If you’re going to run a global platform in a place like India with millions of people participating, you need to put up the guardrails to make that safe,” says Barrett. “And to make sure that your business is not doing things like disrupting the election process in the country where you’re operating.”

This article has been updated to reflect comments from Koo.
