Over the past year, YouTube and social media platforms such as Facebook and Twitter have been grappling with fake news and misinformation. While most social networking sites have relied on flagging such content, YouTube is using artificial intelligence to block offensive content outright. 

The world’s largest video platform has drawn on its AI expertise during the ongoing pandemic to remove more than 11 million videos. According to YouTube’s latest Community Guidelines Enforcement Report, this is the most videos the platform has ever removed in a single quarter, and it happened in Q2 of 2020. 

How AI managed the show

Of the 11.4 million videos removed from the platform during the second quarter, AI moderators flagged 10.85 million. As a result, the company says enforcement did not suffer from employees working from home. The automated systems proved effective at removing content designated as harmful under company policy.

The United States topped the list of countries from which YouTube removed content, with 2.06 million videos, followed by India with 1.4 million. Brazil, Indonesia, and Russia rounded out the top five. 

Reasons for removal

Child safety, spam and misleading content, and violence were among the top reasons for the removal of these videos. The report also revealed that as many as 1.9 million channels were removed during this period, with 92% of those cases relating to spam or misleading content. 

While 42% of the video removals occurred before the video or channel had accrued any views, there were also instances where videos were removed despite not violating any of the company’s content policies. The report acknowledged that YouTube received some complaints over wrongful removals and has since reviewed those cases manually. 

This is also the first full quarter in which the company operated with entirely algorithm-based content moderation. YouTube conceded that accuracy for videos covering certain sensitive areas, such as violent extremism and child safety, was lower than expected.