Bottom line: Automated technology is becoming an increasingly effective tool for flagging potentially harmful content for review by one of Google’s more than 10,000 human content reviewers, often before it can generate a substantial number of views. YouTube said the nearly 30,000 videos removed for hate speech over the last month generated just three percent of the views that knitting videos did over the same period.
YouTube in June updated its hate speech policy, a fundamental shift that was many months in the making. The effort to create and enforce the updated policy is already having a profound impact, as evidenced by the numbers in YouTube’s latest community guidelines enforcement report.
YouTube said it removed more than 100,000 videos and over 17,000 channels for violating its hate speech policies in Q2. That’s five times as many videos and channels removed compared to the first quarter.
The video sharing giant also deleted more than 500 million harmful comments, nearly double the number removed in Q1.
YouTube conceded that the spikes in removal numbers are due in part to the deletion of older videos, channels and comments that were previously permitted. This means we probably won’t get an accurate picture of how effective the new policies are until all of the legacy content is purged from the system. It’s unclear how long that could take, or whether the review of older content has already wrapped up.
If the latter is the case, we should have a much clearer view come next quarter.
A progress report on the effectiveness of YouTube’s harassment policy update is expected in the next few months, we’re told.