Why it matters: Like Facebook earlier this year, YouTube is looking to outright ban channels that host "supremacist and hateful content". The company also wants to clamp down on misleading videos and promote those that are more "authoritative". While YouTube will have to determine what constitutes hate speech, it's under extreme pressure from governments and businesses to moderate its video content given recent controversies.

YouTube is swinging the ban hammer hard today as the company looks to delete thousands of videos and channels in an effort to stem hateful content. The company outlined its reasoning in a blog post along with a three-pronged approach.

The first change will be the removal of videos that it deems "hateful and supremacist". YouTube has spent at least two years trying to remove videos that encourage violent extremism but wants to extend that to hateful content. The post specifically points out that it is "prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status".

The second change is an attempt to reduce "borderline content". This was actually reported earlier this year as the company tries to stop the spread of misinformation about widely reported events like the Holocaust, the September 11, 2001 terrorist attacks, and the Sandy Hook shooting. Videos that claim the Earth is flat or that purport "phony miracle cures" are also on the chopping block. YouTube claims that views of such misleading videos coming from recommendations have already dropped by 50%.

Finally, YouTube will suspend channels from the YouTube Partner Program that repeatedly "brush up" against hate speech policies. This prevents any monetization from ads or Super Chat, which allows subscribers to pay creators directly in exchange for more chat features.

There are business and political reasons for outright banning channels. Social media companies are facing immense pressure from governments to moderate hate speech. Facebook itself took a similar action by attempting to ban white nationalism after the New Zealand shooting. YouTube also wants to remain advertiser-friendly and prevent hateful channels from ruining its ad revenue.

As I've written before, while the intention of these policies is probably noble, YouTube and other social media giants still face the question of what constitutes hate speech. Can the algorithm tell the difference between genuine hate speech and someone simply discussing hate speech? To be fair, YouTube does state in the blog post that "context matters" and that videos that merely discuss hate speech topics or condemn hate would stay up.

Even then, YouTube doesn't always enforce its own anti-hate policies. There is an ongoing controversy surrounding Steven Crowder making racist and homophobic videos targeting Vox host Carlos Maza. Maza took to Twitter to voice his displeasure with YouTube's inaction against Crowder. YouTube finally tweeted back to Maza saying that "while we found language that was clearly hurtful, the videos as posted don't violate our policies". However, it walked back that decision by suspending monetization on Crowder's channel.

The move to further restrict hateful content will likely also rile up conservatives who continue to believe that big tech companies unfairly discriminate against them. Senate Republicans held hearings in April that once again brought up the charge that tech companies are biased against conservative thought. While the evidence of political bias is probably anecdotal at best, most Americans believe social media censors political thought anyway.

YouTube says that it will begin enforcing the new policies today, with a full ramp-up continuing gradually over the next few months.