What just happened? Twitter today announced an upcoming expansion to its "hateful conduct" policy, which would extend its existing hate speech rules to ban "dehumanizing" speech. This would prevent users from comparing "identifiable groups" of people to animals, viruses, or other less-than-human forms of life.
Social media platforms have faced a variety of issues lately, but one of the biggest problems companies like Facebook and Twitter contend with is content moderation.
On the one hand, these tech giants have a vested interest in promoting discussion between users. After all, that's arguably what the platforms are designed for. On the other hand, they feel the need to protect their users from language and actions they deem offensive, including "hateful conduct" like race-, sex-, or disability-based verbal attacks.
While lively debates surrounding perceived censorship are rampant on Twitter, many users agree that the company's current policy is fairly neutral. It may be enforced too little or too much depending on who you ask, but the wording of the policy itself does not seem terribly problematic.
However, Twitter's latest moderation rules will likely prove much more controversial. According to a blog post published today, Twitter has been developing a set of policies designed to address "dehumanizing language" on its platform.
This new policy would expand Twitter's existing hateful conduct rules to cover language that "dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target."
This is a remarkably vague rule proposal, and it's tough even to guess at what it might mean. Fortunately, Twitter elaborates on some of the terms it uses.
For starters, it considers "dehumanization" to be any language that treats others as, well, less than human.
For example, if a Twitter user applies less-than-human attributes to a group of people -- such as comparing the group to a "virus" or an animal -- that would be considered dehumanization.
To reference an internet cliché, calling a group of people "cancer" might be grounds for a ban, a warning, or some other form of disciplinary action under this future policy.
So, that still leaves the term "identifiable group," which Twitter defines as follows:
Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.
Whether or not this policy will backfire remains to be seen, but regardless of where you stand on the matter, now is the time to make your voice heard. Twitter has opened up a feedback survey to its users ahead of this policy's launch in a couple of weeks.