Twitch today introduced a new tool to the arsenal of channel operators: AutoMod. The moderation tool uses machine learning to detect abusive or harassing messages in a channel's chat and aims to intercept offensive posts before they hit the chat stream.
Streamers control how aggressively AutoMod operates by selecting one of four filtering levels. According to Twitch's blog post, AutoMod can filter content relating to identity, sexual language, aggressive speech, and profanity.
The tool is designed to make chat moderation easier for streamers. Posts flagged by AutoMod are held and sent to the channel's moderators, who can approve or reject each message with a single click. AutoMod also notifies the poster that their potentially offensive message will be checked by a moderator before it appears in chat.
AutoMod does not rely on simple banned-word lists to check chat posts; instead it uses machine learning to identify offensive content, including inappropriate words, phrases, and even strings of emotes, symbols, and other characters. Because it learns over time, its detection should continue to improve as chat posters find new ways to slip past the filter.
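Twitch has not published AutoMod's internals, but the limitation of a static word list is easy to illustrate. The sketch below (in Python, with a hypothetical banned word and substitution table of my own choosing, not Twitch's) shows how a plain list check is defeated by look-alike character substitutions, and how even a simple normalization pass catches them; a learned classifier generalizes much further than this.

```python
# Illustrative only: not Twitch's implementation.
# A hypothetical banned word for the example.
BANNED = {"jerk"}

# Map common look-alike substitutions back to letters.
SUBSTITUTIONS = str.maketrans({"3": "e", "1": "i", "0": "o", "@": "a", "$": "s"})

def naive_filter(message: str) -> bool:
    """Plain word-list check: easily evaded by spellings like 'j3rk'."""
    return any(word in BANNED for word in message.lower().split())

def normalized_filter(message: str) -> bool:
    """Normalize look-alike characters first, then check the list."""
    normalized = message.lower().translate(SUBSTITUTIONS)
    return any(word in BANNED for word in normalized.split())
```

Here `naive_filter("you j3rk")` returns `False`, while `normalized_filter("you j3rk")` returns `True`; a machine-learned model extends this idea to phrasings and character tricks no hand-written table anticipates.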
Streamers can enable AutoMod in English right now from their settings page, and beta versions are available for the following languages: Arabic, Czech, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, and Turkish.