Twitter will now use behavioral signals on accounts to filter public content

Shawn Knight


Twitter’s inability to contain the trolls that populate its platform has plagued the social network for years. It may seem like a never-ending battle (and it probably is), but it is one Twitter simply can’t afford to abandon.

The site has enacted numerous measures to combat abusive behavior and hateful content, and they seem to be having an impact. According to VP of Trust and Safety Del Harvey and Director of Product Management for Health David Gasca, fewer than one percent of accounts make up the majority of those reported for abuse.

The problem is that much of what gets reported doesn’t technically violate Twitter’s rules. That realization presents a unique challenge for the team: how can they proactively address troll-like behavior that distorts and detracts from the public conversation on Twitter but doesn’t break any policy?

At present, Twitter uses human review processes, policies and machine learning to determine how tweets are organized and presented in “communal” places like search and conversations. Moving forward, however, Twitter will also be injecting new behavioral signals into how public tweets are presented.

Harvey and Gasca said many of the new signals are not visible externally. Examples cited include whether a person has confirmed their email address, whether someone signs up for multiple accounts at the same time, behavior that could indicate a coordinated attack, and accounts that repeatedly tweet at or mention accounts that don’t follow them.

Twitter says it also looks at how accounts are connected to those that do violate rules and how they interact with each other.
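
To make the mechanics concrete, here is a minimal, purely hypothetical sketch of how signals like these could be folded into a visibility score used to rank tweets in communal spaces. The signal names, weights, and caps below are illustrative assumptions, not Twitter’s actual model.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical behavioral signals of the kind described above."""
    email_confirmed: bool           # has the account confirmed its email address?
    simultaneous_signups: int       # accounts created from the same source at once
    mentions_of_non_followers: int  # repeated tweets at accounts that don't follow back
    linked_violating_accounts: int  # connections to accounts that do violate the rules

def visibility_score(s: AccountSignals) -> float:
    """Toy scoring function: lower scores get ranked lower in search
    and conversations; nothing is removed, only demoted."""
    score = 1.0
    if not s.email_confirmed:
        score -= 0.2
    score -= 0.10 * min(s.simultaneous_signups, 3)       # cap each penalty
    score -= 0.05 * min(s.mentions_of_non_followers, 5)
    score -= 0.15 * min(s.linked_violating_accounts, 2)
    return max(score, 0.0)

# Example: an unconfirmed account that repeatedly mentions non-followers
print(visibility_score(AccountSignals(False, 0, 8, 1)))  # 0.40

The point of a score like this is that low-scoring tweets are demoted in how they’re presented in search and conversations rather than deleted, in line with the presentation-based approach described above.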

The good news is that the new approach is showing promise. In early testing, Twitter said it has seen a four percent drop in abuse reports from search and an eight percent decline in abuse reports from conversations.


 
Twitter said it has seen a four percent drop in abuse reports from search and an eight percent decline in abuse reports from conversations.
I have to wonder whether that is just noise at this point. The absolute report numbers may be large given how many users Twitter has, but a single-digit drop seems like a marginal change. If it were more like 20 percent, that would be a meaningful improvement - at least as I see it.
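
For what it’s worth, whether a four percent drop is noise depends on the report volumes, which the article doesn’t give. A quick back-of-the-envelope two-proportion z-test with made-up numbers (the counts below are pure assumptions for illustration) shows that at Twitter’s scale even a small relative drop can be statistically significant:

import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-proportion z-test: did the abuse-report rate change
    between a control group and the new-signals group?"""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Entirely hypothetical counts: 1,000,000 search sessions per arm,
# a 5% baseline report rate, and a 4% relative drop in the test arm.
z, p = two_proportion_ztest(50_000, 1_000_000, 48_000, 1_000_000)
print(f"z = {z:.1f}, p = {p:.2g}")  # z is about 6.6, so p is tiny

The catch is that with samples that large almost any change comes out "significant"; the 20 percent threshold is really a judgment about practical importance, not statistical significance.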
 