A hot potato: Facebook's latest approach to curbing the spread of fake news doesn't target misinformation specifically but rather the people who share it.

In 2015, Facebook rolled out the ability for people to report posts they believe to be false. The social network soon realized, however, that many people were reporting posts as unreliable simply because they didn't agree with the content or were intentionally trying to harm a specific publisher. This led Facebook to develop methods to assess whether people flagging posts were themselves trustworthy.

Tessa Lyons, Facebook's product manager in charge of combating fake news, told The Washington Post that the social network has been developing a previously unreported user reputation score over the past year.

The system assigns users a "trustworthiness" score on a scale of zero to one. Lyons wasn't forthcoming with details on how the scoring system works, whether all users have a score, or how the scores are specifically used. Her reticence stems from not wanting to tip off bad actors to how the process works, since doing so could make the system easier to game.
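
To make the idea concrete, here is a minimal, purely hypothetical sketch of how a zero-to-one trust score could be used to weight user reports so that flags from historically reliable reporters count for more. The function, data, and weighting scheme are assumptions for illustration only and do not reflect Facebook's actual implementation.

```python
# Illustrative sketch (not Facebook's system): weight each user's flag on a
# post by that user's hypothetical trust score in [0, 1].

def weighted_flag_score(reports):
    """reports: list of (user_trust, flagged) tuples, with trust in [0, 1].

    Returns the trust-weighted share of reporters who flagged the post.
    """
    flagged_weight = sum(trust for trust, flagged in reports if flagged)
    total_weight = sum(trust for trust, _ in reports)
    return flagged_weight / total_weight if total_weight else 0.0

# Example: two low-trust users flag a post, one high-trust user does not.
reports = [(0.1, True), (0.2, True), (0.9, False)]
print(weighted_flag_score(reports))  # 0.25 -> flags carry little weight
```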

Lyons did say, however, that the score isn't meant to be an absolute indicator of a person's credibility, nor is there a single unified reputation score assigned to every user. As the Post highlights, the score is just one of thousands of new behavioral measurements that Facebook now monitors to assess risk.