Ever since CEO Mark Zuckerberg was forced to deny that fake news stories on Facebook influenced the US election, the social network has implemented several features to tackle the problem. Earlier this year, it introduced a function that lets users flag bogus items to stop them from spreading, but a new report says this simply isn’t working.
If enough people report a Facebook article as fake, it may be reviewed by independent fact-checkers such as Snopes and PolitiFact. If it is then determined to be factually incorrect, a “disputed by third-party fact checkers” label will appear at the bottom of the post.
Facebook hoped that users would be less likely to believe and share ‘disputed’ articles, but a study from Yale claims this is rarely the case. It says that the label has “only a very modest impact on people’s perceptions.”
The study, which will be submitted for peer review, asked 7,534 people to judge the accuracy of 24 headlines: 12 true and 12 false. It found that the disputed tag made participants just 3.7 percent more likely to correctly identify a flagged article as false.
“The main potential benefit of the tag is that it (slightly) increased belief in real news headlines,” the researchers said. “This seems insufficient, however, to stem the tide of false and misleading information circulating on social media.”
The researchers also found that the flagging feature can cause more people to believe fake news stories. With so many such posts appearing on Facebook, it is impossible for the fact-checkers to address them all. This creates a “backfire effect,” in which users assume that any untagged fake news story is likely to be real. The researchers say this effect is especially pronounced among Trump supporters and adults under 26.
Facebook said the flagging system was just one element of its battle against fake news. Regarding the Yale report, a spokesperson said it was an “opt-in study of people being paid to respond to survey questions. It is not real data from people using Facebook.”
Facebook recently announced further measures to keep fake news off the platform, including preventing ads from appearing on offensive and false content.