In brief: Facebook's latest effort to fight misinformation will involve checking the identities of Page managers as well as individual accounts with a high reach to see if they're part of a shady campaign. Whether that will restore some sanity to the platform, only time will tell, but with the 2020 United States presidential election fast approaching, it should at least reduce the chances of false claims spreading like wildfire.

This week President Trump signed an executive order meant to limit the legal protections that have prevented social media platforms from being held liable for user-generated content. The move came right after Twitter started labeling some of his tweets with a fact-check warning – an action he described as interference in the upcoming elections.

Facebook CEO Mark Zuckerberg told CNBC that social networks – and private companies in general – should not be the "arbiters of truth," noting that "political speech is one of the most sensitive parts in a democracy, and people should be able to see what politicians say."

Zuckerberg explained that while Facebook does have a fact-checking program in place, its purpose is to "really catch the worst of the worst stuff." This is the same line of thinking that led to last year's decision to allow "newsworthy posts" from politicians that break the rules.

The company is more focused on making sure that content posted on Facebook comes from real people, so it will now verify the identities of high-profile Facebook Page owners in the US. Specifically, it will prioritize extensive ID verification for accounts with a large audience that exhibit a "pattern of inauthentic behavior" and whose content goes viral on a regular basis.

The ID verification process does require the consent of the user behind the account. Failing that, Facebook will restrict the visibility of the content they post and reduce the chances of it going viral.