Facebook last month published the internal guidelines it uses to determine what is and isn’t permitted on the social network. On Tuesday, the Menlo Park-based company shared numbers from its first Community Standards Enforcement Report, offering a look at how effectively those guidelines have been enforced in recent months.

The report highlights six key areas: fake accounts, spam, adult nudity and sexual activity, graphic violence, terrorist propaganda and hate speech.

Guy Rosen, VP of Product Management, said most of the action Facebook takes to remove bad content revolves around fake accounts and spam. In the first quarter alone, Facebook disabled roughly 583 million fake accounts, most of which were disabled within minutes of registration.

This, Rosen says, is in addition to the millions of fake accounts blocked at registration every day. Even so, Facebook estimates that as many as four percent of active accounts during the quarter were fake.

Facebook also took down 837 million pieces of spam in Q1, almost all of which were identified and flagged before anyone reported them.

Elsewhere, Facebook removed 21 million pieces of content classified as adult nudity or sexual activity and took down or applied warning labels to nearly 3.5 million pieces of violent content during the quarter. Furthermore, 2.5 million pieces of hate speech were removed, though Rosen concedes that Facebook’s technology still has work to do in this category: only 38 percent was flagged automatically.