About 5% of Facebook’s monthly active users are fake.
The figure may not sound like much, but given Facebook’s 2.4 billion monthly active users, that’s an astounding 120 million active users on the platform who are not who ― or even what ― they claim to be.
The social media platform included the figure in its Community Standards Enforcement report, the third edition of which it published Thursday.
That’s roughly 1.6% of the entire population of Earth, assuming a world population of 7.5 billion people ― and a few million people shy of the entire population of Mexico.
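The arithmetic behind those comparisons is easy to verify. A minimal sketch, using only the figures stated in the article (2.4 billion monthly active users, 5% fake, 7.5 billion people on Earth):

```python
# Back-of-the-envelope check of the article's figures.
# All inputs below come from the article itself; none are official Facebook data.
monthly_active_users = 2.4e9   # Facebook monthly active users
fake_share = 0.05              # ~5% estimated to be fake
world_population = 7.5e9       # assumed world population

fake_accounts = monthly_active_users * fake_share
print(f"Fake accounts: {fake_accounts:,.0f}")                      # 120,000,000
print(f"Share of Earth: {fake_accounts / world_population:.1%}")   # 1.6%
```

Both headline numbers check out: 5% of 2.4 billion is 120 million, which is about 1.6% of 7.5 billion.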
The figure doesn’t include fake accounts that Facebook flagged and removed after they were registered but before they could become “active” on the network. From January through March of this year, the company disabled 2.2 billion such accounts, a sharp uptick from 1.2 billion accounts in the three months prior.
In a call with reporters Thursday to discuss the report, Facebook vice president of integrity Guy Rosen said that a large share of the fake accounts come from spammers using automated methods to try to create millions of accounts at a time.
That makes sense, given the immense amount of spam floating around the site. Per Facebook’s report, it removed 1.76 billion spam posts in the first quarter of 2019 alone.
While Facebook has become increasingly adept at identifying and removing fake accounts, Rosen acknowledged that “some do slip through.”
The company declined to say where the attacks primarily originate or how many might be politically ― rather than commercially ― motivated.
Some of the fraudulent accounts can be easily waved off as harmless, like a user who sets up an account for their pet. But others, like those created by Russia or even shady Israeli companies intent on disrupting elections around the world, are undoubtedly capable of doing great damage.
Other Interesting Tidbits From The Report:
- Facebook removed 4 million instances of hate speech from the site in the first quarter of 2019, up from 3.3 million in the last quarter of 2018. Facebook’s automated filters caught 65.4 percent of that ― the remaining 34.6 percent was flagged by users.
- In the same quarter, Facebook took action on 6.4 million posts it identified as “terrorist propaganda.” Facebook’s AI captured 99.3 percent of that without any human involvement.
- Facebook also removed 900,000 pieces of content concerning drug sales in the quarter, along with an additional 670,000 posts concerning firearms sales.