Facebook removed 583 million fake accounts this year

By Arvind Hickman | 25 May 2018
 
Facebook has released a new report that identifies the prevalence of activity that doesn't meet its community standards.

Facebook removed 583 million fake accounts in the first quarter of 2018 as the platform gets tougher on inauthentic users.

To put that into context, Facebook has about 2.1 billion monthly active users, so the accounts removed are equivalent to more than a quarter of that figure.

In its community standards enforcement report, the social media network estimates that fake accounts still make up around 4% of monthly active users, or about 84 million accounts.

The report highlights Facebook's effort to police the platform on hate speech, graphic violence, terrorist propaganda, adult nudity and sexual activity, and spam.

The amount of graphic violence content that Facebook took action on nearly tripled, from 1.2 million posts in Q4 2017 to 3.4 million in Q1 2018.

The number of hate speech posts removed also increased, from 1.6 million to 2.5 million.

Terrorist propaganda posts that were removed increased from 1.1 million to 1.8 million, while the number of spam posts removed went up from 727 million in Q4 2017 to 836 million in the first quarter of this year.

Adult nudity and sexual activity posts that were removed stayed relatively consistent at around 21 million in Q1 of 2018.

Although these figures appear large, Facebook points out they are only a tiny fraction of the vast volume of posts published on the platform.

For example, Facebook estimates that the prevalence of adult nudity and sexual activity is only between 0.07% and 0.09% of the overall number of posts.

What is surprising, however, is the sheer number of fake accounts that Facebook needs to remove each quarter. In Q4 2017 it removed 693 million fake accounts, and while this dropped to 583 million the following quarter, it is still a very large number.

This illustrates the challenges Facebook faces in trying to keep the platform an authentic and enjoyable experience for users and advertisers.

“The vast majority of content shared on Facebook is positive and does not violate our community standards,” Facebook director of policy in Australia and New Zealand Mia Garlick tells AdNews.

“But we are increasingly looking to use technology to help us to get to content and find it.”

Facebook uses a combination of machine learning and human reviewers to identify dodgy content and remove it. Users can also flag undesirable content for Facebook to look at.

Facebook plans to grow its content review team to 20,000 people this year. The team is spread across the world and speaks more than 40 languages.

Growing its language capability is important. The platform has been used by Buddhist nationalists in Myanmar to spread hate speech and incite violence against the persecuted Rohingya people, according to several reports. It has also been used for similar purposes in Sri Lanka against the Tamil community.

Garlick admitted that nuance in language and slang is difficult to decipher and police.

"Patterns of speech can depend a lot on tone and context," Garlick said. "I might tell a friend there is a shoe sale on that I'm going to die for, but we still need human reviews to get that nuance.

"We think that the percentage increase from the end of 2017 to the beginning of 2018 is higher than what we would like to see, but it also reflects improvements in our ability to detect it. Hopefully over time we will get better and better at identifying and removing it before people see it."

Garlick said the "vast majority" of undesirable content is picked up by Facebook's machine learning technology that identifies patterns and imagery as soon as it is posted.

"Improvements in computer vision has allowed us to detect and remove things like adult nudity before it gets reported on by users," she added.

"In the case of fake accounts, because we have 14 years of experience identifying what fake accounts look like, there's a large amount that we are automatically stopping from being created. There's another series of technology running across the site after new accounts are created that might checkpoint that account because it is showing behaviour that is inauthentic."

Cracking down on content such as graphic violence is tricky. Depicting violence is sometimes important to public debate, and Facebook has been used to document it for positive causes, such as videos of police brutality against African Americans shared by the Black Lives Matter movement.

The challenge is striking the right balance between the two. It's a challenge that all media companies and newsrooms deal with on a day-to-day basis, and one that Facebook appears to be taking far more seriously.

