
Elections 2019: Facebook Explains Why It Can Fight Porn, Sex and Spam, But Not Hate Speech

The tech giant's artificial intelligence can now figure out porn and spam, but only humans know what hate speech is

BENGALURU, Karnataka—Facebook took down 583 million fake accounts, 837 million pieces of spam, and 21 million pieces of content featuring nudity, pornography and sex in just the first three months of 2018. In the same period, it removed only 2.5 million pieces of hate speech.

That's not to say there is much less hate speech than everything else on Facebook, though. It's just that the social network is much better at detecting fake accounts and spam.

According to the company's data, artificial intelligence identified 99% of the fake accounts and 100% of the spam. The AI also spotted 96% of the adult nudity without human intervention. But only 38% of hate speech, a little over a third, was identified by the AI; the rest had to be taken down through direct human intervention.

Humans are slow, though, and can't scale up the way an AI can. This is a major problem, in India and around the world. Facebook recognises the problem and is working to remedy it, but that's going to be challenging.

"The AI will get better on hate speech," said Varun Reddy, a public policy manager for Facebook based in New Delhi. "The reason is that hate speech is often much more contextual, and you need to have a person reading to make the decision."

99% of the fake accounts were identified through artificial intelligence, and 100% of spam. AI spotted 96% of porn. But only 38% of hate speech was identified by the AI.

As India gears up for elections, the polarised nature of content on social media is a big concern. In countries like Myanmar and Sri Lanka, Facebook has been used to incite violence and foment unrest, and there are fears the same could happen in India. Although the focus in India tends to be on the Facebook-owned WhatsApp, which has been used to spread rumours and misinformation and has been linked to child porn and lynchings, Facebook itself has nearly 300 million users here.

Online hate, real violence

Spam, fake accounts, and porn are fairly easy to recognise, although Facebook still stumbles on all of these categories. Hate speech, by contrast, is harder to identify and can have a direct real-world impact, and Facebook said AI has its limitations here.

This is a concern. A study by Karsten Muller and Carlo Schwarz of the University of Warwick has linked high Facebook use to violence against refugees.

In March, a United Nations investigator said Facebook was used to incite violence and hatred against the Rohingya in Myanmar. The problem, according to a report, is that the company continues to rely heavily on users reporting hate speech, in part because its systems struggle to interpret Burmese text.

A University of Warwick study has linked high Facebook use and violence against refugees.

The company wants to avoid similar problems in India, and Reddy said that it reviews content in 18 Indian languages, using local reviewers who bring in cultural context. Reddy added that part of the problem with issues like hate speech and fake news is that people often report content in these categories simply because they disagree with it, not because it actually belongs in them.

"There's also the problem of biased content," added Reddy. "For example if there's a page which is carrying very biased content, say it's a publisher and it has a particular view... Now, people might report it, but it might not actually be hate speech. And people can torture statistics, so that it's not even fake news. So in this instance, we can't do anything."

Since take-downs can also be contentious, the social network needs to be careful when removing content. "We tend to err on the side of allowing more content," added Sheen Handoo, a public policy manager at Facebook based in Singapore. "We want to prevent organised misuse of the community reporting tools."

A lack of manpower

Handoo acknowledged that with the upcoming elections in India, there is a lot of potential for hate speech, which will require contextual decisions. The same is true for fake news. "We have a remove and reduce policy for misinformation, after Sri Lanka and Myanmar," Reddy said. "We work with third-party fact checkers, and if an item is flagged to them, they can check it and determine whether this is real or not. They can then flag it to Facebook, and we will reduce the distribution of the post, and show alternative sources on the same topic as well."

This process of fact-checking takes time, though, and in the meantime the fake news is still reaching people. That's a big challenge, because there aren't enough fact-checkers available.

In India, Facebook partnered with the Indian fact-checking website Boom Live in a trial to fight fake news. Reddy said the experiment went well and will be expanded nationally, although he couldn't give more details on how it will be scaled up for the upcoming elections in various states and the Lok Sabha elections in 2019.

However, Boom Live identified only 30 pieces of fake news during the Karnataka campaign. According to Jency Jacob, Managing Editor of Boom, it sometimes takes two to three people working all day to fact-check a single video, and Boom has only six fact-checkers in all, including the two Facebook-funded hires. Given these constraints, they could act on only a fraction of the tip-offs.

Brigading isn't a problem

In terms of organised misuse though, Reddy wanted to dispel one of the myths that has come up around community reports. "The number of people who report a post does not matter," Reddy said. "It's marked as a single incident, and reviewed. If a thousand people report the page at once, it won't increase the chance of a take-down, and if you organise to send one report a day from a different person, that doesn't matter either."

Given the opaque nature of its operations, there is a lot of confusion about what signals Facebook actually takes note of, and how it decides what content to take down and what to leave up. That is part of why Facebook launched its community standards page earlier this year. It has also opened up an appeal process and offered users more communication about takedowns, Reddy said, in a bid to make things more transparent.

Reddy added that the company tries to differentiate between people venting or expressing themselves and actionable speech. So, for example, someone saying "I hate Christianity" is acceptable, while "I hate Christians" is marked as hate speech, Handoo explained, "because it targets a group of people".

It's a somewhat convoluted example, but it also highlights the thorny question of forming rules around community standards, and their enforcement. And although Facebook has dramatically increased the amount of detail it puts into the rules, not everything is revealed even now.

Reddy said that 95% of the policy has been shared publicly, but some additional rules are not shown. "We don't fully reveal the policy so as not to reveal the full measures to bad actors, out to game the system," Handoo clarified.

Yet, Facebook's success in targeting pornography offers hope.

It wasn't so long ago that humans struggled to define precisely what pornography was. In 1964, United States Supreme Court Justice Potter Stewart was asked to rule on whether the State of Ohio could ban Louis Malle's film The Lovers (Les Amants) on grounds of obscenity.

As Stewart famously wrote,

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description, and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.
