This article exists as part of the online archive for HuffPost India, which closed in 2020. Some features are no longer enabled. If you have questions or concerns about this article, please contact

Can Facebook Fix Fake News In Time For Indian Elections? Don’t Hold Your Breath

Facebook says it has a fake news plan in place. But when it comes to answering specific questions, it falters.
Leah Millis / Reuters

NEW DELHI — The white wall at the entrance to Facebook’s New Delhi office sums up the technology giant’s frenetic, but ultimately failing, efforts to somehow undo the tribalised political hate it has spawned in countries as diverse as the United States, Brazil, Myanmar and India.

Nestled amongst the anodyne handwritten shout-outs by passing visitors (Sample: APAC Accounting is here!) are more sinister messages scrawled in all-caps with shouty exclamation points — “HUM HINDU HAI” (We are Hindus), “Mandir Yahi Banega!!!” (The temple will be built right here).

On the bottom left is a short platitude by the Bharatiya Janata Party’s (BJP’s) former education minister-turned-textiles minister Smriti Irani. “Technology is a great leveller. Facebook helps bridge divide of language, gender, demographies. Grow strong with each passing year.” On the bottom right is a slogan adopted by many of the current regime’s critics, “Love beats Hate!”

Much like in the digital realm, banality far outnumbered politics and extremism, but it was the latter that caught the eye.

In a conference room across from the wall, this Monday, Samidh Chakrabarti, Facebook’s global director of product management for civic engagement, and India vice president Ajit Mohan met with journalists to list the many things Facebook was doing to ensure its vast, built-for-virality network was not used to subvert India’s pivotal and polarising general election.

The stakes are high, and the numbers staggering: 900 million Indians are eligible to vote in the polls starting 11 April, and over 200 million Indians use Facebook and WhatsApp. The sheer number of online users has transformed political campaigning in India, with political parties and their affiliates of all stripes generating vast amounts of fake news.

“There has been a tremendous amount of effort over the last two years to get ready for the Indian elections,” Mohan said. “This conversation about India has been in the works for a long time.”

“The Indian election in particular is one that we are really committed to as a top priority,” Chakrabarti echoed. “I’ve staffed a particular product team dedicated to the Indian election for over a year. This is extremely unusual for Facebook.”

Over the next 40 minutes, Chakrabarti laid out a series of steps intended to arrest the spread of fake news, from its point of origin to the moment a given bit of fake news attains virality. Yet when it came down to the specifics of the quantum of resources Facebook was throwing at the problem, Chakrabarti and Mohan steered clear.

There’s also some scepticism about Facebook’s commitment to the issue. “When a non-local entity like Facebook decides it will fight fake news in India, the first question is whether this is an organic process initiated as part of company policy or is it externally motivated,” said Raghu Godavar, a social activist who has played a prominent role in the Rethink Aadhaar collective.

“I believe there is a concern that Facebook is acting under political pressure. Given that the current dispensation has been trying to woo voters through every medium possible, the monitoring of social media immediately becomes suspect as another opportunity.”

“The second aspect is the outsourcing of such fact-checking to external, potentially local entities. If this is a public-spirited effort, there needs to be transparency about the process of selecting fact checkers,” he added. “Potentially, Facebook could make this a data-gathering exercise: what sources do people trust?”

“I believe Alt News had pointed out how some of these fact checkers were themselves fooled into carrying fake news. Such an incident only makes transparency and public trust more vital.”

Can Facebook AI detect fake news?

Facebook’s war on fake news begins at the source, where the company says it blocks millions of fake accounts each day at the point of creation.

“Using machine-learning we have been able to get much better at detecting signals for what could be a fake account that might be created by a real human being,” Chakrabarti said, explaining that “real” accounts have a particular usage pattern that begins with a few messages, a few friend requests, and occasional posting activity as genuine new users grow accustomed to the network.

“Often times when a human being is creating a fake account, they may send a hundred friend requests, they may send a thousand friend requests, they may join 300 groups, they may start posting in a lot of different places,” he said, describing the typical usage pattern of troll accounts. “Any one of these things may not be useful, but thanks to machine learning we can combine tens of thousands or hundreds of thousands of these signals to get a fingerprint of what’s likely to be a fake account.”
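Facebook has not published its models, but the approach Chakrabarti describes — combining many weak behavioural signals into a single fake-account score — resembles a standard supervised classifier. A minimal illustrative sketch, in which every signal name, weight and threshold is hypothetical:

```python
import math

def fake_account_score(signals, weights, bias=0.0):
    """Logistic combination of per-account behavioural signals.

    No single signal is decisive on its own; the weighted sum of many
    signals is squashed into a probability-like score in (0, 1).
    """
    z = bias + sum(weights[name] * value for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights a trained model might learn.
weights = {
    "friend_requests_first_day": 0.004,  # hundreds of requests is suspicious
    "groups_joined_first_day": 0.01,
    "posts_per_hour": 0.2,
}

# A new account exhibiting the troll-like pattern Chakrabarti describes.
new_account = {
    "friend_requests_first_day": 1000,
    "groups_joined_first_day": 300,
    "posts_per_hour": 12,
}

score = fake_account_score(new_account, weights, bias=-5.0)
flag_for_review = score > 0.9  # high score: likely fake, queue for action
```

In practice Facebook says it combines tens of thousands of such signals, and the real systems act at account-creation time rather than after the fact; this sketch only shows why a combination of individually weak signals can be decisive where any one signal is not.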

Facebook has also worked to make fake news less profitable by down-ranking accounts that use inflammatory content to lure readers to ad-farms.

However, in November, Facebook admitted that its AI is of limited use in detecting hate speech, or in distinguishing biased messages from ‘fake news’. While AI can detect behaviour that is out of the ordinary, it still requires a lot of human intervention to deal with an issue like fake news, which means that progress is going to be slow.

The company has also appointed seven media organisations and fact checking sites — the India Today group, the Jagran Group, Factly, Boom Live, Newsmobile, and Fact Crescendo — to fact check objectionable posts. The partnership will fact-check news in English, Hindi, Bengali, Telugu, Malayalam and Marathi.

Chakrabarti and Mohan pointed out that these organisations had been certified by the International Fact-Checking Network (IFCN), a unit of the respected Poynter Institute, but declined to answer some fairly basic questions about this fact-checking operation:

  • Has Facebook mandated a minimum number of fact-checkers each organisation must commit to the initiative?
  • Do they commit to fact-checking a certain minimum number of content pieces each day?
  • How many shifts do they work in?
  • How much money is Facebook paying these organisations to fact check?
  • How did they zero-in on these seven organisations?

(HuffPost India has forwarded these queries to Facebook’s comms team and will update the copy once they reply.)

Chakrabarti said all fact checkers were provided with an online dashboard of stories that needed fact checking, and that each fact-checking organisation could choose the stories it wanted to check. This raises the prospect of organisations choosing to dismiss or debunk rumours directed at a particular party, in line with their own political convictions.

Chakrabarti sought to assuage these concerns, stating, “All the ones that are accredited with the IFCN are eligible to work with us.”

Why Facebook could struggle against fake news

Chakrabarti and Mohan insisted that while the fact-checking initiatives had understandably attracted more attention from the public, the company’s automated systems were far more likely to spot, and kill, fake news posts than human fact checkers.

“The behavioural signals that we look at when it comes to integrity actions are actually much more powerful — what’s the structure of a friend network, what’s the pattern of posting behaviour etcetera, those are the things that end up being much more powerful signals,” Chakrabarti said.

However, there is no way to independently verify Facebook’s claims. “The AI for anomalous behaviour will stop badly coded automation, but not real humans engaging in abusive behaviour,” said Kiran Jonnalagadda, co-founder and CTO at HasGeek, who describes himself as a ‘social technologist’. “The fact is our family groups are still full of fake news. WhatsApp has done nothing to stop it.”

Apar Gupta, a lawyer and executive director of the Internet Freedom Foundation added, “Platforms are only showing the deployments from their side in terms of the expected outcome, but they don’t share the guidelines and process followed by the fact-checkers. They won’t give a practical demonstration of their tech or show the source code. They are not transparent in what they do, and there is no real, meaningful accountability as opposed to a buzzword.”

“These are the two main problems with how platforms are approaching this problem. Nobody expects Facebook to do a miracle, and we are also to a large extent thinking of Facebook as an entity to which we should extend good faith, but for that, they need to be completely transparent, which they are not doing.”

In May 2018, HuffPost India reported that Facebook-empanelled fact-checker BOOM Live could only check a total of 30 pieces of content in the course of the month-long election campaign. The partnership, BOOM Live executives told HuffPost India, meant they could hire just two additional fact checkers who worked single shifts to sift through the endless torrent of lies, misinformation, and communally inflammatory content posted on the Facebook platform each day.

Chakrabarti also declined to provide any specific details on how the company’s WhatsApp messaging service hoped to curb fake news, noting that he did not work on the product.

“At a high level, I think the WhatsApp team has made substantial changes to the platform, starting in India,” Chakrabarti said. “India was the first place the WhatsApp team restricted messaging forwards to just five messages, labeling messages that have been forwarded.”

The forwarding restrictions started in India and have since been rolled out globally, he said.
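WhatsApp’s actual enforcement is built into its clients and servers and is not public, but the mechanism Chakrabarti describes — a cap on how many chats a message can be forwarded to, plus a label on forwarded copies — can be illustrated with a toy sketch. The field names are invented; only the cap of five comes from the article:

```python
FORWARD_LIMIT = 5  # the cap first rolled out in India, then globally

def forward_message(message: dict, chat_ids: list) -> list:
    """Forward a message to several chats, enforcing the forward cap
    and marking each copy so recipients see a 'Forwarded' label."""
    if len(chat_ids) > FORWARD_LIMIT:
        raise ValueError(f"can forward to at most {FORWARD_LIMIT} chats")
    copies = []
    for chat_id in chat_ids:
        copy = dict(message)
        copy["chat_id"] = chat_id
        copy["forwarded"] = True  # surfaced to recipients as a label
        copies.append(copy)
    return copies

msg = {"text": "Election rumour", "forwarded": False}
sent = forward_message(msg, ["chat_a", "chat_b", "chat_c"])
```

The design intent is friction rather than prevention: a determined user can still forward the same message repeatedly in batches of five, but each hop is slower and visibly labelled.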
