Recently, as part of its latest “journalism project,” Facebook released a new feature that you might have already seen floating around your newsfeed. It’s a small marker that labels certain articles as “disputed news,” indicating that the content within the article is questionable in accuracy or reliability.
In the age of “fake news,” this seems like an important step forward—at least on the surface. The spread of intentionally misleading or outright false information has fueled political polarization, incited real-world violence, and, some would argue, influenced the outcome of the 2016 presidential election.
But is Facebook’s new disclaimer the silver bullet that puts fake news in its place? Are we winning the battle against fake content?
How the Disputed News Tag Works
First, let’s take a look at how the disputed news tag works. Facebook users see an article and either judge the headline or click through to read the piece in full. If they find it questionable in any way, they can flag it as disputed or untrustworthy. Once an article reaches a certain threshold of user-submitted flags, it’s sent to third-party, nonpartisan, transparent review organizations like Snopes and Politifact to determine its accuracy.
If these sites determine the article to be false or misleading, Facebook then labels the article as “disputed news,” with a link to relevant reports that illustrate this. Then, any users who see the article in the future will also see the “disputed news” tag, warning them of the article’s questioned legitimacy.
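The flow described above can be sketched in code. This is a minimal illustration, not Facebook’s actual implementation: the flag threshold, status names, and the shape of the review verdicts are all assumptions made for the example.

```python
# Hypothetical sketch of the flag-and-review workflow described above.
# FLAG_THRESHOLD and the review format are illustrative assumptions.

FLAG_THRESHOLD = 100  # assumed number of user flags before third-party review


class Article:
    def __init__(self, url):
        self.url = url
        self.flags = 0
        self.status = "normal"  # "normal" | "under_review" | "disputed"
        self.fact_check_links = []

    def flag(self):
        """A user marks the article as questionable."""
        if self.status != "normal":
            return
        self.flags += 1
        if self.flags >= FLAG_THRESHOLD:
            # Enough flags: hand the article off to fact checkers.
            self.status = "under_review"

    def apply_review(self, verdicts):
        """verdicts maps a fact-checker name to (is_false, report_url)."""
        if self.status != "under_review":
            return
        reports = [url for is_false, url in verdicts.values() if is_false]
        if reports:
            # Label as disputed and attach links to the relevant reports.
            self.status = "disputed"
            self.fact_check_links = reports
        else:
            # Clean review: return to normal circulation.
            self.status = "normal"
            self.flags = 0
```

Note that nothing happens until the user-flag threshold is crossed, which is exactly the delay problem discussed below: the article circulates freely in its “normal” state the whole time.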
There are some clear benefits to this system, but there are a number of flaws preventing it from dealing a crushing blow to the fake news epidemic.
One of the biggest problems is the time it takes to mark a disputed article as fake news. It may take days, or even longer, for enough users to mark a story as fake before Facebook sends it to its bank of fact-checking organizations. From there, it may take another few days of research before the article can be marked with a “disputed news” tag.
By then, many people have likely already seen the article and formed an opinion about its subject matter. Much of the damage has already been done, and often can’t be reversed.
Subtlety in Nomenclature
Some have also taken issue with Facebook’s phrasing. “Disputed news” is much gentler than something like “fake news” or the even blunter “LIE.” Obviously, this wording protects the interests of Facebook and the fact checkers by conservatively estimating their own reliability in evaluating an article’s truth.
If a “disputed” article somehow ends up being true, there’s no harm—after all, it was only disputed, not disproven. The problem is, the wording gives people more room for belief in the face of contradicting evidence (or an absolute lack of evidence), i.e., “it’s disputed, but that doesn’t mean it’s untrue.”
Using controversial content and polarizing views is a surefire way to gain followers on any social platform, not just Facebook. Yet while Politifact and Snopes constantly comb through information on their own to evaluate legitimacy, Facebook’s new tool only works on Facebook.
Of course, it’s not Facebook’s job to fact check articles on Twitter or any other source, but more social platforms and publishers should be getting involved in this process if we’re going to overcome this crisis in journalism.
Finally, this entire system depends on users: only user flags can surface an article before the fact-checking sites pick it up. The problem is that we’re so entrenched in social media echo chambers that fake news stories circulate mostly within circles that will assume they’re true, because the stories confirm beliefs those readers already hold.
This creates a bigger delay, and makes it harder to pinpoint articles that shouldn’t be circulating.
Can We Do Better?
There are a lot of problems with Facebook’s disputed news system, but what could we really change? There are a handful of obstacles preventing us from creating something better:
- Censorship. We could remove disputed news articles from circulation entirely, stopping them in their tracks, or label them as “PROVEN FALSE” or as “LIES.” But at that point, we’d be leaning toward censorship, rather than the provision of more information. Censorship is problematic for many reasons, including the suppression of personal liberties.
- User alienation. Taking a harder stance on fake news could push people away from the platform entirely. Obviously, that’s bad for Facebook’s profitability, but it would also further entrench us in the political and cultural divides we’ve created for ourselves. The goal is to help people better separate fact from fiction, not to hand out credit or blame for being right or wrong.
- A monopoly on truth. Facebook is working with outside organizations, but it still has the final say over its own systems. It’s a bad idea for any single organization to go “too far” in controlling which news stories circulate and which ones are trustworthy. That could feasibly create a monopoly on truth, with potentially grave consequences.
A Step in the Right Direction
For all its current flaws, the disputed news system is still a step in the right direction. It’s imperfect, and doesn’t cover as much ground as I’d like to see, but if it makes some impact on user impressions and the overall quality of the content that hits our newsfeeds, it’s worth the effort it’s taken to roll out.
I look forward to even better content-quality systems from Facebook and other content platforms in the future. Until then, it’s on us to stay wary, producing and sharing the best content we can.