If you’ve been rating things on Facebook as false, the company has been quietly rating you, too.
As part of a broader effort to reduce the spread of misinformation, the social media platform has been calculating and assigning reputation scores to users who report content as fake, the company confirmed to The Washington Post. Facebook then takes a user’s score into account when the individual flags future stories as false or misleading.
“If someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true,” Facebook product manager Tessa Lyons explained to the Post.
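Facebook hasn't published how this weighting is computed, but the idea Lyons describes — trust flags more when a user's past flags were confirmed by fact-checkers — can be sketched in a few lines. Everything below is a hypothetical illustration; the function name, the smoothing, and the zero-to-one scale are assumptions, not Facebook's actual formula.

```python
# Toy sketch (hypothetical, not Facebook's implementation): weight a user's
# "false news" flags by how often their past flags were confirmed false
# by fact-checkers.

def flagger_weight(confirmed_flags: int, total_flags: int, prior: float = 0.5) -> float:
    """Return a weight in [0, 1]: the fraction of past flags confirmed false,
    smoothed toward a neutral prior so new users start near the middle."""
    # Laplace-style smoothing: one pseudo-flag at the neutral prior.
    return (confirmed_flags + prior) / (total_flags + 1)

# A careful flagger (9 of 10 flags confirmed) gets a much higher weight
# than an indiscriminate one (10 of 100 confirmed).
careful = flagger_weight(9, 10)
spammy = flagger_weight(10, 100)
```

The smoothing term is one simple way to handle the cold-start problem Lyons implies: a user with no history shouldn't be fully trusted or fully ignored.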
Facebook began ranking news outlets by trustworthiness earlier this year, aiming to show highly trusted outlets in the News Feed more often than their less-reputable counterparts. The latest scoring system is an offshoot of the same effort to curtail misinformation, but it's an entirely separate metric based on different signals, a representative told HuffPost.
The rating only applies to people who flag content as fake, and is meant to weed out users who flag everything.
Lyons told the Post it’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.”
A Facebook spokesperson confirmed to HuffPost that the company is keeping an eye on user behavior for the purpose of establishing credibility, but disputed the Post’s characterization of it as a broad assessment of trustworthiness on par with, say, a credit score.
“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong, and the headline in the Washington Post is misleading,” the company said in an emailed statement. (The Post’s headline is: “Facebook is rating the trustworthiness of its users on a scale from zero to 1.”)
“What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible,” the Facebook statement continued.
The company declined to elaborate on what other factors could affect how Facebook perceives someone’s credibility. Similar to someone’s reputation in real life, the score isn’t fixed and adapts over time.
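One common way a score like this can "adapt over time" is an exponential moving average, where each new outcome nudges the score up or down. This is purely a hypothetical illustration of that general technique; Facebook has not disclosed its update rule, and the function and rate below are assumptions.

```python
# Hypothetical sketch of a score that adapts over time via an
# exponential moving average (not Facebook's disclosed method).

def update_score(score: float, flag_confirmed: bool, rate: float = 0.1) -> float:
    """Nudge the score toward 1 when a user's flag is confirmed false,
    and toward 0 when the flagged article turns out to be true."""
    target = 1.0 if flag_confirmed else 0.0
    return (1 - rate) * score + rate * target

# Starting from a neutral 0.5, a mix of good and bad flags
# moves the score gradually rather than all at once.
score = 0.5
for confirmed in [True, True, False, True]:
    score = update_score(score, confirmed)
```

Because `rate` is small, no single flag dominates — which mirrors the Post's description of a score that shifts gradually, like a real-world reputation.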
This latest revelation comes during a tense year for Facebook, amid reports that the company inadequately safeguarded user data, enabled informational warfare by Russia, and failed to limit the spread of clearly false, harmful conspiracy theories.