Court Allows Class Action Against Facebook Over Use Of Facial Recognition Tech

The lawsuit says the social media platform unlawfully stores people’s biometric data, pointing to its “tag suggestions” feature for photos.

A federal appeals court ruled Thursday that a group of Facebook users can move forward with their class action lawsuit challenging the company’s use of facial recognition technology.

In a 3-0 decision from the U.S. Court of Appeals for the 9th Circuit in San Francisco, a panel of judges rejected Facebook’s attempt to stop a class action from Illinois plaintiffs who claim the company unlawfully gathers and stores users’ biometric data without their consent.

Three Illinois Facebook users sued the company under a state law that says a private company can’t collect and store people’s biometric facial information without written consent.

Specifically, the case points to Facebook’s “tag suggestions” feature, which allows Facebook to use facial recognition technology to determine if users’ Facebook “friends” are in photos they posted. It scans photos and then compares faces in a given image to those in its database of users’ “face templates” (or facial features previously matched to a user’s profile), per an explanation in the ruling.

In the opinion, Judge Sandra Ikuta wrote that “the development of a face template using facial-recognition technology without consent (as alleged in this case) invades an individual’s private affairs.”

“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” American Civil Liberties Union attorney Nathan Freed Wessler said in a news release. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale.”

A Facebook spokesperson told HuffPost that the company plans to “seek further review of the decision,” adding that it has “always disclosed our use of face recognition technology and that people can turn it on or off at any time.”

Facebook has come under scrutiny in recent years for its privacy practices. Last year, CEO Mark Zuckerberg testified before Congress about a breach in which data from up to 87 million users may have been improperly shared with Cambridge Analytica, a political research firm that worked with Donald Trump’s 2016 presidential campaign. Following an investigation, federal regulators last month fined Facebook a record $5 billion for privacy violations.

Lawmakers have also been taking a closer look at the risks of private companies and government agencies using facial recognition technology. At a congressional hearing on the issue earlier this year, Rep. Alexandria Ocasio-Cortez (D-N.Y.) probed how the technology can “exacerbate” racial bias in the criminal justice system.

Facial recognition technology has drawn particular criticism for frequently misidentifying darker-skinned people. In one high-profile test by the ACLU last year, Amazon’s facial recognition tool incorrectly matched the faces of 28 lawmakers with people in mug shots, disproportionately misidentifying people of color.

Earlier this year, San Francisco became the first major U.S. city to ban the use of facial recognition technology by city agencies, including law enforcement.

“The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits,” the city ordinance says. “And the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.”
