The effort is an iteration of a similar “war room” Facebook set up in the U.S. for the 2018 midterms. A couple dozen data scientists, engineers, researchers and country experts ― plus specialists in all 24 of the EU’s official languages ― will screen content on the platform in real time.
In line with the company’s new push to unify its messaging platforms, Instagram and WhatsApp will also be monitored (a welcome development, given the latter’s pesky tendency to incite mob violence, hurt public health and generally spread harmful lies around the world).
Facebook has opened a similar monitoring outpost in Singapore to keep an eye on elections in India, the company said, with 24-hour monitoring coordinated through an office in Menlo Park, California.
Back in 2018, critics argued the U.S. war room was more effective as a marketing tool and a public relations talking point than as a real-time content screening service. And Facebook is already catching similar grief in Dublin, as it has barred reporters from interviewing any of the center's employees and limited journalists to no more than a few minutes on-site.
It’s unclear just how effective these operations centers are at limiting bad actors during an election. But beyond election security, it’s clear political malfeasance continues to plague the platform.
The center is in response to the deluge of fake accounts, fake content and other disinformation efforts the company lumps together under the catchall term “coordinated inauthentic behavior,” which began in earnest ahead of the 2016 U.S. election.
While the social media platform has upped its game since then ― at the time, CEO Mark Zuckerberg famously dismissed as “crazy” the notion that fake content on Facebook could influence an election ― the company still hasn’t done enough.
Well-intentioned efforts to regulate political ads aren’t working and are easily circumvented. An investigation by Politico’s European edition found on the site all manner of paid-for political messaging that violates Facebook’s rules.
In April alone, the Trump campaign ran “hundreds” of ads that violated Facebook ad policies; three far-right networks in Spain ran disinformation campaigns that spread propaganda to at least 7.4 million people ahead of elections in the country; and a widespread, supposedly grassroots pro-Brexit effort on the network was discovered to have been run in secret by a shadowy lobbying company.
Facebook had no clue about any of it ― at least, not until external sources flagged it.
“Facebook did a great job in acting fast, but these networks are likely just the tip of the disinformation iceberg — and if Facebook doesn’t scale up, such operations could sink democracy across the [European] continent,” warned Christoph Schott, campaign director at Avaaz, the nonprofit that alerted Facebook to the disinformation campaigns in Spain.
“This is how hate goes viral,” he said. “A bunch of extremists use fake and duplicate accounts to create entire networks to fake public support for their divisive agenda. It’s how voters were misled in the U.S., and it happened again in Spain.”