Facebook Monitoring Failure Helped Spread Christchurch Hate Around The World

Company's touted policing system was no match for a live-streamed video and dedicated right-wing extremists.

One of the most horrific massacre videos in human history was live-streamed on Facebook for 17 excruciating minutes — despite recent boasts from executives that the company has become a crack content monitor. The video wasn’t yanked until New Zealand police informed Facebook that a gunman was live-streaming Thursday’s attack on two mosques that killed 49 Muslims.

“Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Facebook said in a statement. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware.”

But it was too late. The video had already been snatched by right-wing extremists and replicated like a cancer across the internet, including, again and again, on Facebook.

YouTube — a flourishing new option for white supremacists — and Twitter reportedly took hours to remove the video from their platforms. Some media outlets also repeatedly broadcast clips from the video, despite police pleas that they not do so.

Police told reporters that they were also investigating a report that a Facebook page may have carried a warning of the attack.

What happened to the new and improved Facebook content police?

Social media companies rely on sophisticated artificial intelligence systems — backed up by human monitors — to catch offensive content. But, clearly, those systems are hardly foolproof.

Facebook’s chief technology officer, Mike Schroepfer, just boasted to Fortune magazine in an interview published Thursday that the company’s AI system can distinguish pictures of broccoli from pictures of marijuana with 90 percent accuracy. But with billions of posts, a 10 percent failure rate is massive, a tech expert explained to CNN: at that scale, it means hundreds of millions of posts judged incorrectly.

It’s also too big a challenge for an AI system to spot a problem in a livestream video or a first-time upload, Peng Dong, co-founder of content-recognition company ACRCloud, told Time magazine. Typically, a problem video must first be uploaded and indexed so a system can compare incoming scenes against the known footage.
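The gap Dong describes is easier to see in code. Below is a minimal, illustrative Python sketch of reference-based matching — not Facebook’s or ACRCloud’s actual system — in which each frame is reduced to a simple perceptual fingerprint and a re-upload can only be flagged once the original footage has been indexed. The function names and the toy "frames" are invented for this example.

```python
# Illustrative sketch of reference-based video matching (hypothetical, not any
# platform's real system). The point: a re-upload can only be flagged once a
# fingerprint of the original footage exists, which is why a brand-new
# livestream is hard to catch automatically.

from typing import List

def average_hash(frame: List[List[int]]) -> int:
    """Compute a simple perceptual hash of a grayscale frame.

    Each pixel is compared to the frame's mean brightness; pixels above the
    mean contribute a 1 bit, pixels below contribute a 0 bit.
    """
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_footage(frame: List[List[int]],
                          reference_hashes: List[int],
                          threshold: int = 5) -> bool:
    """Return True if a frame is close to any previously indexed fingerprint."""
    h = average_hash(frame)
    return any(hamming_distance(h, ref) <= threshold for ref in reference_hashes)

# Toy usage: until the original video has been reviewed and indexed,
# nothing matches.
reference_hashes: List[int] = []                  # empty before any review
new_frame = [[12, 200], [13, 198]]                # a tiny stand-in "frame"
print(matches_known_footage(new_frame, reference_hashes))   # False — no reference yet

reference_hashes.append(average_hash(new_frame))  # footage indexed after review
print(matches_known_footage(new_frame, reference_hashes))   # True — copies now match
```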

Police were forced to rely on an old-fashioned plea for decency — on Twitter — in a bid to stem the spread. “There is extremely distressing footage relating to the incident in Christchurch circulating online,” the police tweeted. “We would strongly urge that the link not be shared.”
