The One Hire Facebook Really Needs To Make To Curb Violence

By Emily Dreyfuss for WIRED.

On Wednesday, Facebook CEO Mark Zuckerberg said his company will hire 3,000 people to watch for violent footage posted to the site. The decision comes after two horrific videos caught global attention in recent weeks: a man uploaded a video of himself murdering a grandfather in Cleveland, and a man in Thailand hanged his infant and then himself in a Facebook Live video the authorities saw too late.

They will be joining the 4,500 people Facebook already pays to screen videos flagged by users around the world as inappropriate — work that in theory could at the very least prevent such atrocities from lingering on the site. It’s a commendable response, albeit of the brute-force, ex post facto variety. “Over the last few weeks, we’ve seen people hurting themselves and others on Facebook — either live or in video posted later. It’s heartbreaking, and I’ve been reflecting on how we can do better for our community,” Zuckerberg said today. But what if such reflections had taken place more meaningfully before pushing video so heavily on the site? On a platform nearly 2 billion users strong, shouldn’t the company have better anticipated the sheer statistical inevitability that someone would use Facebook to broadcast acts of violence, and had more robust measures in place to make such notoriety-seeking harder to pull off?

But careful consideration isn’t how Silicon Valley operates. “Move fast and break things” — long Facebook’s motto — means tech companies build something cool, bring it out into the world, and see what happens. If it doesn’t work, they fix it. Failure is part of the process, as long as self-correcting iteration happens fast in response to problems that arise. But “fail fast” means something very different when people are using your product to broadcast murder.

What came to be known as the #FacebookMurder took place two days before Zuckerberg’s keynote declaring augmented reality the future of the company, a public relations nightmare that led to the surreal sight of Zuckerberg paying two sentences of lip service to the dead during an otherwise jokey celebration of technology. Hiring 3,000 additional moderators is more than lip service. The new hires will help speed the process of taking down videos. At the same time, it’s merely pumping extra manpower into a flawed system that depends upon you, the regular Jane and Joe on Facebook, to flag inappropriate videos in the first place. In the case of the Cleveland murder, no one flagged the video in which the perpetrator said he was going to kill people. None of his friends saw it. Hiring 3,000 more people to sit around waiting for that flag wouldn’t have helped.

What Facebook seems to have failed to do is design more aggressive preventative measures into its video products in the first place. It could have made the options for flagging video more obvious. It could have put warnings about “seeing something and saying something” on every live video. It could have made it harder to download video via third-party tools, which would have made sharing violent videos more difficult.

“If we’re going to build a safe community, we need to respond quickly. We’re working to make these videos easier to report so we can take the right action sooner — whether that’s responding quickly when someone needs help or taking a post down,” Zuckerberg said today. But taking the right action should also mean having better systems in place to plan for wrong actions. And such systems need to take more than technology into consideration. Facebook needs to have its own ethics team involved in product planning from the earliest stages, even if that means undercutting the “move fast” ethic that spurred the company’s ascendance.

Overreacting

Several recent problems plaguing Facebook have revealed the limits of the company’s reactive, iterative approach. When a Gizmodo reporter uncovered ideological bias in Trending Topics, Facebook overcorrected by getting rid of human curators altogether, relying on algorithms that pushed hoaxes as news. When critics argued the company had a responsibility to exert editorial control over content proliferating on the site, it failed to act until the spread of false stories helped shape the outcome of the 2016 presidential election. Not until after people started broadcasting their suicides via video did the company begin training its AI to recognize signs of suicidal ideation in people’s online activity. Today it pledged to hire more moderators. The common thread: Change comes only after the damage has been done.

In an ideal world, Facebook would have anticipated these problems and unveiled technologies that had the solutions built in. According to a spokesperson for Facebook, in mid-2016 the company held roundtable discussions with outside groups who advised about the risks of live video. The product was released in April of that year, so these discussions happened either as the product was being rolled out or after it was already public. An in-house ethics team could have fostered such discussions sooner.

“Companies can have someone like a chief ethics officer. Even in a large company the ethics team doesn’t need to be large,” says Susan Liautaud, a business consultant who advises companies and teaches business ethics at Stanford. “It’s meant to be support and additional brain power behind the companies’ effort to do innovation.”

Last fall, WIRED asked Zuckerberg about a similar kind of team. At the time, Facebook was building new tools to aid disaster relief, and we asked whether he would ever hire a dedicated team of social scientists to consider the site’s social impact full-time. “We have a data science team that does academic research,” Zuckerberg said then. “And I think a lot of people at the company have a multi-disciplinary approach. I mean, my background: I studied psychology and computer science.”

Clearly Zuckerberg has a certain intuitive sense of human psychology to have built a service so many people find compulsively usable. But that’s not the same as experts working to anticipate issues that go beyond the technical. Facebook does all kinds of research—artificial intelligence, data science, its users’ emotional vulnerabilities. And some of these hard scientists have social science backgrounds, like Zuckerberg himself. But these people are dispersed throughout the product and research teams. That’s useful but different than having a dedicated ethicist whose only job is to worry and advise about these matters.

“This is not a Facebook problem. This is a 2017 problem,” says Liautaud, who believes every consumer tech company in the world faces this need. “They all should be having ethics discussions in real time. But innovation-friendly ethics. I don’t think it needs to be a barrier.” Today, Facebook announced it will hire 3,000 people to fight the scourge of violent video. Facebook should start looking for an ethics officer and make that number 3,001.
