The Reason You Saw The Virginia Shooting Video, Even If You Didn’t Want To

A recent Twitter update is meant to force viewers to see video advertisements, but there's no opt-out button for violent content.

When Vester Lee Flanagan allegedly shot and killed two journalists during a broadcast on Roanoke’s WDBJ7 news station on Wednesday, two sets of viewers unintentionally watched the footage of the slaying.

One was the small group of people watching the morning news, who saw the shooting during the live broadcast. A second group watched the video on Twitter, where it was uploaded to an account that appears to have belonged to the shooter.

Twitter quickly deleted the account, but not before users saw the video in their feeds and, in many cases, retweeted it to an ever-growing audience. Thanks to Twitter's handy new "autoplay" update, largely meant to benefit advertisers, users didn't even have to click for the video to start playing.


Unlike YouTube, or sites like LiveLeak that are geared toward graphic footage, Twitter now plays videos automatically, without any action by the user, as soon as they appear on screen. While Twitter only launched the feature in June, the company has been experimenting for years with ways to get users to watch videos more quickly -- before autoplay, the motion required to play a video had already been reduced to a single tap.
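Viewport-triggered playback of this kind generally works by measuring how much of a video is visible before starting it. Here is a minimal sketch of that logic -- an illustration only, not Twitter's actual implementation; the function name and the 50% visibility threshold are assumptions:

```typescript
// Illustrative sketch of viewport-triggered autoplay logic.
// Not Twitter's actual code; names and the 50% threshold are assumed.
function shouldAutoplay(
  videoTop: number,      // video's top edge, in page coordinates
  videoHeight: number,   // video's rendered height
  scrollTop: number,     // current scroll offset of the viewport
  viewportHeight: number,
  threshold = 0.5        // fraction of the video that must be visible
): boolean {
  const videoBottom = videoTop + videoHeight;
  const viewportBottom = scrollTop + viewportHeight;
  // Height of the overlap between the video and the visible viewport.
  const visible = Math.max(
    0,
    Math.min(videoBottom, viewportBottom) - Math.max(videoTop, scrollTop)
  );
  return visible / videoHeight >= threshold;
}

// A fully visible video starts playing; one far below the fold does not.
console.log(shouldAutoplay(0, 100, 0, 800));    // true
console.log(shouldAutoplay(2000, 100, 0, 800)); // false
```

In a real browser this check would run on scroll events or via an API like IntersectionObserver; the point is simply that playback begins the moment enough of the video scrolls into view, with no tap required.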

When Twitter converted to autoplay, it framed the update as a way of allowing users to better experience events in real time. “The extra effort meant you could miss something that you care about,” a company blog post read.

That’s code for: extra tapping means you might miss an advertisement.

Self-playing videos are a boon for advertisers because they’re immediately attention-grabbing, forcing you to watch something you might otherwise avoid, like an advertisement, or a televised killing. A video already in motion draws attention in a saturated social feed. And catching a viewer for just a sliver of an advertisement counts as an impression to brands -- which is why advertisers have been pressuring platforms to adopt autoplay.

On Instagram and Vine, autoplay is the standard setting, while Facebook self-starts videos when they're being viewed from a mobile device. (A user named Bryce Williams, which was Flanagan's on-air name, uploaded the videos of the shooting to Facebook, which eventually removed the footage.) When Twitter launched autoplay, it drew criticism for making the data-draining setting standard -- and a bit tricky to disable. More backlash came after a glitch diverted all users, including those who had opted out, back to autoplay.

Most social networks walk a fine line when it comes to sorting through graphic and violent content -- particularly Twitter, which disproportionately traffics in breaking news. Whether it’s footage of officer Ahmed Merabet being shot in Paris during the Charlie Hebdo attacks or the images of James Foley’s execution at the hands of the Islamic State, the violent imagery of the day will, eventually, be posted on Twitter.

Which is why it’s odd that Twitter’s official rules devote only a single sentence to graphic content:

Graphic Content: You may not use pornographic or excessively violent media in your profile image, header image, or background image.

Twitter's media guide provides a bit more context. "We do not mediate content," the guide says, but "media that is marked as containing sensitive content will have a warning message that a viewer must click through."

After images of Foley’s death flooded the network in August 2014, Twitter removed them upon his family's request, and added an update, oddly enough, to its privacy policy:

In order to respect the wishes of loved ones, Twitter will remove imagery of deceased individuals in certain circumstances. Immediate family members and other authorized individuals may request the removal of images or video of deceased individuals, from when critical injury occurs to the moments before or after death, by sending a request to Twitter Inc. via our privacy form. When reviewing such media removal requests, Twitter considers public interest factors such as the newsworthiness of the content and may not be able to honor every request.

The company did not immediately respond to requests for comment.

Notably, Twitter doesn’t make a blanket promise to remove images of all deaths, or even of all violent deaths. The company's announcement that it “considers public interest factors such as the newsworthiness of the content” when evaluating whether material should be taken down led many to criticize the platform for editorializing.

“Twitter is not an editorial outfit," Jay Caspian Kang wrote in The New Yorker, adding that "it's odd to think that a company that allows thousands of other gruesome videos, including other ISIS beheadings, would suddenly step in."

Yet increasingly, Twitter is becoming an editorial company. Its new Project Lightning, for instance, will deploy editor-curators to ferret out noteworthy posts during breaking news events.

The relatively new autoplay function ratchets up the stakes. Thanks to autoplay, users will be confronted with not just images, but also videos of whatever horrific, gruesome event is occupying the news cycle. And Twitter's double-speak guidelines -- which promise not to remove content, unless the company decides to, and to post a trigger warning before scary videos, unless Twitter doesn't get to it quickly enough -- aren't particularly helpful.

Ultimately, Twitter has to standardize its procedures for moderating violent content. Especially now that, thanks to autoplay, that moderation is happening in real time.
