If being told to “get back in the oven” isn’t considered anti-Semitic abuse, what is?
That’s the issue Twitter is grappling with in the wake of a new report that shows a “sharp uptick” in the anti-Semitic bullying of journalists.
The Anti-Defamation League studied millions of tweets in the year before July 2016 and found that, as the election ramped up, anti-Semitic harassment of journalists significantly increased. The abuse was largely perpetrated by the same cluster of about 1,600 accounts whose owners describe themselves as white supremacists or neo-Nazis. Attacks on journalists like ex-Breitbart editor Ben Shapiro and CNN anchor Jake Tapper include slurs, Holocaust references and sloppy Photoshop jobs of people’s faces on the bodies of concentration camp victims.
The report is just the latest example of Twitter’s failure to crack down on cyberbullying ― a failure that could cost it dearly. The company may have missed out on several big buyers due to concerns over what amounts to a relatively small army of trolls. And as Twitter’s stock price continues to plummet, it needs investors more than ever.
Twitter says that it prohibits attacks or threats “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease,” and warns that violating this rule could result in suspension. But the company provides little detail about what kind of comment meets the threshold for punishment and does very little to stop anyone banned from simply making a new account. Anonymity is one of the features that have attracted some of the platform’s more than 300 million active users ― and it’s also one of the main things driving people away.
Comedian Leslie Jones famously took a break from Twitter after being subjected to an onslaught of largely unpoliced racism. Other users have shared her frustrations. In a survey of over 2,700 Twitter users, BuzzFeed found that “an overwhelming majority of reported abuse requests ended with Twitter taking no visible action toward the offending account.”
It’s not just Twitter users who are being driven away by bullying; potential buyers of the social media network are also reportedly fleeing. According to Bloomberg, Disney backed out of a potential deal partly over concerns that Twitter’s image stood at odds with the media giant’s otherwise family-friendly content. Salesforce also chose not to pursue a bid because of “the hatred,” CNBC’s Jim Cramer said.
Twitter has long acknowledged its bullying problem.
“We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” former CEO Dick Costolo told staffers in a memo last year. “It’s no secret and the rest of the world talks about it every day.”
The company unveiled new ways to deal with trolls earlier this year, but some users feel it’s still not enough. For anyone who has flagged abusive tweets and been told that Twitter was “unable to take action given that we could not determine a clear violation,” the rules can seem extremely murky.
“The only thing I get from reporting people is the knowledge that I took an action to stop it,” one user told BuzzFeed. “I don’t expect anything back from Twitter anymore, they don’t care.”
When pressed for details on Twitter’s review process, a spokesman for the company pointed to the code of conduct listed on its website. The spokesman also called into question the accuracy of the ADL study.
“We don’t believe these numbers are accurate, but we take the issue very seriously,” he said. “We have focused the past number of months specifically on this type of behavior and have policy and products aimed squarely at this to be shared in the coming weeks.”
Twitter CEO Jack Dorsey said in September that while the company currently relies on human moderators to make judgment calls about what is and isn’t abuse, he’s hoping to eventually shift to “tech & product-based solutions” to better combat bullying.
During this summer’s Olympic Games, Twitter removed copyrighted images and videos from the site at a Michael Phelps-esque speed ― proof that the company’s technology can be effective at handling some bad behavior. But, of course, deciding something is hate speech isn’t as clear-cut as flagging an unlawful video.
“It’s difficult to police [online abuse] effectively,” said ADL National Director and CEO Jonathan Greenblatt. “It’s not like a snippet of video violating copyright ... it’s not so black and white.”
The challenge speaks to a deeper internal struggle within the company over the limits of free speech, particularly in light of the scrutiny Instagram and Facebook have faced over what some critics say is too much censorship. Facebook was sharply criticized after it took down a video posted by a Minnesota woman whose boyfriend had just been shot by a police officer during a routine traffic stop. Instagram often gets flak for censoring women’s nipples.
Ultimately, corporations aren’t beholden to the same free speech laws that bind the U.S. government, so it’s up to the company to decide what’s protected. While Twitter continues to work out where its lines in the sand fall, it may be missing the bigger picture on abusive behavior. Earlier this week, the company learned that one of its new hires was the author of a widely shared Facebook rant that referred to homeless people as “degenerates [who] gather like hyenas, spit, urinate, taunt you, sell drugs, get rowdy, they act like they own the center of the city.”
Twitter quickly fired Greg Gopman, because, if there’s one thing that a tech company dealing with trolls doesn’t need, it’s someone on staff with a well-documented history of trolling.
Emily Peck contributed reporting.