Trolls fill Twitter with abusive language and other forms of harassment, but a new report has identified some key ways the site can beat them back.
It's a problem that's long burdened the popular social network -- so much so that its CEO, Dick Costolo, flatly acknowledged "We suck at dealing with abuse" in a leaked internal memo earlier this year. A lot of people would probably agree with him: The network has been used for death threats, revenge porn (sexually explicit pictures or videos shared without the consent of the person featured) and doxxing (the leaking online of someone's private information, like home address or phone number).
Enter the Women, Action, & The Media (WAM!) group. With Twitter's blessing, the nonprofit launched a new tool last fall that helped individuals submit detailed harassment reports -- even on behalf of others.
Twitter does have its own dedicated page for reporting abusive behavior. WAM!'s tool was separate, hosted on the group's own website, but it allowed people to submit more detailed accounts of what they were experiencing -- across a greater range of categories than Twitter itself previously offered.
The group collected those reports and, if necessary, elevated them to workers at Twitter who could take action against abusive users. The goal was to gather data on harassment and present the findings in hopes of improving how abuse is handled on the social network. Once the experiment was complete, WAM! deactivated the tool.
Now, the results are in.
WAM! released its complete report Wednesday. The organization collected 811 harassment reports while its tool was online last fall. Of those reports, 317 were determined to be "genuine" and 161 were serious enough to be brought to Twitter's attention. Twitter took action on 55 percent of those escalated reports.
Wait -- why not the full 100 percent? That's not entirely clear.
"We have limited data as to why Twitter may have declined escalated reports. It's definitely an area where greater clarity -- not only on how Twitter defines harassment, but also on how it reviews reports -- would be helpful," Amy Johnson, a lead author of the report, told The Huffington Post via email.
Nathan Matias, a co-author, said in a statement that the nonprofit escalated certain reports to Twitter that weren't yet covered by the social network's policies. Revenge porn and doxxing, for example, were not formally banned by Twitter until this March.
Matias also said that reports could be confounded because Twitter requires "evidence" -- a web address linking to an abusive tweet, for example. Because tweets are so easy to delete quickly, an abusive individual could wipe away evidence before the report was escalated.
"All of which is to say: a WAM! reviewer's decision to escalate reports was made at a different point, with different information, than a Twitter reviewer's decision to decline to take action," Johnson told HuffPost.
After considering its data, WAM! came up with four recommendations for how Twitter can battle abuse on its platform, among them:
- Develop new policies, especially to combat "tweet and delete" tactics, which make it difficult to report harmful actions that are later removed from the platform.
Twitter did not respond to requests for comment on the release of the report. The company did recently update its community standards, though some, like feminist critic Anita Sarkeesian and actress Ashley Judd, characterized the changes as little more than a first step in the right direction.
"The method to report is staggeringly inadequate," Judd said at a panel last month.
WAM!'s complete report is available on the organization's website.
CORRECTION: A previous version of this story stated incorrectly that Amy Johnson is a lead researcher for WAM!. She is a lead author of WAM!'s report but is otherwise unaffiliated with the organization.