An Anti-Bullying Emoji Is Not Enough

Two weeks into the "I Am a Witness" campaign, it's already an afterthought. The campaign brings together some of the biggest names in social media and advocacy--Twitter, Snapchat, Google, Facebook, GLSEN and the Trevor Project--to call out cyberbullying and spur change. But in many ways this campaign is a distraction. These large media companies should be doing more to make sure that children are not being exploited, and to protect victims of revenge porn and those who are being bullied.

Recently, a Facebook user uploaded nude photos of underage teens from Maine. Yet Facebook had no legal responsibility to moderate the content or to prevent the photos from being published. Facebook--and other social media platforms--need to systematically evaluate their moderation mechanisms and introduce new reporting standards and accountability.

What the Current Law Says

It might seem that the best way to protect children from bullying and exploitation online is to place the legal burden on social media companies, even when the offensive content is created and uploaded by someone else. But a federal statute provides these companies with broad immunity from civil suits. This statute, Section 230 of the Communications Decency Act, provides that an interactive computer service (e.g., a website) cannot be treated as the publisher or speaker of third-party content. Section 230 has protected websites such as Facebook, Google, Yahoo!, Craigslist, and others from liability stemming from content posted on their sites by users. Of course, the parties who created and posted the illegal material are not immune from liability.

Though immunity under this statute is far-reaching, a website can still be held liable for content generated by others when it is involved in encouraging or developing that content.

The upshot of this law is not to give websites a complete pass; on the contrary, it is to encourage them to police their own content. Prior to the enactment of Section 230, a website could be held liable for content posted on its site when it tried to moderate offensive and obscene content but didn't catch it all. A website that did nothing, on the other hand, faced no liability, because it had no involvement (e.g., publication, removal) with the content.

Congress decided to provide broad immunity to quell the concern that a website's efforts to remove objectionable content might expose it to liability even when it was trying to be a good online citizen--for example, if it removed too much, too little, or not quickly enough. Considering the sheer volume of material posted on many websites on any given day, policing all of the content would prove impossible, so the natural response without Section 230 would be to do nothing.

Big Social

Facebook is the largest social network in the world, with 1.31 billion mobile monthly active users as of June 30, 2015. It needs to lead the industry and get the reporting feature right. Facebook currently allows users to report images for removal, but nothing stipulates how quickly it must act, or how its reporting interface should be designed and labeled.

When it comes to digital media and social media dissemination, seconds matter. Every moment harmful content sits on the Internet causes damage--more views, shares, screenshots, and downloads. This is why company takedown protocols and moderation are so important.

We Can Do Better, We Have to Do Better

Unless Section 230 is amended to narrow immunity for websites--an idea worth considering--the burden of protecting people from bullying, harassment and abuse must fall on the websites themselves. It's time for big social to take action. Many industries have self-regulatory bodies such as the Advertising Self-Regulatory Council, which monitors the industry and sets policies and procedures.

Below are specific issues that should be addressed, each paired with a recommendation:

Issue: Social media products vary greatly when it comes to moderation and reporting
Recommendation: Convene a self-regulatory council to monitor the industry and set policies and procedures for the identification and immediate removal of content sexually exploiting children and revenge porn victims

Issue: Reporting language differs greatly across platforms and is often vague (e.g., "I don't like this post")
Recommendation: Standardize reporting language and mechanisms

Issue: Reporting can be delayed when users have to click 3-5 times to fully submit a photo/post
Recommendation: Move the reporting button to the root level (i.e., make reporting available at the same level as liking or sharing)

Issue: Every moment that harmful content sits on the Internet is damaging--it means more views, more shares, more screenshots, more downloads
Recommendation: Add a mechanism that automatically pulls down any image or post flagged with significant issues and republishes it after moderation if no violation is found (a rough sketch of one such workflow follows this list)

Issue: Nude photos shared without consent often appear on social media
Recommendation: Develop better photo-scanning technology that can identify potentially harmful images (the second sketch after this list illustrates the kind of hash matching such scanning could use)

Issue: It is unclear if anyone is reviewing posts that have been flagged as abusive
Recommendation: Create a moderation tool for reviewing any post with the new anti-bullying emoji

Issue: It can be difficult to find contact information when issues and questions arise during reporting
Recommendation: Provide additional contact information and instructions for expediting the reporting of images, especially those that exploit children
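
To make the flag-and-hold recommendation above more concrete, here is a minimal Python sketch of one way such a workflow could work. Everything in it is an assumption for illustration--the three-report threshold, the Post and ModerationQueue names, and the rule of restoring content a moderator clears--not a description of any platform's actual system.

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 3  # assumed number of serious reports before an automatic hold

@dataclass
class Post:
    post_id: str
    visible: bool = True
    serious_flags: int = 0

class ModerationQueue:
    def __init__(self) -> None:
        self.pending: dict[str, Post] = {}

    def report(self, post: Post, serious: bool) -> None:
        """Record a user report; hide the post once the threshold is reached."""
        if serious:
            post.serious_flags += 1
        if post.serious_flags >= FLAG_THRESHOLD and post.visible:
            post.visible = False               # pull the content down immediately
            self.pending[post.post_id] = post  # ...and queue it for human review

    def review(self, post_id: str, violates_policy: bool) -> None:
        """A moderator's decision: keep it down on violation, otherwise republish."""
        post = self.pending.pop(post_id)
        if not violates_policy:
            post.visible = True  # republish content that was flagged in error

# Usage: three serious reports hide the post; a moderator later restores it.
post = Post("p1")
queue = ModerationQueue()
for _ in range(3):
    queue.report(post, serious=True)
assert post.visible is False
queue.review("p1", violates_policy=False)
assert post.visible is True
```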
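
The photo-scanning recommendation could build on hash matching, in which an upload is compared against hashes of images already confirmed as abusive so that resized or re-encoded copies are still caught. The toy average hash below is only a sketch of the idea; the pixel grid, registry entry, and three-bit threshold are invented, and production systems rely on far more robust perceptual hashes and additional signals.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Build a bit-per-pixel hash: 1 where a grayscale pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means visually similar images."""
    return bin(a ^ b).count("1")

# Hypothetical registry of hashes for images already confirmed as abusive.
KNOWN_ABUSIVE_HASHES = {0b1100110000110001}
MATCH_THRESHOLD = 3  # assumed: at most 3 differing bits counts as a match

def flag_for_review(pixels: list[list[int]]) -> bool:
    """Return True if an upload is close enough to a known abusive image."""
    h = average_hash(pixels)
    return any(hamming_distance(h, known) <= MATCH_THRESHOLD
               for known in KNOWN_ABUSIVE_HASHES)

# Usage: a 4x4 grayscale thumbnail hashes to within one bit of the registry entry.
upload = [[200, 180, 30, 20],
          [190, 170, 25, 15],
          [40, 35, 210, 220],
          [30, 25, 205, 215]]
print(flag_for_review(upload))  # True: the upload matches a registered hash
```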

Anti-bullying campaigns are important, and a good start, but what we really need is for the social media industry to take an active role in ensuring that inappropriate comments and images never make it onto the Internet, and that when they do, they are removed as expeditiously as possible.
