Critics Demand Action From Facebook In Fight Against ISIS

Can the world's biggest social network do more to combat terrorism? A petition says yes.
Artwork accompanying a Change.org petition calling for Facebook to do more against ISIS. (Matthieu Meron for Rue 89/Change.org)

A new Change.org petition calling for Facebook to take more action against Islamic State sympathizers has garnered nearly 70,000 signatures online since it launched on Saturday. But when it comes to shutting down hateful, violent messages linked to the terrorist group, the social network may only have so much power.

The petition was started by Julie Guilbault, a 37-year-old web designer who was inundated with abusive messages from Islamic State-aligned spam bots and individuals when she tried to use the #rechercheParis hashtag. The hashtag was supposed to help people locate missing loved ones after the terror attacks in the French capital, but Guilbault told The Huffington Post that it instead became flooded with jihadist propaganda.

"There are literally dozens of stories telling how, on Facebook, you just have to post a few keywords about Islam, Muslims, to have jihadist content appearing in your feed, or even to have someone talking directly to you, and trying to recruit you," Guilbault explained via email.

"It’s simply unbearable to think Facebook, the largest, most popular social network in the world, can set up algorithms to detect nudity, even on a painting, and put it offline in a matter of minutes or hours, and can’t do the same on jihadist propaganda or gore images," she added.

The problem is that Facebook reportedly doesn't rely on algorithms alone to detect nudity -- or any other shocking content. Outlets like Wired have profiled the laborers whom the social network pays to sift through the harshest content on the site: porn, gore and unfortunate mixes of the two.

Automating this process successfully is something of a holy grail for social media companies. Imagine how much better the world would be without hate campaigns and the like, amplified across the Internet by armies of spam bots and dedicated humans.

Finn Brunton, an assistant professor of media, culture and communication at New York University and co-author of the book Obfuscation: A User's Guide for Privacy and Protest, told HuffPost that automatic content moderation is anything but a straightforward process.

"Someone who is re-posting jihadist imagery might be doing so because they're an ISIS recruiter or an ISIS bot, or it might be because they're furious, or they're a journalist, or they want to say these are the people they're fighting," Brunton said in a phone interview. "For Facebook to delete their account and then deal with confirming they're not a spammer and then to deal with the outrage … it would be a nightmare."

In other words, people post things for all sorts of reasons. When Facebook updated its guidelines on hate speech earlier this year, it made that clear.

"Sometimes people share content containing someone else's hate speech for the purpose of raising awareness or educating others about that hate speech," the updated community standards state.

Facebook has also been somewhat ahead of the curve when it comes to combating terrorism, having imposed a blanket ban on "terror-related content" more than five years ago.

Still, much of that enforcement happens only because individual users report problematic content.

In a statement to HuffPost, one the company has issued previously, a Facebook spokesperson said Friday: "There is no place for terrorists on Facebook. We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism. We have a community of more than 1.5 billion people who are very good at letting us know when something is not right. We make it easy for them to flag content for us and they do. We have a global team responding to those reports around the clock, and we prioritize any safety-related reports for immediate review."

Of course, to hear Guilbault tell it, there's plenty of content that praises and supports terrorism on Facebook.

"I just want them [Facebook] to realize that they are used as weapons in a war against all the Western world," Guilbault said. "In the war against ISIS, they have a real role to play."
