The Case for Censoring Hate Speech

Stricter regulation of Internet speech will not be popular with the libertarian-minded citizens of the United States, but it's necessary.
For the past few years, public speech has moved online, leading to fierce debates about its regulation. Most recently, feminists have led the charge to purge Facebook of misogyny that clearly violates its hate speech code. Facebook took a small step two weeks ago, creating a feature that removes ads from pages deemed "controversial." But such a move is half-hearted; Facebook and other social networking websites should not tolerate hate speech and should, in the absence of a government mandate, adopt a European model of expunging offensive material.
A typical view of the case against expunging hate speech comes from Jeffrey Rosen, who argues in The New Republic that:
"...given their tremendous size and importance as platforms for free speech, companies like Facebook, Google, Yahoo, and Twitter shouldn't try to be guardians of what Waldron calls a "well-ordered society"; instead, they should consider themselves the modern version of Oliver Wendell Holmes's fractious marketplace of ideas -- democratic spaces where all values, including civility norms, are always open for debate."
This image is romantic and lovely, but it's worth asking what it actually looks like in practice. Rosen offers one example:

"Last year, after the French government objected to the hash tag "#unbonjuif" -- intended to inspire hateful riffs on the theme "a good Jew ..." -- Twitter blocked a handful of the resulting tweets in France, but only because they violated French law. Within days, the bulk of the tweets carrying the hashtag had turned from anti-Semitic to denunciations of anti-Semitism, confirming that the Twittersphere is perfectly capable of dealing with hate speech on its own, without heavy-handed intervention."

It's interesting to note how closely this idea resembles free-market fundamentalism: simply get rid of any coercive rules and the "marketplace of ideas" will naturally produce the best result. But consider the evidence. Humboldt State University compiled a visual map charting 150,000 hateful insults aggregated over the course of 11 months in the U.S., pairing Google's Maps API with a series of the most homophobic, racist and otherwise prejudiced tweets. The map's existence calls into question the notion that the Twittersphere can organically combat hate speech; hate speech is not going to disappear from Twitter on its own.
The negative impacts of hate speech cannot be mitigated by the responses of third-party observers, because hate speech aims at two goals. First, it is an attempt to tell bigots that they are not alone. Frank Collin -- the neo-Nazi at the center of National Socialist Party of America v. Skokie (1977) -- said, "We want to reach the good people, get the fierce anti-Semites who have to live among the Jews to come out of the woodwork and stand up for themselves."
The second purpose of hate speech is to intimidate the targeted minority, leading them to question whether their dignity and social status are secure. In many cases, such intimidation is successful. Consider the number of rapes that go unreported. Could this trend possibly be impacted by Reddit threads like /r/rapingwomen or /r/mensrights? Could it be due to the harassment women face when they even suggest the possibility that they were raped? The rape culture that permeates Facebook, Twitter and the public dialogue must be held at least partially responsible for our larger rape culture.
Reddit, for instance, has become a veritable potpourri of hate speech; consider Reddit threads like /r/nazi, /r/killawoman, /r/misogny, /r/killingwomen. My argument is not that these should be taken down because they are offensive, but rather because they amount to the degradation of a class that has been historically oppressed. Imagine a Reddit thread like /r/lynchingblacks or /r/assassinatingthepresident. We would not argue that we should sit back and wait for this kind of speech to be "outspoken" by positive speech; we would argue that it should be entirely banned.
American free speech jurisprudence relies upon the assumption that speech is merely the extension of a thought, and not an action. If we consider it an action, then saying that we should combat hate speech with more positive speech is an absurd proposition; the speech has already done the harm, and no amount of support will dispel the victim's impression that they are not truly secure in this society. We don't simply tell the victim of a robbery, "Hey, it's okay, there are lots of other people who aren't going to rob you." Similarly, it isn't terribly useful to tell someone who has just had their race, gender or sexuality defamed, "There are a lot of other nice people out there."
Those who claim to "defend free speech" when they defend the right to post hate speech online have it backwards. Free speech isn't an absolute right; no right is weighed in a vacuum. The courts have imposed numerous restrictions on speech: fighting words, libel and child pornography are all banned. Other countries merely go one step further, banning speech intended to intimidate vulnerable groups. The truth is that such speech does not democratize discourse; it monopolizes it. Women, LGBTQ individuals and racial or religious minorities feel intimidated and are left out of the public sphere. On Reddit, for example, women have left or changed their usernames to sound more male lest they face harassment and intimidation for speaking on even the most gender-neutral topics. Even outside the intentionally offensive subreddits (e.g., /r/imgoingtohellforthis), misogyny is pervasive; I encountered it when browsing /r/funny.

Those who try to remove this hate speech have been criticized from left and right. At Slate, Jillian York writes, "While the campaigners on this issue are to be commended for raising awareness of such awful speech on Facebook's platform, their proposed solution is ultimately futile and sets a dangerous precedent for special interest groups looking to bring their pet issue to the attention of Facebook's censors."

It hardly seems right to qualify a group fighting hate speech as an "interest group" trying to bring their "pet issue" to the attention of Facebook censors. The "special interest" groups she fears might apply for protection must meet Facebook's strict community standards, which state:

While we encourage you to challenge ideas, institutions, events, and practices, we do not permit individuals or groups to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.

If anything, the groups to which York refers are nudging Facebook towards actually enforcing its own rules.

People who argue against such rules generally portray their opponents as perched atop a slippery slope, asking "what next?" We can answer that question: Canada, England, France, Germany, the Netherlands, South Africa, Australia and India all ban hate speech, yet none of these countries has slipped into totalitarianism. In many ways, such countries are more free when you weigh the negative liberty to express harmful thoughts against the positive liberty that is suppressed when you allow for the intimidation of minorities.
As Arthur Schopenhauer said, "the freedom of the press should be governed by a very strict prohibition of all and every anonymity." With the Internet, however, public dialogue has moved online, where hate speech is easy to produce and anonymous.
Jeffrey Rosen argues that norms of civility should be open to discussion, but, in today's reality, this issue has already been decided; impugning someone because of their race, gender or orientation is not acceptable in a civil society. Banning hate speech is not a mechanism to further this debate because the debate is over.
As Jeremy Waldron argues, hate speech laws prevent bigots from, "trying to create the impression that the equal position of members of vulnerable minorities in a rights-respecting society is less secure than implied by the society's actual foundational commitments."
Some people argue that the purpose of laws that ban hate speech is merely to avoid offending prudes. No country, however, has mandated that anything be excised from the public square merely because it provokes offense, but rather because it attacks the dignity of a group -- a practice the U.S. Supreme Court called "group libel" in Beauharnais v. Illinois (1952). Such a standard could easily be applied to Twitter, Reddit and other social media websites. While Facebook's policy as written should be a model, its enforcement has been shoddy. Again, this isn't an argument for government intervention. The goal is for companies to adopt a European-model hate speech policy, one aimed not at expunging offense, but at expunging hate. Such a system would be subject to outside scrutiny by users.
If this is the standard, the Internet will surely remain controversial, but it can also be free of hate and allow everyone to participate. A true marketplace of ideas must co-exist with a multi-racial society open to people of all genders, orientations and religions, and it can.

