Far-Right Activists Are Taking Their Message To Gen Z On TikTok

The video-sharing app has let hate speech flourish as a growing number of users, many of them young, flock to TikTok.

As Facebook, YouTube and Twitter continue a slow purge of users who violate their policies prohibiting hate speech, some extremists kicked off those platforms have found a safe space on TikTok, the Chinese-made video-sharing app that’s become wildly popular among Generation Z.

Facebook banned Faith Goldy, a Canadian white nationalist, from its site earlier this month for posting xenophobic and racist content. But on TikTok, where Goldy still has an active account, she freely posts xenophobic videos about undocumented immigrants and calls for them to be physically ejected. Her message is the same on the two platforms, but the presentation is different: In her sarcasm-laden TikTok videos, Goldy draws on meme culture to appeal to the app’s younger audience.

Tim “Baked Alaska” Gionet, an anti-Semitic, far-right troll who was banned from Twitter for hateful conduct in 2017, is also active on TikTok, as is British conspiracy theorist Paul Joseph Watson. An editor and contributor at conspiracy-mongering website InfoWars, Watson joined TikTok shortly after YouTube, Facebook and Twitter banned InfoWars from their platforms in 2018.

Although TikTok was designed as a platform for users to create short music videos, it has allowed more sinister content to flourish, demonstrating a failure to enforce its own policies as a growing number of users ― many of them young ― flock to the app.

After launching in China in late 2016, TikTok rapidly became a leading short-video app in the United States. Similar in some ways to Vine, which was discontinued in 2016, TikTok allows users to create 3- to 15-second videos. It has been downloaded some 800 million times globally, surpassing Facebook, YouTube and Instagram to become the world’s most downloaded app in the first half of 2018.

Hate speech on TikTok largely flew under the radar until December, when Motherboard reported that it had found examples of “blatant, violent white supremacy and Nazism,” including calls for the murders of Jews and black people. TikTok, the news site concluded, “has a Nazi problem.”

In a statement to HuffPost, a TikTok spokesperson described the issue as “a challenge for the industry as a whole.”

“For our part we continue to enhance our existing measures and roll out further protections as we work to minimize the opportunity for misuse,” the spokesperson said. “There is absolutely no place for discrimination, including hate speech, on this platform.”

Watson, who mainly uses TikTok to spread misogynistic and Islamophobic tropes, has framed criticism of the app, such as that from Motherboard, as an affront to free speech driven by “moral gatekeepers” seeking to deprive “young men, predominantly young, white men ― God forbid ― of a space in which to express themselves.”

But so-called “moral gatekeepers” don’t get to decide who or what stays on platforms. The platforms do. The existence of discriminatory, hateful content on TikTok reflects the app’s negligence in enforcing its policies, which clearly prohibit both discrimination and hate speech.

TikTok’s user guidelines describe the platform as an “inclusive community” that does not allow content that “incites hatred against a group of people based on their race, ethnicity, religion, nationality, culture, disability, sexual orientation, gender, gender identity, age, or any other discrimination.” Users are also prohibited from sharing content “that may trigger hostility, including trolling or provocative remarks.”

In the U.S., the Communications Decency Act shields platforms from liability for most user-generated content, leaving it to them to decide what material they will and will not allow on their sites. The social media giants, which have long struggled to draw a line between free speech and hate speech, are responsible for creating their own terms of use. But they’re not held accountable for ensuring that users adhere to those rules or for maintaining a safe and inclusive environment on their platforms.

“There’s nothing legally mandating that any of these companies do anything [to address hateful or extremist content], which is the hole that the Communications Decency Act has left,” said Heidi Beirich, who leads the Intelligence Project at the Southern Poverty Law Center. “The pressure to get hate speech and other violent stuff off platforms is entirely a result of either outside pressure, like bad PR, or activities that the platforms themselves have engaged in to try to reduce the amount of hate on their platforms.”

Beirich views strong content moderation as a “moral responsibility” for platforms generally and for TikTok in particular.

“TikTok panders to children,” she said. “They see cool videos, then they see racist things or content calling [white supremacist mass murderer] Dylann Roof a hero, and they’re going to end up going down a really bad path.”
