Racists, neo-Nazis, misogynists, anti-Semites, Holocaust deniers ... On TikTok, hatemongers like these are having a field day dodging the platform’s moderators.
Violent extremists are enjoying safe haven on TikTok, according to the “Hatescape” report, a major study by the London-based Institute for Strategic Dialogue (ISD).
The hugely popular video-sharing platform claims to have a stringent policy banning extremist content. But users “intent on using online spaces to produce, post and promote hate and extremism” are finding easy ways to bypass its auto-moderators.
And thanks to the use of popular hashtags, such material is being distributed widely. That means users who aren’t searching for hate speech - your child, for example - are likely to encounter it anyway.
The culmination of a three-month study, the report also revealed an alarming abundance of racist and terrorist content created by Australians, some of it viewed millions of times.
A video depicting a man eating a bat - referencing racist stereotypes about Chinese people - and another featuring a man in blackface impersonating murder victim George Floyd are two typical examples of offensive content by Australians, according to study author Ciaran O’Connor.
TikTok’s automated moderation systems are designed to detect and delete such material. But users have little trouble bypassing the platform’s filters. Simply changing the soundtrack on posts that violate the platform’s policy is one popular workaround. (For some reason, Australian artist Gotye’s 2011 hit “Somebody That I Used to Know” is a favourite.)
Other common tactics include changing a letter in a banned phrase or account name, or deliberately misspelling a hashtag.
A game of cat and mouse
Sometimes hate speech is edited into other videos using the app’s Stitch or Duet features as a way to evade detection.
It’s a constant game of cat and mouse - with the cat at a decided disadvantage.
The Hatescape report makes clear that TikTok has not ignored its responsibility to remove extremist material. On the contrary, the company has learned from the mistakes of other social media platforms and claims to be committed to greater transparency and more effective enforcement.
Nevertheless, notes O’Connor, “there’s an enforcement gap in TikTok's approach to hate and extremism … this content is removed, but inconsistently.”
Evidence? Of the 1,030 extremist videos the study examined, only 191 had been removed by TikTok by the end of the three-month data-collection period. The remaining 839 - 81.5% - were still live.
The report underscores “the clear need for independent oversight” of platforms like TikTok, “which currently leave users and the wider public open to significant risks to their health, security and rights.”