UNALIVING THE ALGORITHM

by Ellie Botman

“Just as we find things on the internet by following links from one place to another, language spreads and disseminates through our conversations and interactions.” - Gretchen McCulloch, Because Internet: Understanding the New Rules of Language, Riverhead Books, 2019




video by @felixmaaan
“At TikTok, we prioritize safety, diversity, inclusion, and authenticity. We encourage creators to celebrate what makes them unique and viewers to engage with what inspires them; we believe that a safe environment helps everyone do so openly.” - TikTok Community Guidelines, as of February 2022


Spend enough time on TikTok and you’ll start to notice patterns in the captions and video text you scroll past: phrases like “FAKE BODY,” tw: d*ath, “FAKE BODY DON’T DELETE !!,” unalive myself, PROP !! DON’T REPORT ME, FAKE BLOOD NOT REAL, tw: dr*gs, and FAKE KN*FE embedded within a smattering of emojis, hashtags, and tactically placed asterisks. The videos might be clickbait-y thirst traps, cinematic cosplay, viral dances, or more mundane content like clothing try-on hauls, room tours, and wild stories recounted to the camera.

There’s nothing fake about what’s being shown on camera (with the exception of some very realistic cosplay weapons); creators’ bodies might be edited or filtered, but they’re still bodies. They aren’t 3D renderings or animations. They are human bodies performing something for the camera, yet appearances and actions and accessories are labeled as artificial and unreal to assure some unseen third spectator. That third spectator, sitting between the content on the app and the audience who watches it, is TikTok itself. Or rather, TikTok’s automated, AI-driven content moderation system. This ecosystem of semi-censored language that renders content unreal and bodies alienated is a relatively new phenomenon, one that has sprung up in direct response to an ever-changing algorithmic influence which continues to shape how users interact with the app.

To understand how we got here, it’s important to look at how TikTok’s content moderation has evolved in tandem with the app’s meteoric rise in popularity over the past two years. In March 2020, The Intercept published a series of leaked internal documents from 2019 that showed instructions for moderators to identify undesirable content from users who appeared poor, had visible disabilities, or whose bodies had “ugly” or “abnormal” shapes. Since then, anecdotal experiences with ‘shadowbanning’ have persisted, and non-white, queer, plus-size, and disabled creators continue to accuse the app of discriminatory content suppression.

In July 2021, TikTok announced that it would begin using automated content moderation in a greater capacity to remove content identified as violating its Community Guidelines, particularly around sexual content, minor safety, violent or graphic content, and illegal activities. Soon after, creators began noticing videos being taken down or suppressed when relatively innocuous activities were ‘misread’ by the algorithm. Videos with obviously fake blood and other stage props, videos where a user lights something like a candle with a lighter, and videos where TikTokers speak openly (or joke) about experiences with mental health, death, or suicide were being removed for being graphic, inciting violence, or (as continued to be the case for fashion creators who didn’t fit the skinny, white, cisgender norm) promoting sexual content. As Safiya Noble notes in her 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism, “algorithmic oppression is not just a glitch in the system but, rather, is fundamental to the operating system of the web,” where IRL socioeconomic and political power structures of inequality are digitally reinforced.

On TikTok, videos are uploaded to the platform in formats that are then ‘read’ by artificial intelligence systems, which use the information provided by the user to learn more about content creators and their audiences, identifying textual and visual markers that may denote personally revealing demographics and feeding this data into the platform’s recommendation systems. Artist Trevor Paglen recently sought to define this state of automated looking that everyday people are now enveloped by. He notes that “what’s truly revolutionary about the advent of digital images is the fact that they are fundamentally machine-readable,” cutting the human observer out of the equation almost entirely. TikTok still employs teams of human moderators to review more serious cases and escalated ban or removal appeals, but this AI content monitoring happens at incomprehensible processing speeds. All we see is the aftermath: videos that are flagged or amplified, and embedded ads tailored to our preferences based on our past interactions and the content we’ve viewed. Paglen observes that, although invisible to our own human perception, “machine-machine systems are extraordinarily intimate instruments of power that operate through an aesthetics and ideology of objectivity, but the categories they employ are designed to reify the forms of power that those systems are set up to serve.” While unseen, power structures of enforcement and regulation remain within this digital infrastructure, shaping how we behave and communicate with one another.

TikTok’s Community Guidelines, which the app directs you to in the event of a violation flag or video removal, continue to be frustratingly vague, allowing for broad interpretations of what could be ‘inappropriate’ content (enforcement of which, as we’ve seen, disproportionately impacts marginalized users). It’s unsurprising that the phrase “unalive” became a stand-in for death or suicide once creators saw videos being taken down just for using those words, regardless of the context in which they were spoken or written. Users are changing the way they interact with these apps to challenge the AI powers that be. TikTok in particular has seen a rise in video content that attempts to explain how the app’s algorithm works and how to ‘hack’ it to better your chances of going viral, demonstrating a desire, especially among teens, to understand the app’s opaque corporate motivations and discern a logic within this automated virtual infrastructure through crowd-sourced information gathering.

We expect language to provide us with clarity, to define proper terminology, and to better articulate what we perceive and experience as the world changes around us. On TikTok, under constant pressure to restrict one’s online behavior and presentation in order to meet aesthetic and ideological guidelines of acceptability, we see this muting of language as a response, an attempt to render the content creator’s body illegible in the eyes of the machine. There’s no one definitive point of origin for phrases like “fake body” and “unaliving,” nor can we identify the first person who began substituting emojis for certain words or who discovered that placing asterisks in words makes them unsearchable while still readable enough for the audience. That’s not the point.
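To make the mechanics of that trick concrete, here is a minimal sketch of a naive keyword filter. It is a deliberately simplistic stand-in rather than TikTok’s actual (proprietary and far more sophisticated) moderation models, and the word list and function name are hypothetical. It shows how exact-token matching can flag an innocuous caption about fake blood while an asterisk or a coinage like “unalive” slips past unnoticed, even though a human reader parses both without effort.

```python
# A toy sketch, not TikTok's actual moderation system: a naive keyword
# filter with a hypothetical banned-word list. Exact-token matching both
# over-flags innocuous captions and misses asterisked or coined spellings.

BANNED_WORDS = {"death", "suicide", "knife", "blood", "drugs"}

def naive_flag(caption: str) -> bool:
    """Return True if any banned word appears as an exact token in the caption."""
    tokens = caption.lower().split()
    return any(token.strip('.,!?#"') in BANNED_WORDS for token in tokens)

print(naive_flag("fake blood, not real"))              # True: flagged despite the obvious prop
print(naive_flag("tw: d*ath, unalive myself"))         # False: asterisk and coinage slip past
print(naive_flag("FAKE KN*FE PROP, DON'T REPORT ME"))  # False: still legible to a human reader
```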

On a highly controlled and monitored platform like TikTok, marginalized people and their images continue to be subjected to some of the app’s greatest scrutiny. What has emerged as a consequence of years of controversial content suppression tactics are new forms of communication that disarm TikTok’s regulatory power through subversions of language. Perhaps we can reframe these subversive forms of self-moderation as acts of linguistic hacking, challenging the reach and accuracy of TikTok’s AI observers. For all of their virtual infallibility, these algorithms can still be hacked, tricked, and misled, enabling those whose content might be subject to greater policing to carve out spaces for themselves in TikTok’s endless scrolling feed. McKenzie Wark states in A Hacker Manifesto [version 4.0] that “to hack is to refuse representation, to make matters express themselves otherwise. To hack is always to produce a difference, if only a minute difference, in the production of information.” By muddling their own language, restricting their speech, and mislabelling their bodies, creators produce a splintering of gazes: the human eye can recognize the content of these videos for what it actually is, while the AI struggles to fit the text and the image into its programmed patterns of recognition, a parallel world where one’s video content is simultaneously legible and illegible to its human and artificial audiences.

Call it self-censorship, verbal glitches, or a linguistic hack: there’s no denying that TikTok has shaped the way we speak and relate to our own bodies within its algorithm-driven video content platform. Just as so much slang, terminology, and information travels across the Internet with infectious speed and viral replication, words like “unaliving” and asterisk-inflected spellings have become part of the digital lexicon of other platforms like Twitter and Instagram. People continue to label their videos as “FAKE,” performing brazen denials of reality in hopes they’ll avoid that little pop-up Community Guidelines violation notification. Even if you don’t make this content yourself, there’s a heightened awareness that you should be careful what you say and how you present yourself within this online space. Ultimately it is not a matter of if but when TikTok’s content moderation systems catch on, at least until new socio-techno pressures arise. Yet there remains a sense that we have become alienated from our physical bodies within the confines of this social media platform, a feeling exacerbated by the very linguistic slippages that have enabled bodies of all kinds, if only briefly, to circumvent forms of automated digital surveillance. We are told not to let our eyes deceive us, especially in the digital age, but within this modern landscape of software and computer code, our innate capacity for linguistic innovation has given us the ability to deceive the machines watching us somewhere between our eyes and our screens.