Internet anonymity is in trouble: What does this mean for protest and creative expression?

Since its inception, the internet has been a platform for sharing thoughts, organizing movements and expressing parts of one's identity that may feel too risky to share offline. It started with blogs and forums, home videos and group chats. But over time, the anonymity that provides these protections has started to fray. The internet has become a highly (and often poorly) monitored space.
On July 25, the first Protection of Children Codes of Practice for user-to-user and search services, part of the UK’s Online Safety Act 2023, came into force. Their stated purpose is to limit children’s access to material that could be considered harmful: “(a) ‘primary priority content’ (such as pornographic content); (b) ‘priority content’ (such as bullying content); and (c) ‘non-designated content’ (such as content that promotes depression, hopelessness and despair).” The recommended measures user-to-user platforms are expected to follow include age checks (using credit card and photo ID verification), safer algorithms and content moderation.
This sounds great in theory, except for one detail: who gets to decide what kind of content is harmful to children? It’s not a reach to say that losing access to resources about depression or self-harm could be worse for children than having that access. Often, kids who are struggling with mental health issues won’t casually speak out about their struggles. Instead, they’ll take to internet forums to seek the help they need. Whether or not we believe that is the best course of action, closing channels for discussion seems questionable. It also doesn’t seem like a stretch to say that “bullying” content could come to include valid speech, such as messages using politically charged phrases that some deem insensitive, or posts holding politicians accountable for their poor decisions.
The rollout of this protection seems akin to Instagram’s and Pinterest’s recent efforts to eliminate harmful content from their platforms. By removing the ability to search for words like “girl” and “sad,” they attempted to protect users from child predators and self-harm rabbit holes; in reality, these were more likely endeavors to signal to users that tech companies are doing their civic duty to combat online harm. The decisions were performative, eliminating plenty of content that was not harmful at all and flooding inboxes with messages claiming a “pin” allegedly “goes against [their] Community Guidelines on self-injury and harmful behavior.”
According to a September 2025 Reuters report, many of Instagram’s teen safety tools “do not work well or, in some cases, don’t exist.” Of the 47 features tested by researchers from Northeastern University and child-safety advocacy groups, only eight were fully effective; features meant to block self-harm search terms “were easily circumvented,” and systems to redirect teens away from bingeing on content “never triggered.” Researchers have also noted that new laws may encourage over-removal of online content, because platforms will err on the side of caution to avoid penalties. Given this reality, the algorithms making these judgments do not yet seem strong enough to moderate effectively; instead, they wipe relatively benign material at scale, offering vague explanations and a short, often week-long, window to contest corporate resolutions. With these algorithms calling the shots on what a “safer” platform looks like, I worry about what kind of internet we will see in the future, whether you’re under the age of 18 or not.
Don’t get me wrong: restrictions on online content are absolutely necessary. The internet has been used to stalk, dox and harass people since its genesis. Anonymity has always given bad actors cover to cause harm, and something must be done to make sure everyone, not just minors, is protected from this kind of material. Yet requiring “papers,” such as government-issued IDs, to access the internet seems like a poor choice, one that could lead to security breaches, government misuse of identification data and coercion into silence on important social issues. Anonymity is needed to protect valid movements and freedom of speech in a world where those rights are threatened every day.
We live in a world where Instagram can wake up one day and decide to dox its users with a new map feature. We live in a world where speaking up about social issues leads to shadowbanning and the need to use emojis just to name these movements. We live in a world where AI slop is platformed (and flagged as such by even more AI), and the main way to combat it is by requiring users to present formal identification. In a world where one of the largest social media platforms can issue a gracious thank-you note to America’s president while he actively tries to acquire part of the company, there is no way social media can continue to be a platform that encourages unmonitored social mobilization and self-expression. Just take X (formerly Twitter), for example. If we continue to let poor decisions be made about how the internet should be run, we will end up with an online world ruled by faulty AI models, big money and high-ranking government officials with every reason to control the narrative around their self-serving decisions.