How the UK Online Safety Act is harming marginalized communities and setting a dangerous global precedent
By Repro Uncensored and the English Collective of Prostitutes (ECP)
The UK’s Online Safety Act is often described as a landmark effort to make the internet safer. Framed around protection, particularly of children, the law introduces sweeping duties for online platforms to identify and reduce the risk of harm on their services.
But as the Act moves from passage into enforcement, its real-world impact is becoming increasingly clear. The burdens of compliance are not shared equally. In practice, it is marginalized communities such as sex workers, queer people, and sexual and reproductive health advocates who absorb the costs of what regulators and platforms call “risk mitigation.”
When safety prioritizes platform protection over people
At the heart of the Online Safety Act is a duty of care model enforced by the UK regulator Ofcom. Platforms are required to assess potential harms, implement measures to reduce those risks, and demonstrate ongoing compliance. Failure to do so can result in significant financial penalties.
This structure creates a powerful incentive for platforms to act defensively. When faced with uncertainty, the safest operational choice is often to remove or restrict content that could be perceived as risky. In this context, “risk” does not mean illegal or harmful in a real-world sense. It often means controversial, stigmatized, or difficult for automated systems to interpret.
As platforms rush to demonstrate compliance, they rely heavily on automated moderation systems and blunt internal policies that treat certain forms of speech as inherently dangerous. Content related to sex, bodies, survival, and mutual aid is repeatedly flagged as high risk, even when it is legal, consensual, and essential.
Sex workers mislabeled as harm and stripped of safety
Sex workers are among the most directly impacted groups. Under Online Safety Act-style risk frameworks, sex work is routinely conflated with exploitation and abuse rather than understood as labor, and sex workers are rarely recognized as a community requiring safety and rights.
As a result, platforms increasingly remove harm reduction and peer-to-peer safety information, advice on screening clients and avoiding violence, advocacy and rights-based education, and income-generating content that allows workers to avoid dangerous offline conditions.
This is not a theoretical concern. The loss of online spaces where sex workers share safety information and earn income has repeatedly been shown to increase real-world harm. When platforms remove this infrastructure, workers are pushed into more precarious situations while being offered no meaningful alternative protections.
A familiar logic, newly digitized
The impact of the Online Safety Act on sex workers does not emerge in a vacuum. It reflects a long historical continuity in which repression is repeatedly justified as protection, with moral regulation reframed as safety. What is new is not the logic itself, but the infrastructure through which it is enforced.
For decades, sex workers have been positioned either as a risk or as perpetually at risk, but rarely as experts in managing risk. This framing erases the reality that sex workers have developed extensive peer-led safety practices that demonstrably reduce violence, exploitation, and harm. Screening methods, shared warning systems, community accountability, and harm reduction strategies have long functioned as effective forms of risk mitigation.
Under the Online Safety Act, this expertise is ignored. Sex workers’ visibility is framed as a liability, their labor as inherently harmful, and their survival strategies as evidence of regulatory failure rather than resilience. Safety is defined without the participation of those most affected, and enforcement proceeds accordingly.
Platforms responding to regulatory pressure do not build safer systems in collaboration with impacted communities. Instead, they shift the costs of safety onto those least able to absorb them through content removal, account suspensions, loss of income, and the dismantling of peer support networks.
This is not a neutral outcome of compliance. It is a continuation of a historical logic in which protection is achieved through exclusion, now digitized and automated. What appears as platform responsibility is, in practice, privatized austerity, where corporations meet regulatory demands by externalizing harm onto marginalized communities.
Seen in this light, the Online Safety Act is not simply a new policy framework. It is the latest iteration of a long-standing pattern where moral regulation is repackaged as care, and where safety is secured by rendering certain people invisible.
Queer communities’ visibility treated as risk
Queer content is similarly swept up in overbroad safety enforcement. Automated systems and age-based appropriateness filters frequently classify queer visibility as sexualized or unsuitable, particularly when it involves discussions of bodies, identity, or healthcare.
Under compliance pressure, platforms are incentivized to downrank or remove queer educational content, restrict access to health-related information, and apply age gates to identity-affirming resources. The result is a chilling effect where visibility itself is treated as a safety problem, especially for young people seeking information and community.
SRHR and abortion information treated as sensitive, not essential
Sexual and reproductive health and rights content, including abortion information, is especially vulnerable under safety-driven moderation. Automated systems regularly fail to distinguish between explicit content and medical or educational material.
As platforms attempt to minimize regulatory exposure, they often remove posts discussing abortion access and care, suppress harm reduction and self-managed abortion information, and restrict or suspend accounts sharing sexual health resources.
For many people, particularly those living in restrictive legal environments, online platforms are the primary and sometimes only way to access accurate health information. When that content disappears, the harm is immediate and material.
Automation turns bias into infrastructure
The Online Safety Act does not explicitly mandate automated moderation, but at platform scale, automation is the only viable way to demonstrate compliance. This makes discrimination systemic rather than incidental.
Automated systems are trained on biased datasets, perform poorly on context and nuance, and are triggered more frequently by stigmatized language and imagery. Because marginalized communities are already disproportionately reported and surveilled, their content is flagged more often, reviewed less carefully, and reinstated less frequently. Appeals mechanisms, where they exist, are opaque and inaccessible, meaning takedowns become permanent by default.
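To make the mechanism concrete, the sketch below is a purely hypothetical toy, not any platform’s actual system: a keyword-weighted filter of the kind that compliance pressure rewards. Every term, weight, and threshold is an assumption invented for illustration. Because stigmatized vocabulary is scored as risk regardless of context, legal harm-reduction and health content is removed while posts with no flagged vocabulary pass untouched.

```python
# Purely illustrative toy, not any real platform's moderation system.
# It mimics the blunt pattern described above: stigmatized terms are
# scored as "risk" with no regard for context, so legal safety and
# health information is removed by default.

# Hypothetical term weights a risk-averse platform might tune upward
# under compliance pressure (all values invented for illustration).
RISKY_TERMS = {
    "escort": 3,
    "client": 2,
    "abortion": 3,
    "pills": 2,
    "screening": 1,
}
REMOVAL_THRESHOLD = 4  # posts scoring at or above this are taken down


def risk_score(post: str) -> int:
    """Sum the weights of every flagged term appearing in the post."""
    words = set(post.lower().split())
    return sum(weight for term, weight in RISKY_TERMS.items() if term in words)


def moderate(post: str) -> str:
    """Apply the threshold with no context, no human review, no appeal."""
    return "REMOVED" if risk_score(post) >= REMOVAL_THRESHOLD else "ALLOWED"


if __name__ == "__main__":
    posts = [
        "Aftercare guide: what to expect after taking abortion pills",
        "Escort safety basics: screening a new client before meeting",
        "Community bake sale this Saturday, all welcome",
    ]
    for post in posts:
        print(f"{moderate(post):7} | {post}")
```

The specific weights are invented; the structure is the point. Once takedown is the cheapest way to demonstrate compliance, every adjustment pushes toward removing more, and the posts that disappear first are exactly the ones the communities described above rely on.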
Safety that erases is not safety
The cumulative effect of Online Safety Act-style enforcement is not a safer internet. It is a quieter one, where those most affected by harm are pushed out of visibility and stripped of tools for survival, organizing, and care.
By embedding risk aversion into platform governance, the Online Safety Act risks formalizing a system where safety is achieved through removal rather than protection, marginalized speech is treated as a liability, and platforms are shielded from accountability while communities absorb the damage.
A copycat model already taking shape
The significance of the UK’s approach extends far beyond its borders because the Online Safety Act does not just regulate content. It exports a regulatory architecture that other governments can easily replicate.
What is most likely to be copied is not the UK’s stated commitment to human rights safeguards, but the mechanics of enforcement. These include broad and flexible definitions of harm, mandatory platform risk assessments, heavy financial penalties for non-compliance, and reliance on automated moderation to demonstrate compliance at scale.
This model is already appealing to governments seeking fast and politically defensible ways to control online speech. By framing regulation around safety, particularly child safety, lawmakers can introduce sweeping content controls while avoiding direct debates about censorship, freedom of expression, or discrimination.
In practice, copycat laws tend to reproduce three core elements. First, the creation of a powerful online safety regulator with authority to issue binding codes, demand takedowns, and penalize platforms financially. Second, the use of platform risk assessments as a legal obligation, pushing companies to proactively identify and suppress content categories deemed risky. Third, the normalization of automated enforcement as an acceptable compliance tool, despite its well-documented discriminatory impacts.
When this model is adopted in countries with weaker judicial oversight, limited civil society participation, or hostile political environments, the harms intensify. The same framework that quietly suppresses sex worker, queer, and reproductive health content in the UK can be used more aggressively elsewhere to silence dissent, criminalize visibility, or erase entire communities online.
Once a safety-based censorship model is normalized in a country like the UK, it gains legitimacy. Other governments can point to it as precedent, even as they strip away safeguards and accountability. What begins as risk mitigation in one jurisdiction becomes outright repression in another.
Safety cannot be built on erasure
If online safety laws systematically silence the communities most affected by harm, they fail at their stated purpose. Safety cannot be achieved by removing people from view, stripping them of information, or dismantling the digital infrastructure they rely on to survive and organize.
Unless the harms already being produced by the Online Safety Act are confronted directly, the law risks becoming not a solution to online harm, but a blueprint for global censorship, enforced by algorithms and justified in the language of protection.
https://www.reprouncensored.org/research-overview/research/uk-online-safety-act-harm
