Framing the Manosphere – Why Ofcom’s Report Raises More Questions Than It Answers


As the UK’s communications regulator, Ofcom occupies a uniquely powerful position—tasked with overseeing media standards while increasingly expanding its remit into online safety and digital regulation. With the recent publication of “Experiences of Engaging with the Manosphere”, Ofcom seeks to inform public discourse about a loosely defined collection of online subcultures—incels, red pill groups, men’s rights activists, and others—known collectively as the “manosphere.”

Yet despite the report’s intention to offer nuance and insight, it raises more concerns than it resolves. As advocates for open, decentralised, and accountable public discourse, we at Decentered Media believe that any attempt to shape regulatory frameworks around contentious online spaces must be held to the highest standard of independence, evidence, and public transparency. Unfortunately, this report falls short on each of those fronts.

A Conflict of Interest at the Heart of the Research

Perhaps the most immediate concern is that Ofcom has chosen to commission and publish this research in-house. While the research was carried out in partnership with an external agency, the overall framing, methodology, and presentation of findings remain under Ofcom’s control. When a regulatory body with legal powers to impose codes of conduct, fines, and platform compliance guidelines publishes a report on a controversial topic like the manosphere, the potential for a conflict of interest is clear.

This is not a purely academic exercise: the research feeds directly into Ofcom’s regulatory decision-making. The Online Safety Act 2023 gives Ofcom sweeping powers to define, mitigate, and sanction “harmful” online content. So we must ask: what guarantees do we have that this report won’t be used to pre-emptively justify future restrictions on speech, rather than to inform open, evidence-led consultation?

A genuinely independent study—commissioned through a university or third-sector research body with no regulatory remit—would have provided more credibility and removed the suspicion that this report is laying the groundwork for expanding Ofcom’s reach into editorial spaces under the guise of “safety.”

A Narrow Sample, a Wide Set of Conclusions

The research sample itself—39 people, 38 of whom were men—is too limited to support generalisable conclusions. Participants were recruited online, were largely self-selected, and were understandably wary of talking to an organisation they associated with the “mainstream.” The report itself concedes that more extreme voices likely excluded themselves.

So what exactly does the report represent? Not the full spectrum of manosphere engagement, but a narrow slice of self-aware, voluntarily involved individuals. From such a small group, with no demographic control or comparison to wider user behaviours, how can we reasonably make inferences about harm, risk, or trends?

In any other policy field, such methodological constraints would be grounds for serious caution. Why should communications regulation be any different?

The Harm Principle—Or Lack Thereof

One of the more worrying aspects of the report is its framing of “harm.” While references are made to misogyny and dangerous content, there is little evidence that any of the participants suffered or inflicted actual harm. Much of the concern arises from perceived risk—content that is “potentially” harmful or “might” influence behaviour. But without a standard of demonstrable harm, how are we to adjudicate what should or shouldn’t be allowed?

The distinction between personal offence and social injury is critical. Who decides what crosses the line? Ofcom? Platform moderators? Advocacy groups? There is no transparent rubric offered here—no clear, publicly agreed framework to balance freedom of speech against inferred or symbolic harm.

This vagueness is not trivial. It goes to the heart of our democratic norms. If harm is to be the standard for restriction, then it must be clearly defined, demonstrably evidenced, and proportionately assessed—not assumed on the basis of guilt by association or algorithmic proximity.

What Kind of Speech Are We Protecting?

Finally, we must confront the unspoken assumption at the heart of this report: that certain forms of speech—especially when associated with disfavoured identities or controversial opinions—are inherently suspicious. The manosphere, for all its flaws, contains voices engaged in discussions about mental health, masculinity, inequality in the family courts, and social alienation. These are not illegitimate topics. Labelling them “risky” simply because they circulate outside mainstream media institutions is a form of soft censorship that undermines pluralism.

To be clear: this is not a defence of online harassment, bigotry, or abuse. But unless we are willing to defend unpopular speech—particularly when it addresses difficult or uncomfortable issues—we risk turning regulation into ideology. And once we do that, the damage to public trust, civic dialogue, and democratic legitimacy may be far more dangerous than the online content we seek to control.

A Call for Accountability and Independence

At Decentered Media, we urge Ofcom—and other regulators—to resist the temptation to pre-define public discourse. If online safety policy is to be sustainable, it must be grounded in independence, transparency, and meaningful dialogue with diverse publics. That begins with commissioning research at arm’s length, applying rigorous standards of evidence, and creating clear frameworks for understanding and adjudicating harm.

We don’t need institutions that second-guess the public’s capacity to think. We need institutions that help the public deliberate. Until then, this report should be read not as the basis for action, but as a starting point for much-needed scrutiny.

For further reflection or to share your views on this topic, please get in touch with us at Decentered Media.