Spain's ambitious regulatory agenda, announced in May 2026, aims to crack down on harmful social media practices and AI misuse. Yet embedded within this well-intentioned policy lies a troubling contradiction that deserves serious scrutiny. Spain's digital transformation minister, Oscar Lopez, has declared that anonymity should not shield people from liability for online crimes, a position that, while superficially reasonable, threatens to fundamentally undermine digital privacy rights for ordinary users.
The tension is clear: on one hand, Spain wants to protect minors from cyberbullying, sexual harassment, and AI-generated deepfakes. These are genuine harms that deserve regulatory attention. On the other hand, the government's insistence that "what isn't legal in the real world cannot be legal in the virtual world" opens the door to pervasive surveillance and the erosion of anonymity protections that have historically been crucial safeguards for vulnerable populations, activists, and marginalized communities.
Lopez's statement that authorities should be able to identify pseudonymous users who commit crimes sounds reasonable in isolation. The problem emerges when you consider how such powers get used in practice. History and contemporary examples from around the globe demonstrate that surveillance tools ostensibly designed to catch criminals consistently expand beyond their original scope.
Consider a few scenarios that become possible under Spain's proposed framework:

- An activist criticizing government policy under a pseudonym is unmasked after an opponent files a dubious criminal complaint.
- A whistleblower exposing corporate wrongdoing is identified through the records platforms are compelled to keep.
- A member of a marginalized community participating in an online support group loses the anonymity that keeps them safe.
The issue isn't that these hypotheticals are far-fetched. It's that they're entirely predictable consequences of weakening anonymity protections. Once deanonymization becomes routine for "crimes," the definition of what constitutes a crime becomes critically important. And history shows that those definitions often expand in ways that harm the powerless far more than the powerful.
Spain's regulatory package focuses heavily on holding tech executives personally liable for hate speech and banning teenage social media use. These are understandable responses to real problems: algorithmic amplification of harmful content, platforms' negligence toward child safety, and the documented mental health impacts of social media addiction.
But mandatory deanonymization is a blunt instrument that doesn't precisely target these harms. It's like imposing blanket surveillance on an entire city to catch one criminal. The collateral damage, from the chilling effect on free speech to the vulnerability of marginalized users and the erosion of privacy, far exceeds what's necessary to address the specific problems of algorithmic harm and child exploitation.
When a minister claims that anonymity "should not shield people from liability," what often gets lost is how privacy advocates and tech critics actually use that protection. Anonymity online serves multiple legitimate purposes:

- Whistleblowers exposing wrongdoing without risking retaliation.
- Activists organizing under governments that punish criticism.
- Members of marginalized communities seeking support without being outed.
- Ordinary people discussing health, finances, or politics without building a permanent public record.
None of these uses involve "committing crimes." Yet a broad deanonymization mandate would inevitably affect all of them. The question Spain hasn't adequately addressed is: Why should ordinary people lose privacy protections because some people misuse anonymity? We don't eliminate door locks because criminals use them; we prosecute the criminals while protecting legitimate uses of locks.
Spain isn't acting in isolation. The European Union's proposed Digital Fairness Act, mentioned by European Commission President Ursula von der Leyen, reflects a broader regulatory movement aimed at constraining tech companies' power. Big Tech's influence is indeed immense, and platforms have demonstrably failed to self-regulate responsibly.
However, Spain risks conflating "holding Big Tech accountable" with "expanding state surveillance powers." These are different problems requiring different solutions. You can hold platforms liable for algorithmic harms, require algorithm transparency, and enforce strict child safety standards without dismantling anonymity protections.
The fact that Spain frames this as necessary to combat a "mental health pandemic" among minors is telling. It's an emotionally compelling justification that makes it harder to question whether the proposed remedy actually addresses the root cause. The real culprits behind youth mental health impacts are algorithmic engagement maximization, dopamine-hit design patterns, and insufficient moderation. None of these require eliminating anonymity to fix.
Spain could achieve its stated goals (protecting minors, combating hate speech, reining in harmful AI) without dismantling anonymity. Here's what that would look like:

- Holding platforms liable for algorithmic amplification of harmful content, rather than demanding users' identities.
- Requiring algorithm transparency so regulators can scrutinize engagement-maximizing design.
- Enforcing strict child safety standards, with privacy-preserving age assurance instead of blanket identity verification.
- Investing in content moderation and restricting dopamine-hit design patterns aimed at minors.
These approaches address Spain's genuine concerns while preserving privacy rights that benefit everyone, including future whistleblowers, activists, and ordinary people who simply want to participate online without surveillance. But then, of course, they would offer no cover for quietly expanding state surveillance powers.