The EU’s plan to scan private communications for “illegal content” was widely believed to be dead after a wall of public outrage and resistance from several member states. Now it’s back – rebranded, repackaged, and routed through the back door.

The new draft of the EU’s Child Sexual Abuse Regulation (CSAR) is being sold as a “compromise.” In reality, it keeps the same core idea: make providers responsible for preventing “online abuse” in a way that can be used to justify scanning every private message, and tie digital communication more tightly to real-world identity. (Patrick Breyer)

Digital rights campaigner and former MEP Patrick Breyer calls it “a political deception of the highest order” – and he’s not exaggerating. (Patrick Breyer)

Let’s unpack what has actually changed, and what hasn’t.


1. Quick recap: what Chat Control was supposed to be

Formally, “Chat Control” refers to the EU’s proposed Regulation to Prevent and Combat Child Sexual Abuse, first presented in 2022. On paper, the aim is to tackle CSAM (child sexual abuse material) and “grooming” online. In practice, the regulation creates tools for:

  • scanning private communications, including messages, images, videos and URLs
  • scanning not only known CSAM, but “unknown” material and even patterns of conversation
  • extending scanning to services that use end-to-end encryption (E2EE) (Patrick Breyer)

Right now, a temporary “ePrivacy derogation” – nicknamed Chat Control 1.0 – allows platforms to voluntarily scan for CSAM without violating EU privacy rules. A handful of mostly US-based services (Gmail, Facebook/Instagram Messenger, Skype, Snapchat, some cloud email) already do this. (Patrick Breyer)

When the Commission tried to move from voluntary to mandatory scanning (Chat Control 2.0), it ran into a brick wall:

  • Digital rights groups warned of generalized mass surveillance and encryption backdoors. (European Digital Rights (EDRi))
  • Technical experts pointed out that “client-side scanning” – scanning content on the device before encryption – is functionally indistinguishable from state-mandated spyware. (TechRadar)
  • Several governments (Germany, the Netherlands, Poland, Austria, Finland, Estonia, Luxembourg, Czechia) said they would not support indiscriminate scanning. (euronews)

The Danish Council presidency then pulled the original text due to lack of support – and immediately started working on a “compromise” draft. (TechRadar)

That “compromise” is what Breyer and Reclaim The Net are now sounding the alarm over.


2. What’s new in the “back door” version?

The key political trick is simple:

  • The most obvious mandatory scanning language has been removed (the old “detection orders”).
  • In its place, providers are required to adopt “all appropriate risk mitigation measures” to prevent child sexual abuse on their platforms. (EU Tech Loop)

On the surface, this sounds softer and more flexible. In reality, it can be turned into a blank cheque:

  • If a service is labelled “high risk,” authorities can argue that the only “appropriate” way to mitigate that risk is to implement scanning – including on encrypted services. (Patrick Breyer)
  • If providers refuse, they can be accused of failing their legal duties and face penalties.

Breyer sums it up as: “we removed the front door of mandatory scanning, but installed a huge side gate through ‘risk mitigation’.” (Patrick Breyer)

So instead of a clear, honestly stated mandate, the EU gets something arguably worse: permanent legal pressure to surveil, with the details outsourced to bureaucrats, regulators, and “technical standards.”


3. The four main problems

3.1 De facto generalised scanning

Under the new text, providers must:

  • conduct risk assessments
  • implement “effective” mitigation measures
  • show regulators that their measures actually reduce risk (europarl.europa.eu)

If law enforcement and the Commission decide that scanning is the only “effective” measure, then every major provider is under pressure to scan:

  • Images and videos against CSAM hash databases.
  • Text using AI algorithms to flag “grooming,” sexual content, or ambiguous phrases.
  • Metadata (who talks to whom, when, from where) for pattern analysis. (Patrick Breyer)
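The image-and-video matching in the first bullet is typically hash-based. Here is a minimal sketch of the idea, with entirely hypothetical hash values and pixel data; real deployments use robust perceptual hashes such as PhotoDNA or PDQ rather than the toy average-hash shown here, precisely because exact cryptographic hashes miss any re-compressed or resized copy:

```python
import hashlib

# Hypothetical database of known-bad hashes (illustrative values only).
KNOWN_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def exact_match(image_bytes: bytes) -> bool:
    """Cryptographic matching: flags only byte-identical copies."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def average_hash(pixels: list[int]) -> int:
    """Toy perceptual hash: one bit per pixel, set if above the mean.
    Real systems (PhotoDNA, PDQ) are far more elaborate and robust."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distance = likely the same image."""
    return bin(a ^ b).count("1")

# A slightly re-compressed copy still lands near the original:
original     = [10, 200, 30, 180, 15, 190, 25, 170]
recompressed = [12, 198, 28, 182, 14, 191, 27, 168]
print(hamming(average_hash(original), average_hash(recompressed)))  # small
```

The design trade-off this illustrates is exactly the policy problem: exact hashes are precise but trivially evaded, while perceptual hashes catch variants at the cost of false matches on unrelated images.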

Breyer notes that in Germany, roughly half of all suspicious reports from existing voluntary scanning turn out to be irrelevant – false positives or harmless content. (Patrick Breyer)

Now extend that from images to every chat message: sarcasm, flirting, roleplay, dark humour, therapy conversations. The system will misfire, and every misfire means:

  • innocent people investigated
  • private material stored and shared with authorities
  • real abusers hidden in the noise


“Mass scanning with AI doesn’t understand context. It can’t distinguish flirting from grooming. What it does very well is to generate huge numbers of alerts,” as one legal analysis put it. (europarl.europa.eu)
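The alert-flood problem described above is a base-rate effect, and a back-of-envelope calculation makes it concrete. All the numbers below are hypothetical assumptions for illustration, not figures from the regulation or from Breyer:

```python
# Base-rate arithmetic (all numbers are hypothetical assumptions).
# Even a highly specific classifier drowns investigators in false
# alerts when the behaviour it looks for is rare.

messages_per_day    = 10_000_000_000  # assumed EU-wide message volume
abusive_rate        = 1e-6            # assumed fraction that is truly abusive
sensitivity         = 0.99            # assumed: 99% of abuse is caught
false_positive_rate = 0.001           # assumed: 99.9% specificity

true_alerts  = messages_per_day * abusive_rate * sensitivity
false_alerts = messages_per_day * (1 - abusive_rate) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:,.0f} false alerts per day")
print(f"only {precision:.1%} of alerts are genuine")
```

Under these assumptions, roughly ten million false alerts arrive every day and about 99.9% of all flags point at innocent conversations, which is the "real abusers hidden in the noise" problem in numerical form.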

3.2 An attack on encryption via client-side scanning

End-to-end encryption normally means: only you and the recipient can read a message. Even the service provider can’t see it.

Chat Control’s answer to this is client-side scanning:

  • your device scans messages and media locally
  • suspicious content is flagged before encryption
  • the flagged content (and potentially your private conversations) is sent to the platform and/or the authorities (Patrick Breyer)

Signal has already described this approach as akin to installing malware: a monitoring agent that lives on your device and reports back. Its developers have said openly that they would leave the EU rather than implement such scanning. (TechRadar)

The revised text does not explicitly mandate “client-side scanning” – but by tying providers to “effective mitigation measures” and calling out E2EE services as “high risk,” it sets the stage for exactly that. (EU Tech Loop)

You can’t simultaneously have:

  • robust, trustworthy end-to-end encryption and
  • a legal obligation to inspect every message.

One of those has to give way.
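The contradiction can be seen in a schematic of the client-side scanning pipeline. This is deliberately a toy: the "encryption" below is a stand-in (real E2EE messengers use protocols like Signal's double ratchet), and the trigger term and reporting function are hypothetical names invented for illustration:

```python
# Toy schematic of client-side scanning (hypothetical names throughout;
# the XOR "encryption" is a placeholder for a real E2EE protocol).

FLAGGED_TERMS = {"hypothetical-trigger"}  # stand-in for a detection model

def scan(plaintext: str) -> bool:
    """Runs ON YOUR DEVICE, before encryption ever happens."""
    return any(term in plaintext for term in FLAGGED_TERMS)

def encrypt(plaintext: str, key: bytes) -> bytes:
    """Placeholder for real end-to-end encryption."""
    data = plaintext.encode()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def report_to_authority(plaintext: str) -> None:
    """Stand-in for uploading flagged content to platform/authorities."""
    print("flagged:", plaintext)

def send_message(plaintext: str, key: bytes) -> bytes:
    if scan(plaintext):
        # The plaintext leaves the E2EE boundary BEFORE encryption.
        report_to_authority(plaintext)
    return encrypt(plaintext, key)  # ciphertext goes to the recipient
```

The point of the sketch: the encryption itself is never "broken," yet by the time it runs, a copy of any flagged plaintext has already left the device. "The encryption stays intact" is a technicality, not a protection.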

3.3 End of anonymity: age and ID checks for messaging

The new draft goes beyond scanning and moves into identity and access control:

  • Age verification becomes mandatory for accessing messaging and email platforms.
  • Platforms will be pushed toward using official IDs, digital IDs, or biometric checks to comply. (Patrick Breyer)

Even if the law never openly says “you must scan passports,” the combination of:

  • liability for letting minors use services “unsafely” and
  • strict age thresholds

naturally drives the market toward ID-based age verification. There are only so many ways to prove age at scale.

Breyer calls this “the de facto end of anonymous communication online,” and he’s not wrong. (Patrick Breyer)

The casualties are predictable:

  • whistleblowers
  • journalists’ sources
  • political dissidents and activists
  • people seeking help around abuse, addiction, sexuality, or mental health in environments where they can’t safely be open

The Commission talks about “safety”; what it’s setting up is a framework where almost every meaningful online interaction has to be tied back to a verified real-world identity.

3.4 Banning under-16s from messaging platforms

One of the more quietly shocking elements of the compromise is a provision that effectively blocks under-16s from using messaging and chat-enabled social platforms. (Patrick Breyer)

The logic appears to be:

  • minors are a high-risk group
  • therefore the safest option is simply to exclude them from large parts of online communication

Breyer calls this “digital isolation instead of education,” and it’s hard to argue with that. (Patrick Breyer)

Instead of:

  • teaching digital literacy
  • empowering kids and parents
  • building better reporting and support systems

the EU proposal leans toward “protecting” children by cutting them off from the same communication channels that their peers, schools, and communities use.


4. The political battlefield: who will block this?

The Council dynamics are messy and fluid, but a few points are clear:

  • The Danish EU presidency is actively pushing this revised draft and has already tabled a compromise text running to more than 200 pages. (EU Tech Loop)
  • Meetings of the Law Enforcement Working Party (LEWP) on 5 and 12 November show that several governments are warming to the new wording, because it no longer openly says “we will scan everything.” (TechRadar)
  • Germany, after months of internal conflict, publicly committed in October to vote against the original Chat Control proposal on constitutional grounds – but that was before Denmark’s “compromise” version. (TechRadar)

Breyer’s call to action explicitly names:

Germany, the Netherlands, Poland, Czechia, Luxembourg, Finland, Austria, Estonia

as states that previously resisted mass surveillance and now need to hold the line again. (Patrick Breyer)

Whether they do so depends on:

  • how much they fear being painted as “soft on child abuse”
  • how carefully they read the risk-mitigation and age-verification clauses
  • how much pressure they feel from domestic privacy advocates versus law enforcement lobbies

If a blocking minority in the Council collapses, the regulation moves on to the next stage – with Parliament then forced to negotiate from a far worse starting point.


5. What this would look like in practice

If this package passes in something like its current form, the everyday reality for EU users could look like this:

  1. Signing up for a messaging app
    • You’re asked to verify your age.
    • This may involve uploading ID, using a national digital ID, or completing a biometric check via a third-party provider.
  2. Using an encrypted messenger
    • Behind the scenes, the app runs client-side scanning on your device.
    • Images and videos are hashed and compared to known CSAM databases.
    • Text is fed to AI models looking for “grooming” patterns. (Patrick Breyer)
  3. Having a private, sensitive conversation
    • Your chats with a partner, child, therapist, or support group are continuously analysed by black-box algorithms.
    • A joke, roleplay, or out-of-context phrase can trigger a flag.
    • The flagged conversation is copied, stored, and possibly reviewed by human moderators or law enforcement.
  4. If you are under 16
    • Many mainstream messaging and social platforms are simply unavailable to you in the EU.
    • Workarounds and grey-market apps flourish; kids go where surveillance is weakest and moderation is worst.

Meanwhile, criminals with serious intent:

  • move to custom clients, underground networks, and niche tools
  • exploit the chaos and noise of false positives
  • benefit from weakened security (because any backdoor can be abused by them too)

The result is a familiar pattern: ordinary people get mass surveillance; predators and organised crime adapt.


6. What a sane alternative would look like

Breyer and many others aren’t arguing for doing nothing about child abuse online. They are arguing against turning the entire EU into a permanent scanning grid. (Patrick Breyer)

A sane, rights-respecting approach would focus on:

  • Targeted investigations, under strict judicial oversight, where there is concrete suspicion
  • Better police capacity to actually follow up on existing reports (large numbers of which already go uninvestigated)
  • Platform tools that users can opt into voluntarily (safe-search modes, content-filtering options, parental controls)
  • International cooperation to go after known abusers and criminal networks, not just generate endless piles of automated alerts

And above all:

  • Preserving strong, unbroken encryption as a security baseline, not treating it as a “problem” to be bypassed.
  • Preserving anonymous channels for those whose safety depends on them – whistleblowers, dissidents, vulnerable groups. (European Digital Rights (EDRi))

7. The bottom line

The new Chat Control draft is not a compromise. It is an exercise in political cosmetics:

  • remove the explicit “we will scan everything” language
  • insert a vague but binding “risk mitigation” duty
  • layer on identity checks and under-16 bans
  • call it “voluntary,” “flexible,” and “child-centred”

On any technical and legal reading, this still adds up to:

  • a powerful infrastructure for mass surveillance of private communications
  • a systemic attack on encryption via client-side scanning and “high-risk service” labelling
  • a de facto identity regime attached to everyday online life
  • a structural chilling effect on journalism, activism, and anyone who needs privacy to be safe

Breyer’s closing line in the Reclaim The Net piece is blunt but accurate: they are selling security and delivering a total surveillance machine. (reclaimthenet.org)

If EU governments sign off on this, they’re not just voting on a child protection measure. They’re voting on whether private conversation and anonymous speech are still allowed to exist in Europe at all.


📚 Sources

  • Patrick Breyer – overview of Chat Control and current derogation (“Chat Control 1.0”). (Patrick Breyer)
  • Patrick Breyer – “Chat Control 2.0 through the back door” warning on the new Danish compromise. (Patrick Breyer)
  • Reclaim The Net – “The Disguised Return of The EU’s Private Message Scanning Plot”. (reclaimthenet.org)
  • EDRi – “Chat Control: What is actually going on?” (analysis of CSA Regulation and Council process). (European Digital Rights (EDRi))
  • TechRadar & Euronews – reporting on Denmark’s new draft, LEWP meetings, and Germany’s position. (TechRadar)
  • Signal & security experts on client-side scanning and encryption risks. (TechRadar)
  • Legal analyses of the CSA proposal’s risk-mitigation and scanning provisions. (europarl.europa.eu)

By Pressi Editor
