20 million ChatGPT conversations in court – and digital privacy is ripped open
Take a moment to think about everything you have typed into an AI chat "just to try it out":
desperate questions about your health, arguments in your relationships, your darkest fears, half-baked ideas, money worries, silly fantasies.
Now imagine that someone else is officially granted the right to rummage through all of it – not under your name, but as "anonymised data" – yet it is still your stream of thought, line by line.
This is exactly the direction we are going in.
In the US, Magistrate Judge Ona T. Wang has ordered OpenAI to hand over 20 million anonymised ChatGPT conversations to the New York Times and other news publishers as part of a copyright case. (Reuters)
NaturalNews/Brighteon is already calling it "a preview of AI censorship and control". (NaturalNews.com) Exaggeration? Yes. A harmless technicality? No, definitely not.
This case is not just about copyright. It sets a precedent for who owns your thoughts once they have been fed into the black box called an "AI service".
What did the court actually order?
Let’s start with the facts.
- The New York Times and other media houses have sued OpenAI and Microsoft, claiming that their models were illegally trained with articles from the news outlets. (AP News)
- The court had previously ordered OpenAI to preserve all ChatGPT chat logs – including chats "deleted" by users and chats that would normally be purged automatically. (Justia Dockets & Filings)
- In November 2025, Judge Wang ordered the next step: OpenAI must hand over an anonymised sample of 20 million ChatGPT conversations to the publishers' lawyers and experts so they can search for examples of alleged copyright infringement. (Reuters)
OpenAI responded with unusual bluntness on its own blog: the New York Times is demanding 20 million of your private conversations, in violation of both long-standing privacy norms and the company's own promises to its users. (OpenAI)
The court, however, brushed OpenAI's warnings aside and held that anonymisation and confidentiality agreements were protection enough.
On paper, this looks like a technical procedural matter in a copyright case.
In reality, it draws the line on what an AI chat is:
- merely "data" that a court can access in the ordinary way, or
- an extension of your inner life, something that belongs closer to medical or legal confidentiality.
For now, the court is leaning towards the former.
"Anonymised data" – great word, lousy shield
The defence mantra is that all the data will be "de-identified": names, email addresses and user IDs stripped out. Fine – but what is left?
- Whole life stories.
- Details of work projects that are nowhere in the public domain.
- A description of rare diseases that occur in a small community.
- Texts copied from your own drafts, contracts or internal documents.
Several e-discovery and data-protection lawyers have already pointed out that anonymisation does not automatically make data "safe": the more detailed the data, the easier it is to re-identify the individual, at least probabilistically. (joneswalker.com)
When 20 million conversations are opened up to analysts, the situation looks like this:
- every thread is a deep peek inside someone’s head
- analysis tools can search for patterns, keywords, recognisable combinations
- guaranteed to include phrases that can only be found in one Slack team or family
The law says: "anonymised".
Practice says: "this is still your inner landscape, in text form".
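To make the risk concrete, here is a minimal sketch of how such re-identification works. Everything in it – the users, the phrases, the company – is invented for illustration; a real analyst would simply have far richer side information.

```python
# Minimal sketch of re-identification risk in "anonymised" chat logs.
# All names, users and phrases below are invented for illustration.

# "Anonymised" logs: account IDs replaced with pseudonyms, text left intact.
anonymised_chats = {
    "user_0421": "Draft for our Q3 memo: Project Kestrel replaces the old "
                 "billing stack before the Helsinki audit.",
    "user_0872": "Can you suggest a sourdough starter schedule?",
}

# Side knowledge an analyst might already hold: phrases that circulate in
# exactly one team, family or community (quasi-identifiers).
known_phrases = {
    "Project Kestrel": "ACME Corp platform team (~10 people)",
    "Helsinki audit": "ACME Corp finance department",
}

# The "attack" is plain keyword matching; every hit shrinks the set of
# people a pseudonymous user could plausibly be.
for user_id, text in anonymised_chats.items():
    hits = [group for phrase, group in known_phrases.items() if phrase in text]
    if hits:
        print(f"{user_id} is probably someone in: {' AND '.join(hits)}")
```

Two or three such matches are usually enough: the intersection of a ten-person team and one department rarely contains more than a single person, and no name is ever needed.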
Delete no longer means delete
There’s another problem in the background that many people haven’t heard about: the storage rules for chat logs.
In previous decisions, the same court has ordered OpenAI to retain all ChatGPT logs – including those deleted by users, and also those that would normally have been deleted after 30 days. (Justia Dockets & Filings)
In other words:
- When a user presses ”Delete chat”, they think they are making a conscious, irrevocable choice.
- The court order effectively turns that button into cosmetics – a UI element that doesn’t tell the whole truth.
- The data becomes evidence regardless of what you wanted.
This is no longer just a problem for OpenAI.
It’s a signal to everyone:
If you write it for an AI running in the cloud,
someone can get their hands on it – years later – because the law says so.
The New York Times is not only defending its copyright
It is true that the media houses have a legitimate point: if their decades of content archives were ingested into models without permission and without compensation, that belongs in court. Copyright law is not just a decoration. (AP News)
But let’s not be naive about what’s at stake here:
- ChatGPT and other LLMs provide answers in a way that bypasses the gatekeeping of traditional media.
- If the AI provides a direct answer, the user does not have to click behind the paywall.
- Advertising money and subscription revenues suffer.
A NaturalNews story puts it bluntly: traditional media houses want to ensure that future AI sees the world only through "approved" sources, with everything else labelled "misinformation". (NaturalNews.com)
The reality is more complex, but the direction is the same:
- Media companies want licensing deals and control over how their content appears in AI responses.
- Big Tech wants to retain maximum freedom to collect and use data, including your conversations.
- Courts are setting the first precedents for what is ”reasonable” in the use of data – and doing so at the expense of your privacy.
It is no longer just a question of copyright. It is a question of who owns:
- training material (content)
- user logs (your conversations)
- and ultimately the "mind" of the AI, i.e. its weights and preferences.
A precedent that governments around the world are dreaming of
This is a case that will now be followed closely, and not just by the media.
When the court finds that:
- the private AI conversations of millions of people
- can be handed over en masse for legal proceedings
- as long as they are "anonymised" and protected by confidentiality rules,
it sends a signal to everyone else:
- to authorities who want "anti-terrorism and hate-speech data"
- to intelligence services that want to analyse "narratives" in real time
- to the architects of EU-style chat control and digital identity systems, who dream of full surveillance integration
If such a decision is normalised in the US, there is nothing to stop it being invoked later:
”We also need access to billions of AI conversations,
because of national security / misinformation / health threats.”
And every time someone asks, "But what about user privacy?",
the answer will be: "Don't worry, everything is anonymised."
"Rogue AI" and the decentralised response – threat or opportunity?
In his article, Lance D. Johnson also paints a counter-narrative: as the centralised giant models are strangled by regulation and built-in censorship, decentralised "rogue" AIs take over:
- openly distributed models running on people's own machines
- training data drawn from a mix of alternative sources, cold data and "forbidden" perspectives
- no way to censor them, because there is no central server whose taps can be turned off (NaturalNews.com)
There is both hope and risk here:
- Hope: a monopolised, gatekeeper-dominated "official truth AI" will not remain the only option.
- Risk: Decentralised AI does not magically make content real. It can also fuel scam sites, medical pseudoscience and political psychosis.
But one thing is certain:
the more centralised systems become tools of control and censorship, the faster people will vote with their feet for something else – even if it’s the messy Wild West.
What should we learn from this – in practical terms?
This is not legal advice. It is a survival guide for anyone who uses AI in their everyday life:
- Don't give an AI anything you couldn't bear to see leaked from an email or a chat.
  When you write, assume the worst case: that someone else eventually reads it.
- Find out what the service actually keeps – and for how long.
  "Delete" and "not used for model training" may mean different things than you think, especially once courts step in. (OpenAI)
- Use zero-retention plans, local models or self-hosted setups if your work is at all sensitive (see the sketch after this list).
  Doctor, therapist, lawyer, whistleblower, security consultant – you cannot afford to rely blindly on promises of cloud privacy. (news.bgov.com)
- Demand clear boundaries at the political level:
  - AI conversations should be treated more with the logic of attorney-client privilege than that of Facebook comments.
  - Mass handovers should be the exception, not the default.
- Don't cede moral authority to courts, media houses or AI companies.
  Just because something is "legal" does not make it acceptable – or permanent. The law will only change if citizens refuse to swallow everything the first precedents spew out.
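To make the self-hosting bullet concrete, here is a minimal sketch of querying a model that runs entirely on your own machine, so the prompt never touches a cloud server. It assumes you have Ollama (ollama.com) installed and a model such as llama3 pulled locally; the endpoint and JSON fields follow Ollama's documented /api/generate interface, but any local runner with an HTTP API works the same way.

```python
# Minimal sketch: query a locally hosted model so the prompt never leaves
# your machine. Assumes the Ollama runtime is running locally and a model
# has been pulled beforehand, e.g.:  ollama pull llama3
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's local HTTP API

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return the complete answer."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete JSON object instead of a stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # The sensitive question stays on this machine: no cloud log exists,
    # so there is nothing for a court order to compel a provider to produce.
    print(ask_local_model("Explain attorney-client privilege in one sentence."))
```

The point is architectural, not about this particular tool: when both the model and the log live on your own disk, there is no third party for a subpoena to reach.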
Finally: whose side is AI on?
The ironic reality is this:
- OpenAI now appeals to user privacy to shield itself from a costly copyright lawsuit brought by the New York Times. (OpenAI)
- The New York Times invokes copyright to protect itself from competition from artificial intelligence.
- The court balances the two – at your expense.
Your role is pushed into the background: you are just a "consumer", a "user", a "data source".
This is a situation worth rebelling against. Not by going completely offline, but by demanding a simple principle:
My thoughts are not anyone's raw material without my consent –
not even anonymised, not even for a judge's convenience.
This fight over 20 million ChatGPT conversations is just the beginning.
It shows the direction the system is inherently heading in: towards maximum collection, storage and analysis.
Whether or not we reverse course is ultimately up to us – not AI, not judges, not media corporations, but the people who refuse to accept that the inside of their skulls is just one data source among others.
📚 Sources
- NaturalNews / Newstarget: Court orders OpenAI to surrender 20 million private ChatGPT conversations, setting stage for AI censorship and control
  https://www.naturalnews.com/2025-11-13-court-orders-openai-surrender-private-chatgpt-conversations.html
- Reuters: OpenAI fights order to turn over millions of ChatGPT conversations
  https://www.reuters.com/business/media-telecom/openai-fights-order-turn-over-millions-chatgpt-conversations-2025-11-12/
- Business Insider: OpenAI lost a court battle against the New York Times – now it's taking its case to the public
  https://www.businessinsider.com/openai-new-york-times-copyright-infringement-lawsuit-chatgpt-logs-private-2025-11
- OpenAI Blog: Fighting the New York Times' invasion of user privacy
  https://openai.com/index/fighting-nyt-user-privacy-invasion/
- The Verge & TechRadar: ChatGPT chat retention rules and saving "deleted" chats
  https://www.theverge.com/news/681280/openai-storing-deleted-chats-nyt-lawsuit
  https://www.techradar.com/computing/artificial-intelligence/sam-altman-says-ai-chats-should-be-as-private-as-talking-to-a-lawyer-or-a-doctor-but-openai-could-soon-be-forced-to-keep-your-chatgpt-conversations-forever
- AP News: Judge allows newspaper copyright lawsuit against OpenAI to proceed
  https://apnews.com/article/cc19ef2cf3f23343738e892b60d6d7a6
