OpenAI Is Now Keeping Your Deleted ChatGPT Chats Forever — Here's What That Means
A June 2025 court ruling means OpenAI must retain deleted ChatGPT conversations indefinitely. Here's what actually changed and what you can do about it.
Most people assume that deleting something from a cloud service means it's gone. With ChatGPT, that assumption has always come with caveats — but since June 2025, there's a specific and significant reason it no longer holds at all.
What Changed in June 2025
In late 2023, The New York Times filed a copyright infringement lawsuit against OpenAI and Microsoft. The case centres on whether OpenAI used NYT articles to train its language models without authorisation. It's one of the most high-profile pieces of litigation in the AI industry, and it's still ongoing.
As part of that lawsuit, in June 2025 a court issued a legal hold — a formal obligation requiring OpenAI to preserve data that may be relevant to the proceedings. That includes ChatGPT conversations, even ones that users have deleted.
Before this, OpenAI's stated privacy policy gave a reasonably clear picture: when you deleted a conversation from your account, the data would be removed from their servers within 30 days. That 30-day window existed partly to give time for abuse detection and safety processes, but the expectation was that the data would be purged on a predictable schedule.
Under the legal hold, that process has stopped. OpenAI must now retain deleted conversations for the duration of the litigation — a timeline that has no fixed end date and could extend for years. This is not a policy decision OpenAI made voluntarily. It is a legal obligation, and compliance is mandatory. Courts take litigation holds seriously; failure to comply can result in severe sanctions.
OpenAI's privacy policy does contain language acknowledging that legal requirements may override normal deletion timelines. This situation falls squarely within that clause. The change is real, legally grounded, and currently in effect.
What This Means in Practice
When you click "Delete" on a conversation in ChatGPT today, it disappears from your account view immediately. You can no longer access it, search for it, or see it in your history. From your perspective as a user, the deletion looks and works exactly as it always has.
What's different is what happens on OpenAI's infrastructure. Previously, the backend deletion process would have proceeded within 30 days. Now, because of the legal hold, that backend deletion is on hold. The data is being retained — in what form, with what access controls, and where exactly within OpenAI's systems isn't publicly specified.
The retention applies to all deleted conversations, not just those with any connection to copyright or training data questions. Legal holds typically apply broadly to data categories rather than selectively filtering by content. So if you shared a business plan, a personal health question, or a confidential work document in a ChatGPT conversation and then deleted it, that conversation is covered by the same retention requirement.
The duration is tied to the NYT lawsuit. When the case resolves — whether by settlement, judgment, or some other conclusion — the hold would presumably lift. But there's no way to predict when that will be.
How This Compares to Normal ChatGPT Data Practices
To understand the significance of the legal hold, it helps to understand how ChatGPT's data retention worked before it. The picture was already more complex than many users realised.
With conversation history enabled (the default), your conversations are stored indefinitely on OpenAI's servers until you delete them. Disabling conversation history doesn't mean your data isn't retained at all — OpenAI keeps it for up to 30 days for safety monitoring. Enterprise and Team customers have additional controls, including options for zero-day retention and data processing agreements.
There's also the question of training data. OpenAI's policy allows them to use conversations to improve their models, though users can opt out via their account settings.
The legal hold adds a distinct new layer on top of all of this: intentional deletions — the clearest signal a user can send that they want their data removed — are now being preserved. It's worth noting that this isn't unprecedented in how cloud services operate. Any company facing litigation may face similar obligations. But for users who assumed deletion was a reliable way to remove sensitive conversations from OpenAI's servers, this represents a meaningful change in practice.
Why This Matters for Privacy
Let's be direct about the risk level. For most people, the practical impact of this legal hold is limited. If you've been using ChatGPT for writing assistance, brainstorming, coding help, or general questions, the retention of those deleted conversations doesn't create an obvious threat. The data isn't being read by OpenAI employees, acted on in any way you'd notice, or exposed to third parties.
However, for anyone who shared genuinely sensitive information — details about an ongoing business negotiation, personal health or financial information, confidential work materials, or anything they'd be uncomfortable seeing surfaced — the situation is different. The assumption that "I deleted it, so it's gone" was always conditional, but now that condition is clearly suspended for an indeterminate period.
There's a broader point worth considering too. The NYT lawsuit is one specific trigger, but it's not the only scenario that could produce a similar outcome. Any future litigation involving OpenAI — whether related to intellectual property, privacy, employment, or something else entirely — could generate its own legal holds. The current situation illustrates a structural reality: deletion controls on cloud AI platforms are always subject to legal obligations, technical limitations, and policy decisions that users don't control and may not be notified about promptly.
The Only Control You Actually Have: Export First
If you can't guarantee that deletion will result in permanent removal, the practical response is to secure a copy of your conversations before you ever need to worry about deletion. That means using ChatGPT's built-in export feature.
The process is straightforward:
- Open ChatGPT and click your profile name in the bottom-left corner
- Go to Settings → Data Controls
- Click Export Data and confirm the request
- OpenAI sends a download link to your registered email address — usually within a few hours, sometimes faster
- The download is a ZIP file containing conversations.json (the full text of all your conversations) plus an HTML file that lets you browse them in a browser
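Once the ZIP arrives, you can verify what's inside it without any special tooling. The sketch below builds a tiny stand-in archive and then reads it the same way you'd read a real download — note that the field names (title, create_time) reflect recent exports and are an assumption here, as OpenAI doesn't formally document the format and it may change:

```python
import io
import json
import zipfile

# Hypothetical minimal export: real exports put a JSON array of
# conversation objects in conversations.json, but exact fields may vary.
sample = [
    {"title": "Trip planning", "create_time": 1718000000.0},
    {"title": "Regex help", "create_time": 1719000000.0},
]

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("conversations.json", json.dumps(sample))

# For a real download, replace `buf` with the path to your export ZIP.
with zipfile.ZipFile(buf) as zf:
    conversations = json.loads(zf.read("conversations.json"))

# List every conversation title in the archive.
for conv in conversations:
    print(conv["title"])
```

Running this against a genuine export is a quick sanity check that your backup is complete before you rely on it.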
Once you've downloaded that ZIP, you have a permanent, platform-independent copy of your conversation history. It exists on your own storage, it's not subject to any cloud platform's retention policies or legal holds, and it can't be taken away by an account issue, a policy change, or anything happening in a courtroom.
Making this export a regular habit — every few months, or whenever you've had a stretch of intensive ChatGPT use — is the most reliable way to maintain control over your own conversation data.
How to Make Your Export Actually Useful
A raw ChatGPT export is complete, but it's not convenient. The conversations.json file from a heavy ChatGPT user can contain thousands of conversations and be tens or hundreds of megabytes in size. Searching through it manually isn't realistic. The HTML viewer included in the export is readable but not searchable in any meaningful way.
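To see why manual searching is painful, here is a minimal sketch of what a do-it-yourself keyword search over conversations.json involves. It assumes the message-nesting structure seen in recent exports (each message's text under mapping → node → message → content → parts) — that structure is an assumption and may differ between export versions:

```python
import json

# Synthetic stand-in for a loaded conversations.json; real exports nest
# message text under mapping[node]["message"]["content"]["parts"]
# (an assumed structure — verify against your own export).
conversations = [
    {"title": "Budget notes",
     "mapping": {"n1": {"message": {"content": {"parts": ["Q3 budget draft"]}}}}},
    {"title": "Recipe ideas",
     "mapping": {"n1": {"message": {"content": {"parts": ["pasta with lemon"]}}}}},
]

def search(conversations, term):
    """Return titles of conversations whose text contains term (case-insensitive)."""
    term = term.lower()
    hits = []
    for conv in conversations:
        for node in conv.get("mapping", {}).values():
            msg = node.get("message") or {}
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str))
            if term in text.lower():
                hits.append(conv["title"])
                break  # one hit per conversation is enough
    return hits

print(search(conversations, "budget"))  # → ['Budget notes']
```

Even this toy version has to defensively handle missing fields and non-text parts; scaling it to thousands of conversations, with ranking and a usable interface, is the work a dedicated tool saves you.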
This is exactly the problem AI Chat Importer was built to solve. You import your ChatGPT ZIP export directly — no format conversion, no intermediate steps — and your entire conversation history becomes immediately searchable. The import runs entirely locally in your browser or on your desktop; nothing is sent to any server.
Beyond ChatGPT, the same tool handles exports from Claude, DeepSeek, and Grok, which means you can build a single searchable archive covering all the AI platforms you use, in one place.
The Desktop App adds further capabilities that matter if you're managing a large archive: unlimited storage without browser constraints, Smart Import with deduplication so you can import new exports without creating duplicates, the Folder Manager for organising conversations by project or topic, and Auto-Sort with AI for handling a backlog automatically. It's a one-time £29 purchase with no subscription.
If you want to try the import process before committing to anything, the free web app handles the same ZIP import with no account required — it's the quickest way to see how searchable your export data becomes.
What About ChatGPT's Built-In History Search?
ChatGPT Plus added AI-powered conversation search in early 2026, which raised a fair question: why not just use that instead of exporting?
The limitations are significant. ChatGPT's search only works on conversations that are currently in your account — if you've deleted something, it doesn't appear in search results even under the legal hold, because the hold preserves the data on OpenAI's backend, not in your account view. You can't search data you can no longer see.
There's also the subscription requirement. ChatGPT's search is a Plus feature, meaning it requires a $20/month subscription. That's a reasonable price for the overall ChatGPT Plus package if you're already paying for it, but it's a significant ongoing cost if your primary need is searching your own conversation history.
Every search query you run through ChatGPT's history search also runs through OpenAI's servers — which means your search behaviour is itself data that OpenAI processes. A local archive gives you search without that exposure.
Finally, ChatGPT's search is siloed. Your Claude conversations, DeepSeek conversations, and Grok conversations aren't included. If you use more than one AI platform — which most people do — a local archive is the only way to search across all of them at once.
Should You Be Worried?
The legal hold is a real development, and it's worth understanding clearly. But "worth understanding" and "cause for alarm" are different things, and it's important not to conflate them.
For everyday ChatGPT use — writing, coding, research, general questions — the retention of deleted conversations doesn't represent an immediate or obvious threat to most people. The data isn't being misused; it's being preserved for potential legal discovery, which is a standard part of litigation involving large organisations.
What it does illustrate, clearly and concretely, is that the assumption "I can always delete it later" isn't a reliable privacy strategy on cloud AI platforms. Deletion is conditional. It can be overridden by legal obligations, and those obligations can arise from events completely unrelated to your use of the platform.
The sensible response is neither to panic nor to stop using ChatGPT. It's to develop a habit: export your data regularly, maintain a local archive you actually control, and treat cloud platforms as convenient tools rather than permanent storage you can rely on. That's good practice regardless of any specific legal development — and it's more useful than worrying about data you can't do anything about now.
FAQ
Does the legal hold mean OpenAI is reading my deleted conversations?
No. A legal hold means data must be preserved and made available if required during litigation — it doesn't mean anyone is actively reviewing it. In practice, the retained data is stored and access is restricted to what's legally required. The conversations aren't being read, searched, or acted on by OpenAI employees as part of normal operations.
Will my deleted ChatGPT conversations ever be permanently removed?
That depends on how the NYT lawsuit resolves. When the litigation concludes, the legal hold would lift, and OpenAI would presumably be able to return to its standard deletion practices. However, there's no fixed timeline for the lawsuit — it could settle in months or continue for several more years. There's currently no way to know when normal deletion will resume.
Does this affect ChatGPT Enterprise or Team accounts?
Enterprise and Team customers have separate data processing agreements with OpenAI that typically include stronger data handling commitments, including options for zero-day retention. The legal hold is a court-ordered obligation that would apply regardless of account type, but Enterprise customers should review their specific agreements and consult their legal teams about how litigation holds interact with their contractual terms.
How often should I export my ChatGPT data?
A quarterly export is a reasonable baseline for most users — frequent enough to capture anything significant, not so frequent that it becomes a burden. If you go through periods of intensive ChatGPT use (a major project, a long research phase, significant work conversations), it's worth doing an export at the end of that period rather than waiting for the quarterly cycle. The export process takes a few minutes to request and less than a day to receive, so the overhead is low.