In 2025, a single freedom-of-information (FOI) request filed by New Scientist did more than expose the private musings of a cabinet minister—it rewrote how UK transparency laws apply to artificial-intelligence tools such as ChatGPT. Below, we unpack why that seemingly technical dispute now stands as a landmark in digital accountability.
The Landscape Before 2025
The UK’s Freedom of Information Act 2000 was drafted long before conversational AI existed. While it clearly covered emails, text messages and handwritten notes, the status of chatbot interactions—often stored on third-party servers and generated in real time—remained ambiguous. Ministers could argue that AI prompts were merely “draft thinking” or “ephemeral data,” and thus exempt.
The Catalyst: A Simple, Precise Request
In early 2025, New Scientist requested the ChatGPT conversation logs of the then-technology secretary, covering discussions on post-quantum encryption policy. Those logs, one entry of which is sketched after the list below, included:
- Prompts entered by the minister.
- Model responses that informed official briefings.
- Metadata showing timestamps and departmental devices used.
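To make the request’s scope concrete, here is a minimal sketch of what one exported log entry might look like. Every field name is an illustrative assumption for this article, not OpenAI’s export schema or the department’s actual record format.

```python
# Hypothetical layout of a single exported chatbot log entry.
# All field names here are assumptions, not a real vendor schema.
log_entry = {
    "conversation_id": "c-7f3a",          # groups a prompt with its responses
    "timestamp": "2025-02-11T09:42:17Z",  # when the prompt was submitted
    "device_id": "DSIT-LAPTOP-0482",      # departmental device (metadata)
    "role": "user",                       # "user" for prompts, "assistant" for replies
    "content": "Summarise the risks of delaying post-quantum migration.",
}
```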
Although narrow in scope, the request forced the Cabinet Office to confront an uncomfortable question: Are AI-generated drafts used to shape policy subject to public scrutiny?
The Government’s Initial Rejection
The Cabinet Office denied the request, citing:
- Section 35 (Formulation of Government Policy): Claiming disclosure would inhibit frank policy discussion.
- Section 22 (Information Intended for Future Publication): Arguing summaries might be released later.
- Data-hosting Concerns: Saying the logs resided on OpenAI servers outside direct departmental control.
Appeal and the ICO’s Ground-breaking Decision
New Scientist appealed to the Information Commissioner’s Office (ICO). In a decisive ruling, the ICO held that:
- Chatbot prompts are analogous to emails or memos when used for official work.
- Physical custody is irrelevant; what matters is substantive control. If ministers can retrieve the logs, the logs are subject to FOI.
- The public interest in understanding how AI shapes policy outweighed potential chilling effects.
The Cabinet Office released heavily redacted logs within 30 days, and the government opted not to pursue further judicial review—effectively cementing the precedent.
Why This Matters
The decision reverberated far beyond one department:
1. Redefining “Recorded Information”
FOI officers now treat AI prompts, model outputs and any sources a model cites as records, closing what many transparency advocates considered an AI-sized loophole.
2. Vendor Accountability
Departments signing new AI contracts must ensure that records-retention clauses mirror FOI requirements, forcing suppliers to provide timely, complete data exports; a minimal completeness check is sketched after this list.
3. Policy-making Integrity
Knowing that chatbot consultations may become public, civil servants are re-evaluating how they phrase queries and validate AI outputs, potentially improving rigour.
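The export obligation in point 2 can be checked mechanically. Below is a minimal sketch of such a check, assuming entries carry ISO-8601 timestamps like the hypothetical record sketched earlier; find_retention_gaps is an illustrative helper, not part of any real FOI tooling.

```python
from datetime import date, datetime, timedelta

def find_retention_gaps(entries, start: date, end: date):
    """Return each date in [start, end] for which the export contains
    no log entries at all; a crude completeness check, not a legal test."""
    covered = {
        datetime.fromisoformat(e["timestamp"].replace("Z", "+00:00")).date()
        for e in entries
    }
    day, gaps = start, []
    while day <= end:
        if day not in covered:
            gaps.append(day)
        day += timedelta(days=1)
    return gaps
```

A department could run a check like this on each supplier export before the statutory response deadline, flagging days with no surviving records for follow-up.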
Challenges and Critiques
Not everyone applauded the outcome:
- Chilling Effect: Some policymakers fear reduced willingness to experiment with AI tools.
- Privacy Concerns: Staff names occasionally appeared in logs; unions demanded clearer redaction guidelines.
- Technical Complexity: Extracting complete, tamper-proof logs from proprietary systems still poses logistical hurdles; one common tamper-evidence technique is sketched below.
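On that last point, one widely used tamper-evidence technique is hash chaining: each exported record carries a SHA-256 digest covering both its own content and the previous record’s digest, so altering any record invalidates every digest after it. A minimal sketch follows, assuming the dictionary-style records used above; it is not any vendor’s actual mechanism.

```python
import hashlib
import json

def chain_records(records):
    """Attach a SHA-256 digest to each record that covers the record's
    content plus the previous digest; editing any record breaks every
    digest after it."""
    prev = "0" * 64  # fixed genesis value
    chained = []
    for rec in records:
        payload = json.dumps(rec, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        chained.append({**rec, "chain_hash": prev})
    return chained

def verify_chain(chained):
    """Recompute the chain and return the index of the first tampered
    record, or None if the export is intact."""
    prev = "0" * 64
    for i, rec in enumerate(chained):
        body = {k: v for k, v in rec.items() if k != "chain_hash"}
        payload = json.dumps(body, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        if prev != rec["chain_hash"]:
            return i
    return None
```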
Global Ripple Effects
Within months, similar test cases arose under Canada’s Access to Information Act and Australia’s Freedom of Information Act, with claimants citing the UK ruling as persuasive authority. Even the EU Ombudsman initiated a study on AI transparency across member states.
What Comes Next?
The Cabinet Office is drafting supplementary guidance that will:
- Set minimum retention periods for AI interactions.
- Define standard redaction criteria for personal data in chatbot logs (a toy illustration follows this list).
- Explore automated, audit-ready export tools for large language models deployed across government.
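What standard redaction criteria could mean in practice is easiest to show in code. The toy pass below masks a supplied list of staff names before a log is released; both the function and the example name are hypothetical, and real guidance would pair pattern rules with named-entity recognition and human review.

```python
import re

def redact_names(text: str, staff_names: list[str]) -> str:
    """Replace each known staff name with a redaction marker.
    A toy rule-based pass, not the forthcoming Cabinet Office guidance."""
    for name in staff_names:
        # \b word boundaries stop partial matches inside longer words
        text = re.sub(rf"\b{re.escape(name)}\b", "[REDACTED]",
                      text, flags=re.IGNORECASE)
    return text

# "Priya Shah" is a made-up name used purely for illustration.
print(redact_names("Reviewed by Priya Shah before briefing.", ["Priya Shah"]))
# -> "Reviewed by [REDACTED] before briefing."
```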
Conclusion
New Scientist’s FOI request transformed a grey area into settled law, ensuring that whatever new technologies emerge, the principle of democratic oversight keeps pace. As AI burrows deeper into governance, the 2025 precedent will likely serve as the cornerstone for future battles over algorithmic transparency.