Public Access to Government AI: What the UK’s New Disclosure Rules Mean


The UK’s freedom-of-information (FOI) framework has just entered the age of artificial intelligence.
Following a precedent-setting request by New Scientist—which successfully obtained a minister’s ChatGPT prompt-and-response logs—regulators have clarified that material generated with, or by, AI systems is covered by existing transparency laws. Below is a concise breakdown of the new guidance, its legal roots, and the practical consequences for public bodies, journalists, technologists and citizens.

Why Was Clarification Needed?

Until now, departments differed in how they handled FOI requests for AI-related material. Some treated model prompts, internal logs or generated text as “ephemeral”, arguing they were not official records. Others claimed commercial confidentiality or cybersecurity risks. This patchwork approach threatens to undermine public oversight at the very moment government experimentation with large language models (LLMs), image generators and decision-support algorithms is accelerating.

What Exactly Has Been Confirmed?

The Information Commissioner’s Office (ICO) and the Cabinet Office have jointly stated that:

  • AI outputs, training data snapshots and system-interaction logs are “information held” under the Freedom of Information Act 2000 (FOIA) and Environmental Information Regulations 2004 (EIR).
  • Departments must proactively consider disclosure of such material when a valid request is received.
  • Standard FOI exemptions (national security, personal data, commercial interests) may still apply, but each must be specifically justified rather than invoked wholesale.
  • Deletion or non-retention of AI logs after a request is made could breach section 77 of FOIA (the criminal offence of altering or destroying information).

The Test Case: New Scientist’s ChatGPT Logs

Earlier this year, journalists asked the Department for Science, Innovation & Technology (DSIT) for transcripts of a minister’s interactions with OpenAI’s ChatGPT during policy brainstorming. After initial resistance, DSIT released the records—albeit with portions redacted—demonstrating that:

  • LLM prompts can contain policy-relevant reasoning, stakeholder lists and draft legislative language.
  • Such content is not automatically exempt from release, contradicting the belief that “machine chats” are private or trivial.

Legal Foundations: FOIA in a Digital Era

FOIA’s definition of “information” is technology-neutral: any data “recorded in any form” that a public authority holds. Case law (e.g., Information Commissioner v University of Newcastle, 2011) already extends this to e-mails and instant messages. The new guidance extends the same logic to the following categories, illustrated schematically after the list:

  • Model Inputs – prompts, training queries, fine-tuning parameters
  • Model Outputs – raw generations, summaries, risk scores
  • System Metadata – timestamps, user IDs, model version numbers
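
For concreteness, here is a minimal sketch of what one disclosable interaction record might look like, grouped into the three categories above. The dataclass and its field names are illustrative assumptions, not an official or mandated schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AIInteractionRecord:
    """Hypothetical shape of one FOI-disclosable AI interaction record."""
    # Model inputs
    prompt: str
    fine_tuning_params: dict = field(default_factory=dict)
    # Model outputs
    output_text: str = ""
    risk_score: Optional[float] = None
    # System metadata
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    user_id: str = ""
    model_version: str = ""

# Example record a department might retain (all values invented for illustration).
record = AIInteractionRecord(
    prompt="Summarise the consultation responses on the draft data-sharing clause.",
    output_text="(model-generated summary would be stored here)",
    user_id="policy-team-03",
    model_version="example-model-v1",
)
print(json.dumps(asdict(record), indent=2))
```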

Practical Implications for Public Bodies

Departments now face an operational task list:

1. Record Management

AI tools must be integrated into existing electronic records policies. That means systematic retention schedules, secure storage, version control and audit trails.
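
As a rough illustration of how a retention schedule might interact with the section 77 point above, the sketch below refuses to delete any record that is the subject of a live FOI request, regardless of the schedule. The two-year period and the hold flag are assumptions for illustration, not stated policy.

```python
from datetime import date, timedelta
from typing import Optional

# Assumed two-year retention schedule, purely for illustration.
RETENTION_PERIOD = timedelta(days=365 * 2)

def may_delete(created_on: date, foi_hold: bool, today: Optional[date] = None) -> bool:
    """Return True only if the record is past its retention period and not under an FOI hold.

    A record that is the subject of a live FOI request must be kept: deleting it
    after the request has been received could engage section 77 of FOIA.
    """
    today = today or date.today()
    if foi_hold:
        return False
    return (today - created_on) >= RETENTION_PERIOD

# A log older than the schedule still cannot be deleted while a request is pending.
print(may_delete(date(2022, 3, 1), foi_hold=True, today=date(2025, 3, 1)))   # False
print(may_delete(date(2022, 3, 1), foi_hold=False, today=date(2025, 3, 1)))  # True
```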

2. Redaction Workflows

Because LLM prompts often embed third-party or personal data, public bodies need automated or semi-automated redaction pipelines before releasing logs. Failure to redact properly risks data-protection breaches under UK GDPR.
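
A production pipeline would typically combine named-entity recognition with human review; the minimal sketch below (assumed patterns, not a compliant solution) only catches obvious identifiers such as e-mail addresses and UK-style phone numbers before a log is released.

```python
import re

# Deliberately rough patterns; real redaction also needs NER models and manual checks.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b")

def redact(text: str) -> str:
    """Replace obvious personal identifiers with markers before a log is disclosed."""
    text = EMAIL_RE.sub("[REDACTED: email]", text)
    text = PHONE_RE.sub("[REDACTED: phone]", text)
    return text

prompt = "Draft a reply to jane.doe@example.gov.uk, who can be reached on 020 7946 0123."
print(redact(prompt))
# -> "Draft a reply to [REDACTED: email], who can be reached on [REDACTED: phone]."
```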

3. Procurement & Contracting

Contracts with AI vendors must ensure that departments can access—and, when lawful, disclose—interaction data. “Black-box” cloud clauses that bar export of logs could place authorities in breach of FOIA.

Opportunities for the Public and Press

  • Accountability: Researchers can scrutinise whether AI advice diverges from final policy, revealing hidden influences.
  • Bias Detection: Released prompts allow civil-society groups to test for systemic biases or inappropriate content in government-used models.
  • Best-Practice Sharing: Other administrations may benchmark transparency protocols, creating an international norm for AI openness.

Key Limits and Remaining Questions

While the guidance is a significant step forward, certain grey areas persist:

  • Security Exemptions: Intelligence-related AI deployments will likely remain off-limits, raising debates about the boundary between national security and public interest.
  • Trade Secrets: Algorithms procured from private vendors may invoke section 43 (commercial interests). The ICO insists the public interest test still applies, but battles are expected.
  • Model Explainability: FOI can reveal inputs and outputs, yet the underlying weights of proprietary models might remain undisclosable.

Looking Ahead

As generative AI permeates Whitehall, from drafting speeches to summarising consultations, the volume of requestable material will grow rapidly. Transparent record-keeping and clear redaction guidelines can prevent a future in which critical policy reasoning is locked inside commercial chatbots.
The new UK rules set a notable global precedent: algorithmic governance cannot hide behind algorithmic opacity. If AI is shaping public decisions, the public is entitled to see how.
