In 2025, lawmakers in the United Kingdom, Australia and France unveiled sweeping online-safety legislation aimed at shielding children from harmful digital content. Below is a closer look at what these laws demand, how they differ, and why specialists disagree on whether they can truly deliver a safer internet for young people.
The Three Flagship Laws at a Glance
United Kingdom – Online Safety Act
The UK’s Online Safety Act imposes a “duty of care” on social-media platforms, search engines and messaging services. Companies must proactively identify and remove illegal content, introduce rigorous age verification for services “likely to be accessed by children,” and publish annual transparency reports. Non-compliance can trigger fines of up to 10% of global turnover and, in extreme cases, criminal liability for senior executives.
Australia – Online Safety Amendment Bill 2025
Australia expanded the powers of its eSafety Commissioner. The regulator can now issue “instant takedown” notices for content deemed seriously harmful to minors, compel platforms to implement age checks, and order search results to be delisted within 24 hours. Repeat offenders face graduated penalties that can reach A$5 million per day.
France – Law on the Protection of Minors Online
France’s statute, often called Loi POM, centres on two pillars: mandatory “age assurance” for sites hosting user-generated content and strict design standards that ban manipulative “dark-pattern” interfaces. Enforcement rests with ARCOM, France’s media regulator, which can blacklist non-compliant domains and levy fines of up to 4% of annual revenue.
Key Common Requirements
While each law has unique features, three common obligations emerge:
- Age-verification or age-assurance: Platforms must establish a user’s age with “reasonable certainty,” a moving target that has ignited debate over privacy and feasibility.
- Proactive content moderation: Companies are expected to detect and remove child-sexual-abuse material, violent or self-harm content, and “pornography accessible to minors” without waiting for user reports.
- Transparency and reporting: Regular audits, risk assessments and public reports are mandatory, designed to let regulators (and parents) see whether platforms are meeting safety benchmarks; a simplified report structure is sketched after this list.
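To make the reporting obligation concrete, here is a minimal sketch of what a machine-readable transparency report could look like. The class name, fields and figures are invented for illustration; none of the three laws prescribes a specific schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TransparencyReport:
    """Hypothetical quarterly safety report; field names are illustrative."""
    platform: str
    period: str                          # e.g. "2025-Q1"
    takedown_notices_received: int
    takedowns_completed_within_24h: int
    age_checks_performed: int
    risk_assessment_published: bool

    def takedown_compliance_rate(self) -> float:
        """Share of notices actioned inside the 24-hour window."""
        if self.takedown_notices_received == 0:
            return 1.0
        return self.takedowns_completed_within_24h / self.takedown_notices_received

report = TransparencyReport("ExampleNet", "2025-Q1", 1200, 1140, 85_000, True)
print(json.dumps(asdict(report), indent=2))
print(f"24h takedown compliance: {report.takedown_compliance_rate():.1%}")
```

Even a schema this simple shows why regulators want structured filings: a single ratio like the one above lets auditors compare platforms without trusting prose summaries.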
The Technical Hurdles
Age-Verification Without Surveillance
Privacy advocates warn that robust age checks often require sensitive identity data. Biometric methods (facial age estimation), third-party “digital tokens” and on-device parental controls are being piloted, yet none satisfies every stakeholder. A critical question is whether introducing more personal data collection undermines the very safety these laws aim to enhance.
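The “digital token” approach can be made concrete with a short sketch: a trusted issuer (a bank, telecom or government ID service, say) signs a bare over-18 claim, and the platform verifies only the signature, never seeing a name or date of birth. This is a minimal illustration, not any scheme currently piloted; it assumes the Python `cryptography` package, and the token layout is invented. Real designs add replay protection and unlinkability, both omitted here.

```python
import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side (hypothetical trusted party that already knows the user's age) ---
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"age_over": 18, "expires": int(time.time()) + 86400}).encode()
signature = issuer_key.sign(claim)  # the claim carries no name, ID number or birthdate

# --- Platform side: checks the signature, learns nothing about identity ---
issuer_public_key = issuer_key.public_key()  # in practice, published by the issuer

def is_adult(claim: bytes, signature: bytes) -> bool:
    try:
        issuer_public_key.verify(signature, claim)  # raises if forged or altered
    except InvalidSignature:
        return False
    payload = json.loads(claim)
    return payload["age_over"] >= 18 and payload["expires"] > time.time()

print(is_adult(claim, signature))  # True
```

Even this design leaks something: the issuer learns which users requested tokens and when, which is one reason no piloted approach has yet satisfied every stakeholder.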
End-to-End Encryption vs. Proactive Scanning
The UK Act, in particular, obliges encrypted messaging services to detect illegal material “in an encrypted environment.” Cryptographers argue that such scanning forces providers to introduce “client-side backdoors,” weakening privacy for all users. Lawmakers counter that technical solutions, such as on-device hashing, can balance safety with encryption, though no consensus exists.
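The “on-device hashing” idea lawmakers point to can be sketched in a few lines: before a message is encrypted, the client hashes any attached media and refuses to send a file that appears on a distributed blocklist. Everything below is illustrative; deployed proposals use perceptual hashes (PhotoDNA-style) rather than the exact SHA-256 shown here, since an exact hash is defeated by changing a single pixel, and the blocklist entry is simply the hash of an empty file so the demo triggers.

```python
import hashlib

# Hypothetical blocklist pushed to the device as opaque hex digests.
# The single entry is the SHA-256 of an empty file, so the demo below matches it.
KNOWN_ILLEGAL_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def outgoing_media_allowed(media: bytes) -> bool:
    """Runs on the sender's device, on the plaintext, before any encryption."""
    return hashlib.sha256(media).hexdigest() not in KNOWN_ILLEGAL_HASHES

payload = b""  # stand-in for an attached image
if outgoing_media_allowed(payload):
    pass  # ...encrypt and send as normal
else:
    print("blocked before encryption")
```

The cryptographers’ objection is visible in the sketch itself: the check necessarily runs on plaintext the user intended to encrypt, which is why critics call it a client-side backdoor no matter where the hash list is stored.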
Early Indicators: Are Platforms Complying?
Because full enforcement deadlines are staggered through 2026, hard data remain limited. However, several early trends have surfaced:
- Major platforms, including Meta and TikTok, rolled out uniform global age-verification portals rather than tailoring compliance to each jurisdiction.
- Numerous adult content sites have geoblocked users in the UK and France until they can implement compliant age-assurance systems.
- Australia’s eSafety Commissioner reported a 34% rise in takedown notices issued in the first quarter of 2025 compared with the same period in 2024.
Arguments in Favour
Child-protection NGOs applaud the laws for finally giving regulators the “teeth” to force tech giants to prioritise safety. They emphasise:
- Significant fines that make non-compliance financially untenable;
- The first legal recognition that design choices (e.g., infinite scroll, autoplay) can be harmful, pushing companies to rethink user experience for minors;
- Greater public transparency, which allows researchers to evaluate algorithmic impacts on children.
Criticisms and Unintended Consequences
Skeptics raise several concerns:
- Free-speech risks: Vague definitions of “harmful” may push platforms to over-censor legitimate content.
- Barrier to entry: Smaller services may exit markets rather than shoulder compliance costs, reducing competition and innovation.
- Jurisdictional fragmentation: Divergent national rules could lead to a patchwork internet where content availability depends on the user’s location.
What to Watch in 2026 and Beyond
- The first major enforcement actions, particularly any criminal cases against executives, will signal how aggressively regulators intend to wield their new powers.
- Technical breakthroughs (or failures) in privacy-preserving age assurance will determine whether the model can scale worldwide.
- Ongoing litigation, especially around encryption and free speech, could reshape the scope of these laws before their effects crystallise.
Conclusion
2025 undeniably set a precedent for child-focused internet regulation, moving from voluntary codes to enforceable mandates. Whether these measures will meaningfully reduce the harms children experience online—or merely shift those harms out of view—depends on technical innovation, vigilant oversight and a delicate balance between safety and fundamental rights. The verdict remains open, but the regulatory die is cast.