UK Technology Firms and Child Safety Officials to Examine AI's Capability to Generate Exploitation Images

Technology companies and child protection organizations will be granted permission to assess whether AI tools can generate child exploitation images under new British legislation.

Significant Rise in AI-Generated Illegal Content

The declaration coincided with findings from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will permit designated AI companies and child protection organizations to inspect AI models – the underlying systems for chatbots and visual AI tools – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"This is fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."

Addressing Regulatory Obstacles

The changes have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before acting against it.

This legislation is designed to avert that issue by enabling authorities to stop the production of such material at source.

Legal Structure

The changes are being introduced by the government as modifications to the criminal justice legislation, which is also establishing a ban on possessing, creating or distributing AI systems developed to generate child sexual abuse material.

Practical Consequences

Recently, the minister toured the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based abuse. The call depicted an adolescent requesting help after being blackmailed with an explicit deepfake of himself, constructed using AI.

"When I hear about children facing blackmail online, it stirs extreme anger in me and justified concern amongst parents," he said.

Alarming Data

A leading online safety foundation reported that cases of AI-generated exploitation content – such as webpages that may contain multiple files – had more than doubled so far this year.

Instances of category A content – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were predominantly targeted, accounting for 94% of prohibited AI images in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are launched," commented the head of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the capability to make potentially limitless quantities of sophisticated, lifelike exploitative content," she added. "Material which further exploits survivors' suffering, and makes young people, particularly girls, less safe both online and offline."

Support Interaction Information

Childline also published details of support interactions where AI has been referenced. AI-related harms discussed in the sessions include:

  • Employing AI to evaluate weight, body and appearance
  • AI assistants dissuading young people from consulting safe adults about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, Childline conducted 367 counselling sessions where AI, chatbots and related terms were mentioned, significantly more than in the equivalent timeframe last year.

Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.

Grace Montoya