UK Tech Firms and Child Safety Agencies to Examine AI's Capability to Generate Exploitation Images

Technology companies and child protection agencies will be granted permission to assess whether artificial intelligence systems can produce child exploitation images under recently introduced UK legislation.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the authorities will allow designated AI developers and child safety organizations to examine AI models – the foundational technology for chatbots and image generators – and verify they have sufficient safeguards to stop them from producing images of child exploitation.

Kanishka Narayan said the measures were "fundamentally about stopping exploitation before it occurs," adding: "Experts, under strict conditions, can now identify the danger in AI models early."

Addressing Regulatory Challenges

The changes have been introduced because it is illegal to produce and own CSAM, meaning that AI creators and others cannot create such images as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This legislation is aimed at preventing that issue by helping to stop the creation of those images at their origin.

Legislative Framework

The amendments are being introduced by the authorities as modifications to the criminal justice legislation, which is also establishing a ban on owning, creating or distributing AI models developed to generate child sexual abuse material.

Practical Impact

This week, the minister toured the London headquarters of Childline and heard a simulated call to counsellors involving an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about children facing extortion online, it is a source of intense anger in me and justified anger amongst families," he stated.

Alarming Data

A leading online safety foundation stated that instances of AI-generated exploitation content – such as online pages that may include numerous images – had more than doubled so far this year.

Instances of category A material – the most serious form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, making up 94% of prohibited AI depictions in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are launched," commented the chief executive of the online safety organization.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the capability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies victims' trauma, and makes children, particularly girls, less safe both online and offline."

Support Interaction Information

The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots dissuading young people from talking to trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.

Mrs. Mindy Carey