British Technology Firms and Child Protection Officials to Examine AI's Ability to Generate Abuse Content
Technology companies and child protection agencies will receive authority to assess whether artificial intelligence systems can produce child abuse images under recently introduced UK laws.
Significant Increase in AI-Generated Illegal Material
The announcement coincided with findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will permit designated AI companies and child protection organizations to examine AI systems – the foundational technology for chatbots and visual AI tools – and verify they have adequate safeguards to prevent them from producing depictions of child sexual abuse.
"This is ultimately about stopping abuse before it happens," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the danger in AI models early."
Tackling Legal Challenges
The changes address a legal obstacle: because it is illegal to create and possess child sexual abuse material (CSAM), AI developers and others could not generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.
The new law is designed to prevent that problem by helping to halt the creation of such images at their source.
Legal Framework
The amendments are being introduced by the government as modifications to the criminal justice legislation, which is also establishing a ban on owning, creating or sharing AI models developed to generate exploitative content.
Practical Consequences
This week, the minister toured Childline's London headquarters and listened to a simulated conversation with counsellors involving a report of AI-based abuse. The mock call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I hear about young people facing blackmail online, it causes intense frustration in me and justified concern among parents," he stated.
Alarming Data
A prominent online safety foundation reported that instances of AI-generated exploitation material – counted as online pages, each of which may include multiple images – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – rose from 2,621 visual files to 3,086.
- Girls were predominantly victimized, making up 94% of illegal AI images in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a crucial step to guarantee AI products are secure before they are launched," stated the head of the online safety foundation.
"AI tools have made it so victims can be victimised all over again with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she added. "Material which further commodifies victims' suffering, and makes children, especially girls, less safe on and off line."
Support Interaction Information
The children's helpline also released details of support interactions in which AI was mentioned. AI-related risks raised in those conversations include:
- Using AI to assess weight, body shape and appearance
- Chatbots discouraging young people from talking to trusted adults about abuse
- Being bullied online with AI-generated material
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling interactions in which AI, conversational AI and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.