UK Technology Companies and Child Protection Agencies to Test AI's Ability to Create Exploitation Content
Technology companies and child safety organizations will receive authority to evaluate whether artificial intelligence systems can generate child abuse material under new UK legislation.
Substantial Increase in AI-Generated Illegal Material
The announcement came alongside findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have risen sharply in the past twelve months, from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, authorities will permit designated AI developers and child safety groups to inspect AI models – the underlying technology behind conversational AI and image generators – and verify that they have sufficient safeguards to stop them from creating depictions of child exploitation.
"This is fundamentally about stopping exploitation before it occurs," said the minister for AI and online safety, adding: "Experts, under strict protocols, can now detect risk in AI models early."
Addressing Legal Obstacles
The changes address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and other parties could not generate such images as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.
The new law aims to avert that problem by making it possible to stop the creation of these images at their source.
Legislative Framework
The government is introducing the amendments as modifications to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI models designed to create child sexual abuse material.
Practical Impact
This week, the minister toured Childline's London base and listened to a mock-up of a call to counsellors involving an account of AI-based exploitation. The scenario portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about young people experiencing extortion online, I feel intense frustration, and parents feel justified anger," he said.
Concerning Data
A leading internet monitoring foundation said that instances of AI-generated abuse content – where a single webpage may contain numerous images – had risen significantly so far this year.
Cases of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086 over the same period.
- Female children were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Depictions of children aged newborn to two rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the head of the online safety foundation.
"AI tools have made it so victims can be targeted all over again with just a few simple actions, giving offenders the capability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Material which further compounds survivors' trauma and makes children, particularly girls, more vulnerable both on and offline."
Counseling Interaction Information
Childline also published details of counselling interactions where AI has been referenced. AI-related harms discussed in the sessions include:
- Employing AI to evaluate weight, physique and appearance
- AI assistants discouraging young people from talking to safe adults about abuse
- Being bullied online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, conversational AI and associated terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.