UK Tech Companies and Child Safety Officials to Examine AI's Capability to Generate Abuse Content
Technology companies and child safety organizations will receive permission to assess whether AI systems can generate child abuse material under new UK legislation.
Significant Rise in AI-Generated Harmful Material
The announcement coincided with findings from a safety watchdog showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow approved AI developers and child protection groups to inspect AI models – the underlying technology for chatbots and visual AI tools – and verify they have sufficient protective measures to prevent them from producing images of child sexual abuse.
"This is fundamentally about stopping exploitation before it happens," said Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI models promptly."
Tackling Legal Challenges
The amendments have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and others cannot generate such images as part of a testing process. Until now, authorities could act only after AI-generated CSAM had been uploaded online.
This law aims to avert that problem by making it possible to stop the production of such images at source.
Legal Structure
The authorities are introducing the amendments as modifications to the criminal justice legislation, which also establishes a ban on possessing, creating or sharing AI systems designed to produce exploitative content.
Practical Consequences
This week, the official visited the London headquarters of Childline and heard a simulated call to counsellors featuring a report of AI-based exploitation. The interaction depicted a teenager requesting help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about children facing extortion online, it fills me with extreme frustration and gives families justified cause for concern," he said.
Concerning Statistics
A leading internet monitoring foundation reported that cases of AI-generated abuse material – such as webpages that may include numerous files – had significantly increased so far this year.
Cases involving the most severe content – the most serious category of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of children aged from birth to two years increased from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a crucial step to guarantee AI products are secure before they are launched," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and makes children, especially girls, less safe online and offline."
Support Session Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to rate body size and appearance
- AI assistants discouraging children from talking to trusted guardians about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.