UK Tech Companies and Child Safety Officials to Examine AI's Ability to Create Exploitation Images
Tech firms and child protection organizations will be granted authority to evaluate whether AI systems can generate child abuse images under new UK laws.
Substantial Increase in AI-Generated Illegal Content
The announcement came alongside findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, authorities will permit approved AI developers and child safety groups to inspect AI models – the underlying systems behind chatbots and visual AI tools – and verify that they have sufficient protective measures to stop them from producing depictions of child exploitation.
"This is fundamentally about preventing abuse before it happens," declared the minister for AI and online safety, noting that specialists, under rigorous conditions, can now identify risks in AI models promptly.
Addressing Regulatory Obstacles
The changes address a legal obstacle: because it is illegal to produce or possess CSAM, AI developers and other parties could not attempt to generate such images even as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before acting against it.
The legislation aims to close that gap by enabling authorities to stop the creation of such material at source.
Legal Structure
The amendments are being added by the government as revisions to the criminal justice legislation, which is also implementing a prohibition on possessing, producing or distributing AI models designed to create child sexual abuse material.
Real-World Consequences
Recently, the official visited the London headquarters of Childline and listened in on a simulated call to counsellors involving an account of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children experiencing blackmail online, it fills me with extreme anger and causes justified concern amongst parents," he stated.
Alarming Data
A prominent online safety foundation reported that instances of AI-generated exploitation material – counted as online pages, each of which may contain multiple images – had more than doubled so far this year.
Cases of category A content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a vital step to ensure AI tools are safe before they are released," commented the head of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the capability to produce potentially limitless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' trauma and renders children, particularly girls, less safe both online and offline."
Counseling Interaction Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in these conversations include:
- Using AI to rate weight, body and appearance
- Chatbots discouraging young people from consulting safe adults about abuse
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.