British Tech Companies and Child Protection Officials to Test AI's Capability to Create Abuse Images
Technology companies and child protection organizations will receive authority to assess whether AI tools can generate child exploitation material under recently introduced UK legislation.
Substantial Increase in AI-Generated Harmful Material
The declaration coincided with findings from a safety watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, designated AI developers and child safety organisations will be permitted to examine AI models – the underlying systems behind chatbots and image generators – to ensure they have adequate safeguards against producing depictions of child sexual abuse.
"This is ultimately about preventing abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict protocols, can now detect the danger in AI systems early."
Addressing Legal Challenges
The changes address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and other parties could not generate such images as part of a testing process. Until now, officials had to wait until AI-generated CSAM had been uploaded online before acting on it.
This law is designed to avert that problem by making it possible to stop the creation of those images at source.
Legislative Framework
The changes are being introduced as amendments to the crime and policing bill, which also implements a ban on possessing, producing or distributing AI systems designed to create exploitative content.
Practical Consequences
This week, the minister toured the London headquarters of a children's helpline and listened to a simulated call to counsellors featuring an account of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about children experiencing blackmail online, it fills me with intense frustration, and parents with rightful anger," he said.
Concerning Data
A leading internet monitoring organization stated that cases of AI-generated abuse material – such as webpages that may contain numerous files – had more than doubled so far this year.
Instances of the most severe material – the most serious category of exploitation – increased from 2,621 files to 3,086.
- Female children were predominantly victimised, accounting for 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a crucial step to guarantee AI products are safe before they are launched," commented the head of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for survivors to be victimised repeatedly with just a few clicks, giving criminals the ability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies victims' trauma, and makes young people, particularly girls, less safe both online and offline."
Support Interaction Data
The children's helpline also published details of counselling interactions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to rate body size and appearance
- Chatbots discouraging children from talking to trusted adults about harm
- Facing harassment online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 support interactions in which AI, conversational AI and associated terms were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.