Technology companies and child protection agencies will be granted permission to evaluate whether AI systems can generate child abuse material under recently introduced British laws.
The announcement came as a child protection watchdog published findings showing that cases of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the government will permit approved AI developers and child protection groups to examine AI models – the underlying systems for conversational AI and image generators – and verify they have adequate safeguards to prevent them from creating depictions of child sexual abuse.
The measures are "ultimately about preventing exploitation before it occurs," said Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems early."
The changes were needed because it is illegal to create or possess CSAM, meaning that AI developers and other parties could not generate such images even as part of a testing process. Until now, officials had to wait until AI-generated CSAM appeared online before they could act.
This law is aimed at averting that issue by helping to halt the production of those images at their origin.
The changes are being introduced by the authorities as revisions to the criminal justice legislation, which is also implementing a ban on owning, creating or sharing AI models designed to create child sexual abuse material.
Recently, the minister visited the London headquarters of Childline and listened to a mock-up of a call to counsellors featuring a report of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people facing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents," he said.
A prominent internet monitoring organization reported that cases of AI-generated exploitation material – such as webpages that may include multiple files – had more than doubled so far this year.
Instances of the most severe category of content – the most serious form of exploitation material – rose from 2,621 visual files to 3,086.
The law change could "constitute a vital step to guarantee AI tools are secure before they are released," commented the head of the online safety foundation.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving criminals the ability to make potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which further exploits survivors' trauma, and renders young people, particularly girls, more vulnerable both on- and offline."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline conducted 367 counselling sessions where AI, chatbots and related terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.