The Children’s Commissioner for England, Dame Rachel de Souza, is urging the government to prohibit apps that use artificial intelligence (AI) to create sexually explicit images of children. She emphasized the need for a total ban on “nudification” apps, which edit real photos to depict individuals as naked.
Dame Rachel warned that the government is allowing these applications to operate unchecked, leading to severe real-world consequences. A government spokesperson confirmed that child sexual abuse material is illegal and that new laws are being developed to address the creation, possession, or distribution of AI tools designed for this purpose.
Deepfakes, AI-generated videos, images, or audio made to appear authentic, are an increasing concern. In a recent report, Dame Rachel noted that this technology disproportionately affects girls and young women, with many apps appearing to work only on female bodies. As a result, many girls now avoid sharing images online to reduce the risk of being targeted, much as they take precautions for their safety offline.
Children fear that they could be targeted by “a stranger, a classmate, or even a friend” using technologies readily available on popular platforms. Dame Rachel stated, “The evolution of these tools is happening at such scale and speed that it can be overwhelming to grasp the dangers they pose.”
Under the Online Safety Act, sharing or threatening to share explicit deepfake images is illegal. In February, the government announced new laws to combat the generation of AI-produced child sexual abuse images, making it illegal to create, possess, or distribute such tools. However, Dame Rachel argues that these measures are insufficient, insisting that no nudifying apps should be allowed.
Rise in Reported Cases
The Internet Watch Foundation (IWF) reported a dramatic rise in AI-generated child sexual abuse cases, with 245 reports in 2024, up from 51 the previous year, an increase of 380%. IWF Interim Chief Executive Derek Ray-Hill noted that these apps are being misused in schools, leading to the rapid spread of explicit imagery.
A spokesperson for the Department for Science, Innovation and Technology condemned the creation and distribution of child sexual abuse material, including AI-generated images, as “abhorrent and illegal.” They highlighted that, under the Online Safety Act, platforms must remove such content or face severe penalties.
Dame Rachel is also advocating for the government to:
- Impose legal responsibilities on developers of generative AI tools to identify and mitigate risks to children.
- Establish a systematic process to remove sexually explicit deepfake images of children from the internet.
- Recognize deepfake sexual abuse as a form of violence against women and girls.
Paul Whiteman, general secretary of the NAHT school leaders’ union, echoed the commissioner’s concerns, stating that this issue needs urgent review as technology evolves faster than legal and educational responses.
Recently, media regulator Ofcom released its final version of the Children’s Code, mandating stricter age checks on platforms hosting harmful content to protect children, although Dame Rachel criticized the code for prioritizing tech companies’ business interests over children’s safety.
SOURCE: BBC