AI Advances Could Flood Internet with Child Sexual Abuse Videos, Watchdog Warns

AI advances may lead to an increase in AI-generated child sexual abuse material, warns watchdog. Credit: EconoTimes

Advances in artificial intelligence (AI) are enabling pedophiles to produce AI-generated videos of child sexual abuse, raising significant concerns among safety watchdogs. The Internet Watch Foundation (IWF), which monitors child sexual abuse material (CSAM) globally, has observed a worrying trend where AI technology is being exploited to create increasingly realistic and disturbing content.

The majority of cases identified by the IWF involve the manipulation of existing CSAM or adult pornography, where a child’s face is superimposed onto the footage. These partial deepfakes are created using AI models that are readily available online. The IWF has also found a smaller number of entirely AI-generated videos, typically around 20 seconds long. While currently of basic quality, these fully synthetic videos are expected to improve as the technology advances.

The IWF has expressed deep concern that the volume of AI-generated CSAM could increase significantly as AI tools become more accessible and sophisticated. The organization has drawn parallels to AI-generated still images, which have grown both more prevalent and more realistic as the underlying technology has improved.

In their monitoring, IWF analysts discovered numerous examples of partial deepfakes on dark web forums frequented by pedophiles. These forums anonymize users and shield them from tracking, making it difficult for law enforcement to intervene. In one snapshot study of a single dark web forum, the IWF found 12,000 new AI-generated images posted over a one-month period. Alarmingly, nine out of ten images were so realistic that they could be prosecuted under the same UK laws as real CSAM.

The IWF has also reported that offenders are selling AI-generated CSAM images online, sometimes in place of traditional non-AI CSAM. This trend highlights the increasing market for such material and the evolving methods predators use to exploit technological advancements for their perverse purposes.

The organization's chief executive emphasized the urgent need for proper controls to prevent generative AI tools from becoming a playground for online predators. The IWF is advocating for legal changes to criminalize the creation and distribution of guides for generating AI-made CSAM and the production of “fine-tuned” AI models capable of creating such material.

A proposed amendment to the data protection and digital information bill, tabled by child safety campaigner Baroness Kidron, aimed to address this issue by criminalizing the creation and distribution of AI models designed to produce CSAM. However, the bill was shelved after the general election called by Prime Minister Rishi Sunak in May.

The increasing prevalence of AI-generated CSAM also overwhelms law enforcement agencies in the United States. The Guardian recently reported that the sheer volume of this content is hampering efforts to identify and rescue real-life victims, highlighting the urgent need for international cooperation and robust regulatory frameworks to tackle this growing threat.

As AI technology continues to evolve, the IWF and other safety organizations are calling for immediate action to curb the misuse of these tools and protect vulnerable children from exploitation. The rise of AI-generated CSAM underscores the pressing need for vigilance, regulation, and global collaboration in the fight against child sexual abuse.
