Why computer-generated child abuse is the next crime wave waiting to happen


Browse through the public image library of Openjourney, an artificial intelligence tool used to create computer-generated art, and you will find an impressive collage of hyper-realistic portraits, vivid science-fiction scenes and atmospheric landscapes.

The free software is one of a spate of AI programs to have exploded in use in recent months, allowing users to create computer-generated art in seconds with just a few words of instruction. Lifelike images of Pope Francis in a bright white puffer coat and apparent pictures of Donald Trump being arrested have been created and spread across the web this year.

While these viral images were amusing distractions, image-creation AI is also being exploited by those with darker motives.


Openjourney already appears to have been used by paedophiles to create illegal child abuse images, according to Henk Van Ess, a Dutch investigator and researcher.

Van Ess says that while researching AI software he stumbled across the illegal material on an online forum that included a rolling feed of images created by the software.

“I’m a father of two children, my children could come to this tool, you can type in terrible stuff and there’s no restriction,” he says.

AI leaders spent last week warning that the technology they have helped create risked triggering human “extinction” and should be treated as a threat on the same level as nuclear war. Rishi Sunak will raise these existential concerns during a visit to the White House this week.

For now, those warnings remain hypothetical. Child safety campaigners have raised concerns that they will distract from the real harm the technology risks causing today.

“When we’re focusing on those risks, we’re not looking at the ways in which AI technology can be used to extend and compound the real-world risks that we’re seeing today, such as sexual abuse,” says Andy Burrows, an online safety consultant and the former head of child safety at the NSPCC.

“We are instead being distracted by the long-term effects, and what that means is history is repeating itself: we’re moving from current technologies, where the risks to children have not been properly addressed, to new kinds of technology in which those risks are sharply heightened.”

Burrows says paedophiles are already taking advantage of the rapid rise of generative AI systems, for example by using the software to automate online grooming conversations, or to blackmail children by creating compromising fake voice recordings.

However, the biggest concern is that image generation tools will lead to industrialised production of child abuse images, overwhelming attempts to combat it. Paedophiles have often taken advantage of novel technologies, from end-to-end encryption to file sharing and social media, and AI may be no different.

AI-generated pictures are often nearly indistinguishable from real photographs, and computer-generated abuse material is just as illegal to produce or possess as genuine images.

Last week, Dan Sexton, of the Internet Watch Foundation, the UK’s hotline to report child abuse images, warned that mass-scale production could leave investigators drowning in artificially simulated pictures of abuse, and unable to identify real victims.

“If AI imagery of child sexual abuse does become indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist, to the detriment of real victims,” he said.

Image-generation tools have not been built entirely without safeguards. Major tools such as Stable Diffusion, run by the London-based Stability AI, have banned thousands of keywords associated with producing illegal content and cracked down on pornography.

“Over the past seven months, Stability AI has taken numerous steps to significantly mitigate the risk of exposure to NSFW [not safe for work] content from our models. These safeguards include the development of NSFW-content detection technology to block unsafe and inappropriate material from our training data,” a spokesman for Stability AI said.

“Stability AI strictly prohibits any misuse for illegal or unethical purposes across our platforms, and our policies are clear that this includes [child sex abuse material].”

However, rapid improvements in the technology have led to a proliferation of DIY alternatives that do not introduce such controls. Van Ess said Openjourney, created by the developers of the website PromptHero, had few such limitations. Javi Ramirez and Javier Rueda, the website’s owners, did not respond to emails and other requests for comment.

Many software developers open-source their code, meaning they publish it freely to encourage people to spot bugs or potential improvements.

However, the fact that the code can easily be downloaded and edited means tools could be remodelled to focus on creating illegal content.

“It will be a very short period of time before people really start explicitly trying to build models to do this kind of thing, if they haven’t already,” says Michael Wooldridge of the Alan Turing Institute, Britain’s premier organisation for data science and artificial intelligence.

“If it became possible for you to do it from your living room, then I think we would likely see an explosion of this kind of stuff.”

Researchers have also warned that child abuse material on the web could be included among the billions of images used to train image generation systems. LAION, a German non-profit that makes AI data sets, recently removed a link whose metadata indicated it featured child abuse imagery. The image’s description was in Portuguese, meaning it was missed by English-language tools for identifying illegal content.

LAION said there was no proof the link, which is now offline, contained child abuse material, and that no other cases had been reported to it, but said that it had revised its filtering tools to include other languages.

“The way these datasets work [is] you indiscriminately scrape the web, sometimes legally, sometimes illegally,” says Hany Farid of the UC Berkeley School of Information, who specialises in detecting digitally manipulated images. “Nobody’s going in and looking at 5bn images.”

One online safety executive says AI-generated child abuse material will be near-impossible to track online. While social networks have successfully used AI to detect illegal content such as terrorist propaganda, doing the same with child images is difficult.

Since storing child abuse images is itself illegal, detection programmes cannot legally be trained to spot offending material, as they have no data sets to draw on.

The widely used alternative, a database of digital fingerprints (“hashes”) used to identify child abuse images that have already been reported, is useless against newly created images.
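The mechanics are easy to sketch. Below is a minimal, illustrative example in Python of how this kind of hash matching works, using the open-source Pillow and imagehash libraries; the stored fingerprint and the distance threshold are hypothetical, not drawn from any real system. A perceptual hash of a candidate image is compared against fingerprints of previously reported images, which is exactly why a freshly generated image, never reported before, produces no match.

# A minimal sketch of perceptual-hash matching, assuming the open-source
# Python libraries Pillow and imagehash. The known_hashes set and the
# max_distance threshold are hypothetical illustrations only.
from PIL import Image
import imagehash

# Fingerprints of previously reported images (hypothetical value).
known_hashes = {imagehash.hex_to_hash("d1c4f0a89b3e6d27")}

def is_known(path, max_distance=4):
    # Compute a perceptual hash of the candidate image.
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two hashes gives their Hamming distance; a small
    # distance means the images are near-duplicates.
    return any(candidate - known <= max_distance for known in known_hashes)

# A newly generated image has never been reported, so its hash matches
# nothing in the database and is_known() returns False.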

The National Crime Agency said it was looking at working with internet companies on the problem.

Tech giants might soon face stronger incentives. Baroness Kidron, the crossbench peer who has been pushing for greater protections for children for years, is proposing an amendment to the Government’s Online Safety Bill that would ensure it covers AI-generated content, meaning greater punishments for websites that host it and fail to crack down on offending material.

The pace of AI development, however, means that it is already being used before laws can be written to stop it. Criminals have often exploited technology faster than authorities can respond. Tragically, AI may be no different.