Taylor Swift deepfake porn points to a fundamental problem: AI can make it, but can’t police it


Hello and welcome to Eye on AI.

Wall Street investors, along with almost anyone tracking the progress of the generative AI boom, are waiting to see what sort of quarterly earnings tech giants Microsoft and Alphabet post after the market close today. Many are hoping to see both companies put big top-line growth figures on the board thanks to sales of AI-enhanced products and cloud services. Analysts think Microsoft could see 15% revenue growth and an almost 19% jump in earnings, largely because more companies are using its AI cloud services and its new AI-enhanced software tools. That would be a sizable increase for such a giant company. On the other hand, if both companies disappoint, it may feed concerns that the AI boom is overhyped.

While we’re waiting for that news, let’s talk about some other stuff. Like porn. And Taylor Swift. Deepfake pornographic images of the music star went viral on social media platform X and on various Telegram channels this past week, underscoring the huge problem nonconsensual deepfake porn poses not just to Swift, but to women everywhere. Some are hopeful Swift will use her considerable cultural influence to create a groundswell of support for regulation that might actually do something to stem the tide of these sorts of deepfakes, which are often used to harass non-celebrities. And, in fact, several Congressional representatives introduced bills aimed at combating deepfake porn in response to the Swift deepfakes, and White House spokesperson Karine Jean-Pierre said legislation on the misuse of social media might be needed.

The question is exactly what form those laws should take. In the U.K., the new Online Safety Act puts responsibility on the people who create the images and post them online, making the sharing of nonconsensual pornography a crime. But it is unclear how easy the law will be to enforce, or how much attention police and prosecutors will devote to pursuing such cases. The creators of these images usually take steps to hide their identities, making such investigations technically difficult. The law also stops short of holding criminally liable the social media companies that allow these kinds of deepfakes to go viral. It does, however, require them to show they have systems in place to try to prevent the spread of nonconsensual porn and to remove the content quickly if it slips through their filters.


This is the kind of regulation that even some big tech CEOs have advocated in response to the problem of deepfakes and disinformation of all kinds. Stop it at the point of distribution, not the moment of creation. That’s what Microsoft CEO Satya Nadella said in recent comments at London’s Chatham House and in Davos. To paraphrase Nadella’s argument: Going after the people making AI models because they happen to be able to make deepfake porn is like suing Smith Corona because a bank robber used one of its typewriters to write a stickup note.

Then again, he would say that. Microsoft doesn’t have a major social network to police. But it does make and sell AI software. And as it turns out, there’s good evidence that it was Microsoft’s Designer software, which includes the ability to use natural language prompts to create images, that was used to create the Swift deepfakes. After tech publication 404 Media showed how easy it was to get around Microsoft’s guardrails to create Swift deepfakes, Microsoft strengthened some of those prompt restrictions.

What is needed is a multi-layered approach that addresses all three levels of the problem: laws that make it a criminal offense to create and distribute nonconsensual porn and deepfakes; laws that require AI model makers to have far more robust guardrails than they do currently; and, most importantly, laws that require social media companies to better filter out such imagery and prevent it from going viral.

The ease with which Designer’s guardrails can be overcome and the problem social media giants have in filtering out pornographic content stem from the same fundamental issue: Despite all their seeming sophistication and abilities to pass the bar exam or U.S. medical licensing exams, AI systems still lack anything approaching human-level understanding. Pornography is famously hard to define, even for humans. As Supreme Court Justice Potter Stewart famously quipped, he couldn’t define it, “but I know it when I see it.”

In theory, this is exactly the sort of problem at which modern AI, based on neural networks, should excel. One reason neural network-based deep learning caught on in the first place is that such software could classify images, such as telling photos of cats apart from ones of dogs, not based on some elaborate rules and definitions, but by developing an impossible-to-explain, almost intuitive sense of when an image depicted a cat or a dog.
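To make that contrast concrete, here is a minimal, purely illustrative sketch, written in PyTorch with random tensors standing in for real photos; it is not code from any system discussed here. It shows how a deep learning classifier learns a cat-versus-dog distinction from labeled examples rather than from hand-written rules:

```python
# Illustrative sketch only: a tiny classifier that learns "cat vs. dog"
# from labeled examples. The data is random noise standing in for photos.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # learned filters, not explicit rules
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)            # two labels: cat, dog

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "dataset": random images with random cat/dog labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(5):                           # a few gradient steps
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained weights encode the distinction, but nowhere in this code
# is there a rule defining what a cat or a dog actually is.
print(model(images).argmax(dim=1))
```

The judgment lives entirely in the learned weights; no line of the code defines either concept, which is exactly why the resulting sense of "cat" or "dog" is hard to explain, and hard to extend to fuzzier concepts.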

But it turns out pornography is a much more complex concept for AI to grasp than a cat or a dog. Some nudity is innocent. Some is not. And our deep learning classifiers have struggled to understand enough about semantic composition (the parts of an image that give it a particular meaning) and context to make those calls successfully. That’s why so many social media platforms end up blocking the distribution of innocent baby snaps or photos of classical sculptures that depict nude figures: the AI software powering their filters can’t tell the difference between these innocent images and porn. Laws such as the U.K.’s Online Safety Act wind up incentivizing companies to err on the side of blocking innocent images, since that keeps them from getting fined and from drawing lawmakers’ ire. But it also makes these platforms less useful.

The same goes for our image generation AI, which is also based on deep learning. You can’t simply create guardrails for these systems by telling them behind the scenes, “Don’t create porn.” Instead, you have to ban user prompts such as “Taylor Swift nude.” But, as it turns out, the same system will still create essentially the same image when prompted with “Taylor ‘singer’ Swift” and then, as 404 Media reported, “rather than describing sexual acts explicitly, describe objects, colors, and compositions that clearly look like sexual acts and produce sexual images without using sexual terms.” Again, this is because the image generator doesn’t have any understanding of what porn is. And as companies try to strengthen these guardrails, they render their own products less useful for legitimate use cases.
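To see why prompt-level guardrails are so brittle, consider a deliberately naive sketch of keyword-based filtering. This is an illustrative toy, not Microsoft’s actual guardrail logic, and the blocked phrases are assumptions made up for the example:

```python
# A deliberately naive prompt filter: reject a prompt only if it contains
# an exact blocked phrase. Illustrative only; not any vendor's real system.
BLOCKED_PHRASES = ["taylor swift nude", "taylor swift naked"]

def is_allowed(prompt: str) -> bool:
    """Allow the prompt unless it contains one of the blocked phrases verbatim."""
    normalized = " ".join(prompt.lower().split())
    return not any(phrase in normalized for phrase in BLOCKED_PHRASES)

print(is_allowed("Taylor Swift nude"))           # False: caught by the blocklist
print(is_allowed('Taylor "singer" Swift ...'))   # True: trivially rephrased, slips through
```

Because the filter matches strings rather than meaning, a trivially rephrased prompt sails straight through, which is essentially the weakness 404 Media demonstrated.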

This may be one of those AI problems that takes an entirely new AI architecture to solve. Yann LeCun, Meta’s chief AI scientist, has been advocating a new deep learning approach called the Joint Embedding Predictive Architecture (or JEPA), which tries to build AI models with a much more robust conceptual and compositional understanding of a scene. It is possible that an image classifier based on JEPA would be a better detector of Taylor Swift deepfake porn than our current models.
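For readers curious what “joint embedding predictive” means in practice, here is a highly simplified, hypothetical sketch of the core idea: predict the representation of a hidden image region from the representation of a visible one, rather than predicting pixels. It is not Meta’s I-JEPA code and omits essential pieces of the real method:

```python
# Simplified illustration of the joint-embedding predictive idea:
# predict the *embedding* of a target region from the context region.
import torch
import torch.nn as nn

embed_dim = 64
context_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, embed_dim))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, embed_dim))
predictor = nn.Linear(embed_dim, embed_dim)      # context embedding -> predicted target embedding

image = torch.randn(1, 3, 64, 64)                # stand-in image
context_crop = image[:, :, :32, :32]             # visible region
target_crop = image[:, :, 32:, 32:]              # region whose representation must be predicted

with torch.no_grad():                            # the target branch gets no gradients
    target_embedding = target_encoder(target_crop)

predicted = predictor(context_encoder(context_crop))
loss = nn.functional.mse_loss(predicted, target_embedding)
loss.backward()                                  # training would update the context encoder and predictor
print(float(loss))
```

The bet is that predicting in representation space, rather than pixel space, pushes the model toward the kind of compositional understanding of a scene that current classifiers lack.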

We’ll have to wait to hear from Yann whether this works for Taylor Swift deepfakes. In the meantime, expect deepfake porn to continue to be a scourge of social media.

There’s lots more to AI news, so read on. But, before you do: Fortune is always trying to make Eye on AI more valuable to our readers. If you could take a couple of minutes to give us your honest feedback by answering a few questions about your experience, I'd appreciate it. It shouldn't take you more than five minutes. You can find the link below. Thanks!

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction: Last Tuesday's edition of Eye on AI misstated a statistic Getty CEO Craig Peters used to illustrate the growth of AI-generated imagery. He said more AI-created images have been produced in the past 12 months than lens-based photographs, not that the number of AI-generated images produced in that period already exceeded the number of photographs produced throughout history.

This story was originally featured on Fortune.com