Deepfakes are another front in the Israel-Hamas war that risk unleashing even more violence and confusion in the future: ‘This is moving incredibly fast’

Image credits, clockwise from top left: @GloOouD/X, VERIFY/YouTube, @AG_Journalist/X, @GloOouD/X, The Quint/YouTube

As dawn broke on Oct. 7, air-raid sirens blasted out across Tel Aviv, sending Michael Matias and his girlfriend catapulting out of bed and down to the bomb shelter in their apartment building. Inside, the messages on their cell phones revealed the horror that had set off the alarm: Hamas gunmen were waging mass slaughter on Israelis and seizing hundreds of hostages, less than an hour’s drive from where they huddled in safety.

Even amid the shock, Matias was struck by the implications for his tech startup. Late last year he had launched Clarity, an artificial intelligence company focused on detecting deepfakes in election campaigns; he believed such disinformation was an urgent threat to democracies. But with about 1,200 Israelis dead on Oct. 7 and the country at war, that mission would have to wait. He scrambled an emergency meeting with Clarity’s team that morning to set an action plan in motion. “We said to each other, ‘Our technology is going to be very meaningful here,’ ” says Matias, Clarity’s CEO.

In the chaos following the massacres—the deadliest in the country’s 75-year existence—Israel blocked Gaza’s supplies of water, food, and electricity and dropped thousands of bombs on what it claimed were Hamas targets in the packed coastal enclave that’s home to 2 million Palestinians. As the humanitarian crisis spiraled into a full-blown disaster, with more than 10,000 Palestinians, including civilians, killed within a month, macabre images flooded TV and phone screens, setting off spasms of rage and fraught protests worldwide. This was a war fought not only with munitions but with information, both real and fake. And along with the outrage on both warring sides, some people questioned whether the gruesome scenes were even real, or whether the images were so-called deepfakes created with the help of artificial intelligence.

In fact, AI-generated fakes have grown increasingly difficult to discern as the technology has improved. That has left startups like Clarity fighting “a cat and mouse game,” Matias says.

The question is, will the cat or the mouse win in the end? Some fear the ease of generative technology will let malicious users outsmart far less nimble tech companies and governments attempting to rein them in. “This is moving incredibly fast,” says Henry Ajder, a Cambridge, U.K.–based consultant on AI technologies for the British government as well as businesses including Adobe and Facebook parent Meta. “What is it going to look like in 20 years, or 10 years?” Ajder says. “The tools are going to get more and more accessible. We could be fundamentally unprepared when we do see a lot of deepfakes.”

Propaganda, the saying goes, is as old as war. But since OpenAI launched the first version of its AI chatbot ChatGPT last year, the explosive popularity of generative artificial intelligence has turbocharged regular users’ ability to create their own narratives. The implications are relatively trivial when the subject is Pope Francis in a puffer jacket or Kim Kardashian as a bus driver. But in a war, deepfakes can be used to sow confusion with potentially life-and-death consequences.

Within days of Russia invading Ukraine in February 2022, setting off Europe’s biggest land war in generations, Ukrainian President Volodymyr Zelensky appeared on Facebook telling his soldiers to surrender. On Twitter days later, Russian President Vladimir Putin appeared to command his forces to lay down their arms. Twitchy mannerisms and blurry camerawork quickly exposed both videos as fakes. But they signaled what could happen once AI tools improved.

Now that day has come, with generative AI tools easily accessible, whether in a government office or at home. The technology has uncorked a slew of manipulated content that would previously have required skilled techies to produce. In Israel’s war, it has come from all quarters, according to Layla Mashkoor of the Atlantic Council’s Digital Forensic Research Lab, part of a U.S. think tank that closely tracks social media sites in the conflict. People on both sides have spread deepfakes, she says, citing a pro-Israel Instagram account that featured an AI-manipulated image of crowds of Israelis cheering soldiers from their balconies. To many people, the tsunami of information flashing by on phone screens has made everything seem unreliable. “For even authentic images, there is a counterclaim, so it’s very difficult for people to find clarity,” Mashkoor says.

When Matias chose the name Clarity for his AI startup, it was that exact problem he aimed to fix. With the outbreak of war, the team swung into action. As hundreds of videos and photographs hit the internet, some shot by Hamas during its attack, or by Gaza residents in devastated neighborhoods, international media organizations began emailing Clarity for help in weeding out deepfakes, according to Matias, who declined to let Fortune reveal his clients.

The war spawned a wealth of fakery online: A clip of fashion model Bella Hadid apologizing for her pro-Palestinian sentiments was in fact synthesized audio. Another, of Jordan’s Queen Rania saying her country was “standing with Israel,” was AI-generated audio laid atop an appearance she had made on CNN. A Hamas video supposedly showing its fighters destroying an Israeli tank was in reality from the Ukraine war. And another video, showing Hamas downing two Israeli helicopters? That was lifted from the Arma 3 video game.

Clarity, headquartered in Palo Alto, Calif., and Tel Aviv, has also partnered with Israeli intelligence agencies during the war to determine which war footage is real and which is fake. Almost all of Clarity’s team, including Matias, spent years honing their skills in the Israel Defense Forces’ elite tech units during their compulsory national service. Matias himself served in the hyper-selective intelligence unit known as 8200, whose members have launched countless global startups after their military service.

With that pedigree, Matias has tapped a network of seasoned founders for both advice and capital. “Clarity is innovating on a new frontier,” says Udi Mokady, an angel investor in Clarity—and another 8200 veteran—who founded CyberArk, an identity-security company. Now CyberArk’s executive chairman, Mokady predicts the market for deepfake detection will rise sharply. “The awareness grew dramatically overnight with ChatGPT,” he says, “where people can create deepfakes from their homes.”

Rather than certifying content as purely real or purely fake, Clarity uses AI to track a range of data points in a video, such as a person’s facial tics or voice cadence, then uses neural networks to place the clip on a scale of certainty shown on a dashboard, from green to red. “It’s AI versus AI,” Matias says. Some videos, like the one showing the leader of the Iran-backed organization Hezbollah condemning the Oct. 7 attacks, are clearly deepfakes (“unfortunately,” quips Matias). But others are not so clear-cut and require human judgment rather than AI alone.
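
Clarity hasn’t published its model internals, but the dashboard Matias describes can be sketched in outline: run several independent detectors over a clip, fuse their outputs into one confidence score, and map that score onto the green-to-red scale. The sketch below is purely illustrative; the signal names, weights, and thresholds are hypothetical, not Clarity’s actual pipeline.

```python
# Illustrative sketch of a multi-signal deepfake "certainty" score.
# Signals, weights, and thresholds here are hypothetical; Clarity has
# not published its pipeline. This only mirrors the idea described above.

from dataclasses import dataclass

@dataclass
class SignalScore:
    name: str         # e.g. "facial_tics", "voice_cadence"
    fake_prob: float  # 0.0 = looks authentic, 1.0 = looks synthetic
    weight: float     # how much we trust this detector

def fuse(signals: list[SignalScore]) -> float:
    """Weighted average of per-signal probabilities that the clip is fake."""
    total = sum(s.weight for s in signals)
    return sum(s.fake_prob * s.weight for s in signals) / total

def dashboard_color(score: float) -> str:
    """Map a fused score onto the green-to-red scale shown on a dashboard."""
    if score < 0.33:
        return "green"   # likely authentic
    if score < 0.66:
        return "amber"   # not clear-cut: route to a human analyst
    return "red"         # likely deepfake

clip = [
    SignalScore("facial_tics", fake_prob=0.82, weight=1.0),
    SignalScore("voice_cadence", fake_prob=0.70, weight=0.8),
    SignalScore("frame_artifacts", fake_prob=0.55, weight=0.5),
]
score = fuse(clip)
print(f"{score:.2f} -> {dashboard_color(score)}")  # 0.72 -> red
```

The middle, amber band is where the not-so-clear-cut cases land: the ones Matias says still require human judgment.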

Clarity isn’t alone in its fight against deepfakes. Several similar startups have launched over the past few years, including Reality Defender in New York and Sentinel in Estonia. Tech giants are also focusing on fakery: Beginning in January, Meta will require political campaigns to flag any AI-generated content in Facebook ads, while Google now surfaces information about the origins of images.

Governments are also trying to take on deepfakes. In October, President Joe Biden issued an executive order on “safe, secure, and trustworthy” AI, which directed the Commerce Department to find ways to authenticate content as real and to watermark media created with AI. But the order came without threats of sanctions for violators.
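
The order leaves the mechanics to the Commerce Department and doesn’t prescribe a technique. The idea of watermarking can nonetheless be illustrated with the simplest (and least robust) approach: hiding a bit pattern in the least-significant bits of an image’s pixels. Production AI watermarks are designed to survive cropping and re-encoding; this toy sketch only shows the concept.

```python
import numpy as np

# Toy illustration of image watermarking: embed a bit string in the
# least-significant bit of each pixel. Real AI-provenance watermarks
# are far more robust; this is only a minimal sketch of the concept
# referenced in the executive order.

def embed(pixels: np.ndarray, bits: str) -> np.ndarray:
    """Write one bit into the LSB of each of the first len(bits) pixels."""
    out = pixels.flatten()  # flatten() returns a copy
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(b)  # clear the LSB, then set it
    return out.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> str:
    """Read the LSBs back out of the first n_bits pixels."""
    flat = pixels.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
mark = "1010011010"  # hypothetical provenance tag
stamped = embed(image, mark)
assert extract(stamped, len(mark)) == mark  # invisible to the eye, readable in code
```

A detector that later recovers the expected tag has evidence the media passed through a marking generator; real schemes must also resist attackers who deliberately try to strip the mark.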

In the same month, officials agreed to similar measures during a U.K. summit on AI attended by 28 countries, including China. And in the 27-nation European Union—where an AI Act that would enforce safety standards and fund startups is inching through a labyrinthine legislative process—top officials want to slap giant fines on social media platforms that fail to crack down on disinformation.

“The consensus increasingly is that there are catastrophic risks to be posed by AI,” says Ajder, the AI consultant in the U.K. “They want to avoid a situation where the Wild West we’ve seen over the last 18 months is perpetuated, in a way where the stakes just get ever higher.”

The stakes could hardly be higher than in Israel’s fierce war on Gaza, with U.S. warships stationed close offshore and Iran and Lebanon poised for possible battle. Clarity has been grappling with the implications through weeks of war as the team analyzes hundreds of harrowing videos and photos.

The effect on Clarity’s small team of watching often-horrifying videos is clear enough, and Matias says they will surely need post-traumatic therapy after the war ends. “We knew we were entering a war that is highly personalized,” he says, adding that the work has left his team feeling “a deep emotional load, and an incredible sense of importance.”


Detective force

As deepfakes proliferate, both corporate giants and startups have raced to detect them. These are some of the companies involved:

DeepMedia
Launched in 2017 by Stanford and Yale University graduates specializing in AI. From its headquarters in Oakland, it works with the U.S. Department of Defense, the United Nations, and tech companies to spot fake content, using neural network processing.

Reality Defender
A Manhattan-based company founded in 2021 by former Goldman Sachs executive Ben Colman, who says it is crucial to stop a deepfake in its tracks before it goes viral. The company raised $15 million in an October funding round, and aims to roll out new tools that can spot manufactured voices in real time.

Intel
Introduced a deepfake detector, FakeCatcher, in 2022, which it says can analyze videos in real time and deliver 96% accurate results within milliseconds. One rare technique analyzes the blood flow visible in a video’s pixels, gauging whether the image depicts a live person.
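
Intel hasn’t detailed FakeCatcher’s internals beyond the blood-flow idea, which rests on remote photoplethysmography (rPPG): live skin flushes minutely with each heartbeat, and the green channel of ordinary video registers it. A minimal sketch of that signal extraction, assuming a pre-cropped face region and steady lighting, might look like this; it is not Intel’s implementation.

```python
import numpy as np

# Minimal remote-photoplethysmography (rPPG) sketch: the principle behind
# "blood flow in the pixels." A live face shows a faint periodic color
# change at the heart rate; a synthetic face generally does not.
# Assumes frames are already cropped to the face; NOT Intel's code.

def pulse_strength(face_frames: np.ndarray, fps: float) -> float:
    """face_frames: (n_frames, h, w, 3) RGB video of a face region.
    Returns the fraction of spectral power in the human heart-rate band."""
    green = face_frames[..., 1].mean(axis=(1, 2))  # mean green value per frame
    green = green - green.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # ~42-240 beats per minute
    return spectrum[band].sum() / spectrum[1:].sum()

# Example: 10 seconds of 30-fps video with a synthetic 1.2 Hz "pulse."
t = np.arange(300) / 30.0
frames = np.full((300, 32, 32, 3), 128.0)
frames[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(pulse_strength(frames, fps=30.0))  # near 1.0: strong in-band signal
```

A strong, periodic in-band signal suggests a live subject; a near-flat spectrum in that band is one cue that the face may be generated.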

A version of this article appears in the December 2023/January 2024 issue of Fortune with the headline, "Going to war against deepfakes."

This story was originally featured on Fortune.com