Meta Removes AI-Generated Influence Campaigns in China, Israel

(Bloomberg) -- Meta Platforms Inc. removed hundreds of Facebook accounts associated with covert influence campaigns from China, Israel, Iran, Russia and other countries, some of which used artificial intelligence tools to generate disinformation, according to the company’s quarterly threat report.

Meta, the parent of Facebook, Instagram and WhatsApp, has seen threat actors rely on AI to produce fake images, videos and text in an effort to influence users on its sites. But the use of generative AI didn’t affect the company’s ability to disrupt those networks, Meta said Wednesday in the report.

Among the disinformation campaigns, the company found a deceptive network from China sharing AI-generated poster images of a fictitious pro-Sikh movement, and an Israel-based network posting AI-generated comments praising Israel’s military under the pages of media organizations and public figures. The company said it removed many of those networks before they were able to build audiences among authentic communities.

“Right now we’re not seeing gen AI being used in terribly sophisticated ways,” Meta’s policy director of threat disruption, David Agranovich, said Tuesday during a press briefing. Tactics such as creating AI-generated profile photos or using artificial intelligence to produce large volumes of spammy content haven’t been effective so far, he said.

“But we know that these networks are inherently adversarial,” Agranovich said. “They’re going to keep evolving their tactics as their technology changes.”

Social media platforms such as Facebook, ByteDance Ltd.’s TikTok and Elon Musk’s X have struggled with the influx of fake and misleading AI-generated content on their sites. This year alone, doctored audio of US President Joe Biden and fake images of the Israel-Hamas conflict circulated on social media, gathering millions of views.

Nick Clegg, Meta’s president of global affairs, has been vocal about the need to detect and label AI-generated content, especially as the company prepares for the 2024 election cycle. Global elections will take place in more than 30 countries this year, including many places where the company’s apps are widely used, such as the US, India and Brazil.

Clegg has said that creating an industry standard around watermarking is the “most urgent task facing us today.” Meta has been working on the ability to detect and label images that were created by tools from AI companies such as Alphabet Inc.’s Google and OpenAI. The company has already started adding visible markers to some images, as well as invisible markers and identifying information in image files.

Meta recently updated its policies to label misleading content on its site that is AI-generated, rather than remove it. The company also requires advertisers to disclose when they use AI to create Facebook or Instagram ads related to social issues, elections or politics, but the company doesn’t fact-check political ads by politicians.


©2024 Bloomberg L.P.