Meta identifies networks pushing deceptive content likely generated by AI

NEW YORK (Reuters) - Meta said on Wednesday it had found "likely AI-generated" content used deceptively on its Facebook and Instagram platforms, including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers.

The social media company, in a quarterly security report, said the accounts posed as Jewish students, African Americans and other concerned citizens, targeting audiences in the United States and Canada. It attributed the campaign to Tel Aviv-based political marketing firm STOIC.

STOIC did not immediately respond to a request for comment on the allegations.

WHY IT'S IMPORTANT

While Meta has found basic profile photos generated by artificial intelligence in influence operations since 2019, the report is the first to disclose the use of text-based generative AI technology since it emerged in late 2022.

Researchers have fretted that generative AI, which can quickly and cheaply produce human-like text, imagery and audio, could lead to more effective disinformation campaigns and sway elections.

In a press call, Meta security executives said they removed the Israeli campaign early and did not think novel AI technologies had impeded their ability to disrupt influence networks, which are coordinated attempts to push messages.

Executives said they had not seen such networks deploying AI-generated imagery of politicians realistic enough to be confused for authentic photos.

KEY QUOTE

"There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn't really impacted our ability to detect them," said Meta head of threat investigations Mike Dvilyanski.

BY THE NUMBERS

The report highlighted six covert influence operations that Meta disrupted in the first quarter.

In addition to the STOIC network, Meta shut down an Iran-based network focused on the Israel-Hamas conflict, although it did not identify any use of generative AI in that campaign.

CONTEXT

Meta and other tech giants have grappled with how to address potential misuse of new AI technologies, especially in elections.

Researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos with voting-related disinformation, despite those companies having policies against such content.

The companies have emphasized digital labeling systems to mark AI-generated content at the time of its creation, although the tools do not work on text and researchers have doubts about their effectiveness.

WHAT'S NEXT

Meta faces key tests of its defenses with elections in the European Union in early June and in the United States in November.

(This story has been corrected to reflect that this is the first disclosure of text-based generative AI use, not of generative AI use altogether, in paragraph 4)

(Reporting by Katie Paul; Editing by Rod Nickel)