Meta Removes AI-Generated Influence Campaigns in China, Israel
(Bloomberg) -- Meta Platforms Inc. removed hundreds of Facebook accounts associated with covert influence campaigns from China, Israel, Iran, Russia and other countries, some of which used artificial intelligence tools to generate disinformation, according to the company’s quarterly threat report.
Meta, the parent of Facebook, Instagram and WhatsApp, has seen threat actors rely on AI to produce fake images, videos and text in an effort to influence users on its sites. But the use of generative AI didn’t affect the company’s ability to disrupt those networks, Meta said Wednesday in the report.
Among the disinformation campaigns, the company found a deceptive network from China sharing AI-generated poster images of a fictitious pro-Sikh movement, and an Israel-based network posting AI-generated comments praising Israel’s military under the pages of media organizations and public figures. The company said it removed many of those networks before they were able to build audiences among authentic communities.
“Right now we’re not seeing gen AI being used in terribly sophisticated ways,” Meta’s policy director of threat disruption, David Agranovich, said Tuesday during a press briefing. Tactics such as creating AI-generated profile photos or using artificial intelligence to produce large volumes of spammy content haven’t been effective so far, he said.
“But we know that these networks are inherently adversarial,” Agranovich said. “They’re going to keep evolving their tactics as their technology changes.”
Social media platforms such as Meta’s Facebook, ByteDance Ltd.’s TikTok and Elon Musk’s X have struggled with the influx of fake and misleading AI-generated content on their sites. This year alone, doctored audio of US President Joe Biden and fake images of the Israel-Hamas conflict circulated on social media, gathering millions of views.
Nick Clegg, Meta’s president of global affairs, has been vocal about the need to detect and label AI-generated content, especially as the company prepares for the 2024 election cycle. Global elections will take place in more than 30 countries this year, including many places where the company’s apps are widely used, such as the US, India and Brazil.
Clegg has said that creating an industry standard around watermarking is the “most urgent task facing us today.” Meta has been working on the ability to detect and label images that were created by tools from AI companies such as Alphabet Inc.’s Google and OpenAI. The company has already started adding visible markers to some images, as well as invisible markers and identifying information in image files.
Meta recently updated its policies to label misleading content on its site that is AI-generated, rather than remove it. The company also requires advertisers to disclose when they use AI to create Facebook or Instagram ads related to social issues, elections or politics, but the company doesn’t fact-check political ads by politicians.
©2024 Bloomberg L.P.