Why Facebook needs to act quickly to stop ‘deep fake’ videos

Facebook (FB) is facing increased scrutiny over its policies on manipulated, or “deep fake,” videos after clips of CEO Mark Zuckerberg and House Speaker Nancy Pelosi went viral late last week and this May, respectively.

The social network needs to act quickly to address the problem. Congress, which convened a hearing on Thursday to discuss manipulated video and audio, is concerned that examples like the Pelosi clip could go mainstream and influence the 2020 U.S. presidential election. Beyond that, several experts who spoke with Yahoo Finance caution that deep fakes, along with other types of misinformation, will only grow more sophisticated if technology platforms like Facebook do not keep them in check.

A deep fake video of Facebook CEO Mark Zuckerberg is making the rounds. Source: Instagram

Late last week, artists Bill Posters and Daniel Howe uploaded to Instagram a deep fake video of Zuckerberg created with the help of technology from advertising company Canny. In the altered video, Facebook’s chief executive was digitally manipulated into discussing the power he wields, “with total control of billions of people's stolen data, all their secrets, their lives, their futures.” Meanwhile, the doctored video of Pelosi, which President Donald Trump tweeted in May, was edited so the House Speaker slurred her words and appeared impaired.


Both videos raised concerns over how Facebook handles the spread of misinformation, particularly deep fake videos, which are becoming more convincing. In the case of the Pelosi video, Facebook did not take it down but rather “deprioritized” it, or made it appear less often, on the social network — a decision Pelosi disagreed with, according to a Washington Post report on Tuesday. Meanwhile, the deep fake video of Zuckerberg remains on Instagram, but it too is now harder to find.
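Facebook has not published how deprioritization works, but conceptually it is a ranking penalty rather than a takedown: the video stays on the platform while surfacing far less often in feeds. A minimal sketch of that idea follows; the function names, fields, and penalty factor are illustrative assumptions, not Facebook’s actual system.

def rank_score(post: dict, penalty: float = 0.2) -> float:
    """Hypothetical feed-ranking score: flagged posts stay up but sink."""
    score = post["engagement_score"]
    if post.get("flagged_as_misinformation"):
        score *= penalty  # demote rather than delete
    return score

posts = [
    {"id": "doctored_clip", "engagement_score": 9.5, "flagged_as_misinformation": True},
    {"id": "news_story", "engagement_score": 6.0},
]
# The flagged clip, despite higher raw engagement, now ranks below the news story.
for post in sorted(posts, key=rank_score, reverse=True):
    print(post["id"], rank_score(post))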

Removing immunity

A doctored video of House Speaker Nancy Pelosi, slurring her words and seeming impaired, went viral in May. Source: Tom Williams/CQ Roll Call

Dr. Mary Anne Franks, a professor at the University of Miami School of Law, contends Facebook should be held responsible for misinformation like deep fake videos published on its platform. For now, however, the social network is shielded by Section 230 of the Communications Decency Act of 1996, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That provision, Franks explains, effectively immunizes Facebook from the consequences of hosting misinformation like deep fake videos.

“As it is, there’s little incentive for companies like Facebook to really crack down on this kind of content beyond some sort of media backlash, like there is right now,” says Franks, who suggests amending Section 230 so that tech companies must “earn” the immunity it confers.

Facebook has come under heavy scrutiny since the 2016 U.S. presidential election, making headlines over revelations that Russia used the social network to meddle in the election. Lawmakers, the media, and the public have questioned whether Facebook should play a more aggressive role in policing content and the spread of misinformation.

Facebook, for its part, acknowledges the clock is ticking to take more action.

“Leading up to 2020 we know that combating misinformation is one of the most important things we can do,” a Facebook spokesperson told Yahoo Finance in a statement. “We continue to look at how we can improve our approach and the systems we've built. Part of that includes getting outside feedback from academics, experts and policymakers.”

Explicit labels

House Intelligence Committee Chairman Adam Schiff (D-CA) listening to testimony from experts on the subject of deep fake videos during a hearing in the Longworth House Office Building on Capitol Hill on June 13, 2019 in Washington, D.C. Source: Chip Somodevilla/Getty Images

At the very least, the social network should consider explicitly labeling deep fake videos on the platform as such, so Facebook users know what they’re watching from the outset. As it stands, Facebook only flags a deep fake video as cause for concern when users try to share it. Try sharing the doctored Pelosi video, for instance, and you’re notified that there is “additional reporting,” with buttons you can click to read articles from organizations including Factcheck.org, Lead Stories, PolitiFact, the Associated Press, and 20 Minutes.
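To illustrate the gap the article points to, here is a minimal sketch of that share-time warning flow; the data, names, and messages are hypothetical, not Facebook’s actual code. Note that nothing fires while a user merely watches the video, which is the behavior an explicit up-front label would change.

# Hypothetical fact-check registry: video ID -> organizations with related reporting.
FACT_CHECKS = {
    "doctored_pelosi_clip": [
        "Factcheck.org", "Lead Stories", "PolitiFact",
        "Associated Press", "20 Minutes",
    ],
}

def on_share_attempt(video_id: str) -> None:
    """Warn only at share time, mirroring the flow described above."""
    sources = FACT_CHECKS.get(video_id)
    if sources:
        print("There is additional reporting on this video:")
        for source in sources:
            print(f"  - Read coverage from {source}")
    else:
        print("Share completed.")

on_share_attempt("doctored_pelosi_clip")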

“Labeling, in particular, is a middle-ground position that we have not fully tested,” explains Robert Chesney, associate dean for academic affairs at the University of Texas School of Law, who adds that giving Facebook the power to ban such content outright could be construed as censorship. “We should try labeling before we go around and just deleting content for people, de-platforming people or suppressing their content so no one can find it, bearing in mind that it’s a slippery slope that sometimes is going to involve political speech. … Maybe we’d be smart, if we proceed with baby steps, including encouraging the companies as an initial matter simply to label more aggressively without actually silencing speech.”

The stakes are high, points out Susan Etlinger, an Altimeter Group analyst who specializes in AI ethics.

“We are now in a world where truth can so easily be manipulated,” she says. “So my challenge for Mark Zuckerberg and the CEOs of other social platforms would be to extrapolate out 3 years, 5 years, 50 years. Where will we be then? Granted, Facebook and Twitter can’t singlehandedly ensure societal stability. That’s completely unrealistic. But they don’t have to contribute to instability.”
