Top Takeaways From The Facebook Papers (So Far)

Leaks of internal Facebook documents, brought about by former Facebook employee turned whistleblower Frances Haugen, have turned into a deluge, as journalists at more than a dozen outlets scrutinize the trove of tens of thousands of pages and publish their findings.

With damning new revelations seemingly every day, there’s a lot to keep track of.

Here are some of the biggest takeaways so far:

Facebook routinely gives conservative politicians and causes a pass

The documents suggest Facebook repeatedly and deliberately looks the other way instead of taking action against conservative politicians and causes, fearing any appearance of anti-conservative bias.

A December 2020 presentation reviewed by Politico specifically calls out Facebook’s public policy team, overseen by Joel Kaplan, a former Republican operative, for exempting right-wing publishers from punishment for spreading misinformation. Per Politico, Kaplan’s team has intervened on behalf of right-wing activists like Charlie Kirk, Diamond and Silk (prompting a public mea culpa from CEO Mark Zuckerberg), and Breitbart.

That’s in addition to granting a free pass to high-profile politicians and celebrities, who, thanks to an internal system known as “cross check,” enjoy immunity from enforcement actions.

The documents show Zuckerberg also puts his thumb on the scale from time to time. The CEO personally intervened to reinstate a false anti-abortion video to assuage conservative Republican politicians, for instance, and nixed plans to provide accurate Spanish-language voting information through WhatsApp’s “voting information center” because he thought it would seem partisan.

Zuckerberg may have lied under oath

Zuckerberg told Congress last year that Facebook removes “94%” of the hate speech it finds before it’s reported by a human, touting artificial intelligence as an effective means to clean up the platform.

But documents provided by Haugen ― and Haugen’s own congressional testimony ― show Facebook actually misses 93% to 95% of the hate speech on the platform.

The distinction between the two claims rests heavily on semantics: Zuckerberg’s figure counts only the small fraction of hate speech Facebook actually finds, essentially ignoring, for statistical purposes, the far larger share that goes undetected altogether.

If hate speech goes unreported, does it really exist? (Yes, yes it does.)

Facebook loves it when you’re angry — and wants you to stay that way

Facebook rewards and promotes content that results in anger, an emotion that keeps people engaged and on the site longer. Documents show the site awards five points for every “angry” emoji a piece of content yields, but just one point for a “like.”

Content that racks up more points earns a more prominent position in the News Feed, creating a feedback loop that rewards ever angrier and more emotionally charged posts.

“Outrage and misinformation are more likely to be viral,” wrote one Facebook researcher in a 2019 document. “We know that many things that generate engagement on our platform leave users divided and depressed.”

Facebook employees think the company ignored warning signs about the Jan. 6 insurrection

After the November 2020 election, Facebook rolled back policies aimed at stemming violence, misinformation and hate speech on the platform, then was caught flat-footed as “Stop the Steal” and its many copycat groups went on to violently overrun the U.S. Capitol.

Documents show that by Nov. 9, just days after the election, 10% of all U.S. views of political material on Facebook were of posts casting the results of the election as illegitimate.

Yet the company treated each bad actor as an individual rather than as part of a larger movement, an internal analysis shows, hindering its own enforcement and leading to what critics internally labeled a “piecemeal” approach.

Facebook employees voiced their anger afterward on internal message boards reviewed by CNN, even as Facebook Chief Operating Officer Sheryl Sandberg publicly dismissed the notion that insurrectionists might have used the site to coordinate the attack on the Capitol.

“There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions,” wrote one employee.

Another added, “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”

Facebook has a human trafficking problem — and it didn’t realize its extent (or worse, didn’t care) until Apple intervened

Documents reviewed by CNN show Facebook has struggled to control human trafficking on the platform, and may have been unaware of its extent until Apple threatened to remove Facebook and Instagram from the App Store over it in October 2019.

Apple was itself reacting to a then-recent BBC investigation into a bustling online slave market in the Middle East, facilitated in part by Instagram.

Documents show Facebook responded to the report by banning hashtags associated with the content and removing 703 Instagram profiles involved in the sale of what it labels “domestic servants.”

After Apple contacted Facebook, the company panicked, set up a round-the-clock working group, and removed more than 130,000 pieces of domestic servitude content.

“Removing our applications from Apple platforms would have had potentially severe consequences to the business, including depriving millions of users of access to IG & FB,” a November 2019 document titled “Apple Escalation on Domestic Servitude ― how we made it through this [Site Event]” states.

Internal documents show Facebook had been aware of Instagram profiles dedicated to selling domestic laborers in the Middle East and North Africa since at least March 2018.

“Was this issue known to Facebook before the BBC enquiry and Apple escalation?” the internal report asks. “Yes.”

CNN’s review of the documents shows Facebook still struggles with the problem, which internal reports acknowledge is difficult to detect.

Facebook’s conduct in the rest of the world is even worse than we assumed (and we assumed it was bad!)

Haugen has repeatedly accused Facebook of putting “profits before people.” Nowhere is that claim better illustrated than by Zuckerberg’s personal decision in late 2020 to drastically increase censorship of anti-government dissidents in Vietnam.

A Washington Post report found the company took action at the direct request of the Communist Party there, ahead of January’s party congress, where new government leaders are selected. Zuckerberg was reportedly worried that non-compliance would reduce Facebook’s market share in the country, where its various products are estimated to generate more than $1 billion in annual revenue.

Facebook’s failures elsewhere are less deliberate.

In non-English-speaking parts of the world, Facebook’s content screening efforts and technology lag far behind, giving bad actors even more room to operate than they already had.

A New York Times review of documents found Facebook devotes 87% of its global budget for combating misinformation to the United States and just 13% to the rest of the world, even though only about 10% of Facebook’s daily active users reside in North America.

In India, Facebook’s largest market, the problem is particularly acute. Hate speech, misinformation and glorifications of often-sectarian violence run rampant. Per the Times, India’s political parties employ bots and fake accounts to sow division and bolster their own image.

In the Arab world, Instagram’s filters only detected 6% of Arabic-language hate content in late 2020, an internal memo reviewed by Politico shows.

Similarly grim reports concerning the Middle East found ads attacking women and the LGBTQ community going unflagged, Egyptian users self-censoring for fear of arrest, and rival militias in Iraq posting child nudity and other prohibited content on one another’s Facebook pages in a bid to get their opponents de-platformed.

Haugen testified earlier this month that Facebook’s algorithms in Ethiopia are “literally fanning ethnic violence,” aggravating civil war there.

This is in addition to well-known earlier failures in Myanmar (where a United Nations report concluded Facebook played a “determining role” in genocide) and Sri Lanka (where it stoked similar violence).

Europe has seen profound negative effects as well, internal reports suggest.

“Political parties across Europe claim that Facebook’s algorithm change in 2018 [regarding social interactions] has changed the nature of politics. For the worse,” an employee wrote in an April 2019 internal post reviewed by NBC News.

The person went on to say Facebook was responsible for a “social-civil war” in Poland, borrowing a phrase from a political operative there. (Poland’s far-right Law and Justice party has seen a sharp resurgence since 2015, using its power to hollow out independent media and attempting to control the judiciary.)

This article originally appeared on HuffPost and has been updated.
