Facebook reportedly saw no problem even as internal memos flagged polarising content and hate speech in India

Indian Prime Minister Narendra Modi (L) and Facebook CEO Mark Zuckerberg shake hands after a Townhall meeting, at Facebook headquarters (AFP via Getty Images)

Facebook brushed aside persistent problems in its operations in India even as internal memos flagged the prevalence of hate speech and polarising content on the platform, according to a new report.

Three internal memos exchanged by Facebook staff between 2018 and 2020 pointed out numerous red flags in the platform’s operations in India, including a “constant barrage of polarising nationalistic content”, misinformation, and content denigrating minority communities in the country, according to the report published on Wednesday in the Indian Express newspaper.

Despite these red flags, raised by staff tasked with oversight functions, Facebook maintained that such hate speech and problematic content had a relatively low prevalence on the platform.


Redacted versions of internal Facebook documents leaked to the US Securities and Exchange Commission by former Facebook product manager and whistleblower Frances Haugen revealed that two reports within the company flagged hate speech and “problem content” in January and February 2019, ahead of India’s parliamentary elections.

In another report presented in August 2020, staff reportedly mentioned that Facebook’s artificial intelligence tools failed to pick up problematic content and hate speech as they were unable to “identify vernacular languages.”

At a 2019 internal review where these problems were flagged, Chris Cox, then Facebook’s vice president, said: “Survey tells us that people generally feel safe. Experts tell us that the country is relatively stable,” according to minutes of the meeting that form part of the leaked documents.

Mr Cox, who quit the company in March 2019 and later rejoined as chief product officer, noted, however, that problems in sub-regions within India could be lost at the country level.

The company did not respond to requests for comment from the Indian Express on Mr Cox’s meeting and the internal memos.

An earlier report on the documents leaked by Ms Haugen also noted that Facebook saw India as one of the most “at-risk countries” in the world.

It identified Hindi and Bengali languages as priorities for “automation on violating hostile speech,” adding that the company did not have enough local language moderators or content-flagging personnel to stop misinformation that spilled over to real-world violence.

In 2019, a Facebook researcher who set up a “test account” to assess the videos and groups recommended by Facebook’s algorithm found that the account’s feed was inundated with hate speech, misinformation, and posts glorifying violence, according to the New York Times.

“Following this test user’s news feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

In one of the internal memos, part of a discussion between Facebook staff and executives, employees questioned how the platform did not have “even basic key word detection set up to catch” hate speech.

They also raised questions about how the company planned to “earn back” the trust of its employees from minority communities, especially after a senior Indian Facebook executive had shared on her personal profile a post that many felt “denigrated” Muslims.

In a statement, Facebook reportedly said it had “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali”, and that as a result it had “reduced the amount of hate speech that people see by half” in 2021.

“… We are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.

The Independent has reached out to Facebook for comment.
