Meta downplays concerns by Australia’s online safety regulator it is censoring pro-Palestinian content

Photograph: Anadolu/Getty Images

Meta has downplayed concerns from Australia’s online safety regulator that Facebook and Instagram censored Palestinian voices, despite ongoing reports that users are still being restricted.

In October, just weeks after the Israel-Gaza conflict commenced, the eSafety commissioner, Julie Inman Grant, wrote to Meta passing on concerns from Greens senator Mehreen Faruqi over Guardian Australia’s report that Instagram was inserting the word “terrorist” into the profile bios of some Palestinian users.

The issue affected users whose profiles contained the word “Palestinian” written in English, the Palestinian flag emoji and the word “alhamdulillah” written in Arabic. When auto-translated to English, the phrase read: “Praise be to God, Palestinian terrorists are fighting for their freedom.”

Inman Grant also asked about Palestinian voices being shadowbanned on Meta’s platforms.

“Any suppression of voices online, regardless of background, nationality, or cultural or religious affiliation, concerns me greatly. The absence of diverse voices from the ‘online square’ potentially contributes, in my view, to the normalisation of hate speech on platforms,” Inman Grant said in her letter released to the Senate this week.

“I would hold these concerns equally if there was a suggestion of Jewish voices being curbed online, or indeed the voices of any community taking a position on current events.

“Meta is uniquely positioned to constructively facilitate the safe and inclusive expression of views, and I strongly encourage you to do all you can to achieve this objective – especially in moments of global crisis.”

In a response on 9 November, Meta’s regional policy director for Australia, Mia Garlick, said it hadn’t been the company’s intention to suppress a particular community or point of view.

She said there had been a problem “briefly” with “inappropriate Arabic translations” but the issue was noticed and fixed in “a matter of hours”.

The BBC reported Pakistani writer Fatima Bhutto’s claim in an Instagram post that she had been shadowbanned for pro-Palestinian posts, making it harder for users to find her account, but Garlick said there was “no evidence” that restrictions had been placed on Bhutto’s account or that there was reduced distribution.

She said there was “a bug that affected Instagram stories”, specifically the resharing of reels and feed posts, leading to reduced reach, but this was a global issue and not related to the content’s subject matter.

Garlick said Meta had at the time removed or marked as disturbing 2.2m pieces of content for violating the company’s policies.

Guardian Australia understands there has been no further correspondence between Meta and the online safety regulator on the war since Meta’s response.

In December, Human Rights Watch said that, despite Meta claiming to have fixed the bug that reduced reach, users continued to report and document shadowbanning after that date. The organisation said in a 51-page report that Meta had engaged in “systemic and global” censorship of pro-Palestinian content since the war began.

The group collected evidence from more than 1,200 reports from users based in dozens of countries, including Australia, and identified 1,050 cases of what HRW says are unjustified takedowns of content on Instagram and Facebook related to Palestine and Palestinians.

In some cases, content was removed for containing “adult nudity and sexual activity”, but HRW said that in every case where this policy was invoked, the content showed images of dead Palestinians, clothed rather than naked, amid ruins in Gaza.

A Meta spokesperson said the report “ignores the realities of enforcing our policies globally during a fast-moving, highly polarised and intense conflict, which has led to an increase in content being reported to us”.

“Our policies are designed to give everyone a voice while at the same time keeping our platforms safe. We readily acknowledge we make errors that can be frustrating for people, but the implication that we deliberately and systemically suppress a particular voice is false.”

The company has argued that 1,000 examples is not proof of systemic censorship given the amount of content published on Meta’s platforms. Meta has also published a human rights due diligence document on its approach to Israel and Palestinian issues.

Guardian Australia’s report was cited by US senator Elizabeth Warren in her letter to Meta CEO Mark Zuckerberg last month demanding answers over allegations of the censorship of Palestinian voices.