Facebook claims it is ‘incredibly proactive’ in taking down harmful content despite flurry of scandals

Facebook has claimed that it is “incredibly proactive” in taking down harmful content, giving evidence to Parliament today amid a flurry of negative stories prompted by whistleblower disclosures.

Antigone Davis, the company’s global head of safety, said that Facebook was not only “responsive” in taking down posts, but that it actively sought out problem content.

Ms Davis gave the answer when asked about Apple’s threat to remove Facebook and Instagram from iPhones after it found that human trafficking was being organised on the apps. Facebook has also struggled to identify and remove material from trafficking cartels based in Mexico, including violent images and recruitment posts.

“Most of the things that are brought to our attention are managed within 48 hours”, Ms Davis claimed. “Our AI is not perfect, it’s something we’re always looking to improve.”

The committee is gathering evidence to inform online safety legislation that would force social media companies to regulate “legal but harmful” content.

Ms Davis also said that Facebook has “no business incentive, no commercial incentive to actually provide people with a negative experience”, adding: “three million businesses in the UK use our platform to grow their business. If they aren’t safe, if they don’t feel safe, they aren’t going to use our platform”.

This statement stands in contrast to leaked audio of Mark Zuckerberg responding to an advertiser boycott over the volume of racist content on Facebook, in which he said he expected advertisers back on the platform “soon enough” and that he would not “change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue”.

With regard to Facebook’s algorithm and the insurrection attempt on January 6, Ms Davis claimed that the company put in “serious measures to address those issues well before January 6”.

Those measures have, however, repeatedly been criticised for failing to fully deal with the extremist content available on the platform.

Reporting has, for instance, suggested that Facebook was alerted to a ‘Stop the Steal’ group on 3 November, the day of the US election, when it was “flagged for escalation because it contained high levels of hate and violence and incitement (VNI) in the comments”. Two days later, the group had grown to over 300,000 members.

Mr Zuckerberg, Facebook’s chief executive, later told Congress that the company “made our services inhospitable to those who might do harm”.

When asked whether Facebook changed the algorithm following the January 6 event, Ms Davis did not provide a clear answer.

Ms Davis was also asked why, when Facebook can identify harmful content, its algorithms continue to promote it. She answered that the company “tr[ies] to remove content that is divisive, for example, or polarising”.

In May 2020, it was reported that Facebook executives had decided to end research into making the site less polarising, for fear that changes would unfairly target right-wing users. Proposals to make the site less polarising were described internally as “antigrowth” and requiring “a moral stance”.

“Our algorithms exploit the human brain’s attraction to divisiveness,” a 2018 presentation warned.

Ms Davis also told MPs that Facebook was “committed to providing more transparency [and that it had] taken steps to do that”.