Facebook removes more than 12m pieces of Covid-19 misinformation in three months

Martyn Landi, PA Technology Correspondent

Facebook removed more than 12 million pieces of misinformation content related to Covid-19 between March and October this year, the social network’s latest figures reveal.

The firm’s new Community Standards Enforcement Report showed that the millions of posts were taken down because they included misleading claims, such as fake preventative measures and exaggerated cures, which could lead to imminent physical harm.

During the same time period, Facebook said it added warning labels to around 167 million pieces of Covid-19 related content, linking to articles from third-party fact-checkers which debunked the claims made.

And while Facebook said the pandemic continued to disrupt its content review workforce, it said some enforcement metrics were returning to levels seen before the coronavirus outbreak.

This was put down to improvements in the artificial intelligence used to detect potentially harmful posts and the expansion of detection technologies into more languages.

For the period between July and September, Facebook said it took action on 19.2 million pieces of violent and graphic content, up by more than four million on the previous quarter.

In addition, the site took action on 12.4 million pieces of content relating to child nudity and sexual exploitation, a rise of around three million on the previous reporting period.

Some 3.5 million pieces of bullying or harassment content were also removed during this time, up from 2.4 million.

On Instagram, more than four million pieces of violent and graphic content had action taken against them, as well as one million pieces of child nudity and sexual exploitation content and 2.6 million posts linked to bullying and harassment, an increase in each area.

The report added that Instagram had taken action on 1.3 million pieces of content linked to suicide and self-injury, up from 277,400 in the last quarter.

It also showed Facebook had carried out enforcement against 22.1 million posts which were judged to be hate speech, with 95% of those proactively identified by Facebook and its technologies.

Guy Rosen, vice president of integrity at the social network, said: “While the Covid-19 pandemic continues to disrupt our content review workforce, we are seeing some enforcement metrics return to pre-pandemic levels.

“Our proactive detection rates for violating content are up from Q2 across most policies, due to improvements in AI and expanding our detection technologies to more languages.

“Even with a reduced review capacity, we still prioritise the most sensitive content for people to review, which includes areas like suicide and self-injury and child nudity.”

Facebook and other social media firms have faced ongoing scrutiny over their monitoring and removal of both misinformation and harmful content, particularly this year during the pandemic and in the run-up to the US presidential election.

In the UK, online safety groups, campaigners and politicians are urging the Government to bring forward the introduction of its Online Harms Bill to Parliament, currently delayed until next year.

The Bill proposes stricter regulation for social media platforms with harsh financial penalties and potentially even criminal liability for executives if sites fail to protect users from harmful content.

Facebook has previously said it would welcome more regulation within the sector.

Mr Rosen said Facebook would “continue improving our technology and enforcement efforts to remove harmful content from our platform and keep people safe while using our apps”.

Andy Burrows, head of child safety online policy at the NSPCC, said Facebook still had not done enough to protect young people in particular.

“Facebook’s takedown performance may be returning to pre-pandemic levels but young people continue to be exposed to unacceptable levels of harm due to years of refusal to design their sites with the safety of children in mind,” he said.

“The damage incurred from the steep reduction in taking down harmful content during the pandemic, particularly suicide and self-harm posts on Instagram, will undoubtedly have lasting impacts on vulnerable young people who were recommended this content by its algorithms.

“The Government has a chance to fix this by delivering a comprehensive Online Harms Bill that gives a regulator the powers it needs to hold big tech companies to account.”