Instagram removed nearly 10,000 suicide and self-harm images a day after the Molly Russell scandal

Molly Russell's parents partly blamed Instagram for her 2017 death by suicide - PA

Facebook removed nearly 10,000 images related to suicide and self-harm from Instagram every day in the months following the Molly Russell scandal, but still relies on users to report one in five of them.

Statistics released by the social media giant showed that it took down just under 1.7m instances of such content between April and September this year, an average of 9,400 per day, with another 4.5m removed from Facebook itself during the same period.
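
For context, the daily figure follows directly from the totals. A minimal sketch of the arithmetic, assuming the April-to-September window is treated as roughly 180 days (the exact reporting dates are not given in the article):

    # Rough check of the daily-average figure quoted above.
    # Assumption: the April-September window spans roughly 180 days.
    instagram_removals = 1_700_000   # "just under 1.7m" instances on Instagram
    facebook_removals = 4_500_000    # a further 4.5m removed from Facebook itself
    days_in_period = 180

    print(round(instagram_removals / days_in_period))  # ~9,444, in line with the ~9,400 a day quoted
    print(round(facebook_removals / days_in_period))   # ~25,000 a day on Facebook itself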

As of September, however, only about 79pc was detected automatically using artificial intelligence (AI) – well below the 97pc rate achieved on Facebook, as well as the 95pc rate achieved on Instagram for child nudity and sexual abuse images.

The company attributed that gap to the differences between Instagram and Facebook, which has more easily scannable text content, and to the difficulty of distinguishing genuinely harmful content from frank accounts by people describing their own mental health struggles.

Nevertheless, Facebook said that the overall prevalence of such content remained low, with a maximum of four in every 10,000 "views" happening upon it.
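
That prevalence figure is an upper bound on how often users actually encounter such content. A small illustrative conversion, in which the total view count is purely hypothetical:

    # Facebook expresses prevalence as views of violating content per 10,000 views.
    # The upper bound is from the article; the total view count below is hypothetical.
    max_violating_per_10k = 4
    print(f"at most {max_violating_per_10k / 10_000:.2%} of views")   # at most 0.04% of views

    hypothetical_total_views = 1_000_000_000
    print(f"up to {hypothetical_total_views * max_violating_per_10k // 10_000:,} violating views")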

Guy Rosen, Facebook's vice president of integrity and safety, said: "For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services.

"When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.

"This area is both sensitive and complex, and we work with experts to ensure everyone’s safety is considered."

The statistics come after the parents of Molly Russell, a British schoolgirl who died by suicide in 2017, blamed Instagram in January for "helping" to kill her by allowing her to sink into online communities that they said encouraged self-harm.

After that, Adam Mosseri, the head of Instagram, promised to ban all graphic depictions of self-harm, to place a blurry "sensitivity screen" over less graphic content that might still be upsetting and to demote non-graphic content in search results.

Although Facebook has long published statistics on the performance of content moderators for its main service, this is the first time it has published such statistics on Instagram.

The amount of suicide and self-harm content on Instagram that was detected by AI before it was reported by users climbed marginally from 77.8pc in the three months ending in June to 79.1pc in the three months to September.
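
The "proactive" figures cited throughout are, as described here, the share of actioned content that Facebook's systems found before any user reported it. A brief sketch of that ratio using the Instagram self-harm numbers; applying the September-quarter rate to the six-month total is purely illustrative, and the absolute split is not a published figure:

    # Proactive rate = share of actioned content found by automated systems
    # before any user report. The split below is back-calculated for illustration.
    total_actioned = 1_700_000
    proactive_rate = 0.791                       # three months to September
    found_first_by_ai = total_actioned * proactive_rate
    reported_first_by_users = total_actioned - found_first_by_ai
    print(f"found proactively: ~{found_first_by_ai:,.0f}")          # ~1,344,700
    print(f"user-reported first: ~{reported_first_by_users:,.0f}")  # ~355,300, roughly one in five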

The social network also removed about 1.3m instances of content which broke its rules on child nudity and child sexual abuse, 92-95pc of which was detected first by AI, and 3.5m instances that broke its rules on drug and firearms sales, 77-91pc of which were detected first by AI.

Significantly, the company said that it is now far more stringent about detecting hate speech automatically using AI, with 80pc being detected proactively in the three months to September compared to just 24pc at the end of 2017.

It revealed that users continue to upload new versions of the Christchurch massacre video, tweaked in small ways to avoid Facebook's automated systems, meaning the company has now logged over 900 different versions. Moderators removed around 4.5m such pieces of content, with 97pc detected by AI.

Facebook has around 15,000 human content moderators, many of them outside contractors, who manually review content flagged by AI systems and by social media users. Content that the company considers obviously rule-breaking may be removed automatically, with users having the right to appeal to a human.
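
That paragraph implies a rough decision flow for a flagged post: automatic removal for clear-cut violations, human review otherwise, and a human-handled appeal either way. A minimal sketch of that flow, with the function name, labels and confidence threshold all hypothetical rather than Facebook's actual implementation:

    # Hypothetical sketch of the review flow described above; names and
    # thresholds are illustrative only.
    def handle_flagged_post(flagged_by: str, ai_confidence: float) -> str:
        """flagged_by is 'ai' or 'user'; ai_confidence is in [0, 1]."""
        if flagged_by == "ai" and ai_confidence > 0.99:
            return "removed automatically (user may appeal to a human)"
        # Everything else goes to one of the ~15,000 human moderators.
        return "queued for human review"

    print(handle_flagged_post("ai", 0.999))
    print(handle_flagged_post("user", 0.0))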

The most-appealed rules were those against bullying and hate speech, with around one in five decisions resulting in appeals. But it was adult nudity, drugs and terrorist propaganda that saw the highest rates of appeals upheld, at around 24-25pc in each case.
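
Those figures describe two separate ratios: how often a removal is challenged, and how often a challenge succeeds. A small illustrative calculation with hypothetical volumes; the two rates come from different policy areas, so they are not chained together:

    # Illustrative only: both batch sizes below are hypothetical.
    hypothetical_decisions = 10_000   # removal decisions in a policy area such as bullying
    appeal_rate = 0.20                # "around one in five decisions resulting in appeals"
    print(f"appeals expected: {hypothetical_decisions * appeal_rate:,.0f}")   # ~2,000

    hypothetical_appeals = 1_000      # appeals in a category such as adult nudity or drugs
    upheld_rate = 0.25                # ~24-25pc of those appeals succeeded
    print(f"removals reversed on appeal: {hypothetical_appeals * upheld_rate:,.0f}")  # ~250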

Instagram did not publish appeals statistics, but Mr Rosen promised that it would in future.