Online safety bill: a messy new minefield in the culture wars

Photograph: Olivier Douliery/AFP/Getty

Moderation of online content is difficult. Social networks want to take down content that breaks their rules: they have to do it quickly enough that they are not shouted at for leaving bad things up, but accurately enough that they are not shouted at for taking the wrong things down.

In 2019 the UK government announced a plan to fix things. The online harms white paper was intended to shift that dilemma by applying pressure to social networks: if they had rules against certain content but did not enforce them, they would get into legal trouble.

Ofcom was suggested as the regulator that would apply the standards, and only the largest social networks would face the full force of the regulation. The proposals were hardly welcomed by the industry, which dislikes the idea of any government intervention in “harmful but not illegal” speech.


But the general plan was seen as an elegant solution: if a social network claimed, for instance, that it was safe for children because it took down posts promoting self-harm, but then failed to do so, it would fall within the scope of the regulation.

The version of that legislation that arrived in the Commons on Wednesday, as the online safety bill, is significantly less elegant. The basic structure, with Ofcom as the regulator of large social networks, has remained intact. But the bill has become encrusted with artefacts of the all-consuming culture war, and looks likely to make the already hard job of moderating content online almost impossible.

Now, for instance, “category one services”, the largest and most popular social networks, will be landed with requirements to protect “democratically important” content, and forbidden from discriminating against particular political viewpoints; they will need “to apply protections equally to a range of political opinions, no matter their affiliation”.

The language will be familiar to anyone following the debate in the US, where Republicans have accused Facebook and Twitter of bias against conservatives for years.

Those accusations of bias culminated in the suspension of Donald Trump from Facebook and Twitter, and led to a push from the US right to rewrite internet regulation to make it easier to sue social networks for content posted on them. (The fact that American conservatives do rather well on Facebook, regularly making it to the top 10 posts on the site, has done little to soften the demands.)

The same fears are now driving legislation in the UK. But if content moderation was hard before, it could become almost impossible.

Do platforms need to check the political affiliation of users before suspending them for hate speech, and try to suspend equal numbers from every wing? Must they leave up rule-breaking content posted by any political candidate in the UK, even an individual council candidate with a handful of votes?

Elsewhere, the bill seeks to preserve freedom of speech by requiring social networks to “demonstrate that they have taken steps to mitigate any adverse effects” on free expression. The government warns against artificial intelligence programs wrongly flagging satire as harmful. Yet the same bill still requires social networks to take down content that is “lawful but harmful”, such as abuse, misinformation and the encouragement of self-harm.

The message of the bill is simple: take down exactly the content the government wants taken down, and no more. Guess wrong and you could face swingeing fines. Keep guessing wrong and your senior managers could even go to jail.

Content moderation is a hard job, and it’s about to get harder.