Ofcom to push for better age verification, filters and 40 other checks in new online child safety code

Image Credits: Alys Tomlinson / Getty Images

Ofcom is cracking down on Instagram, YouTube and 150,000 other web services to improve child safety online. A new Children’s Safety Code from the U.K. internet regulator will push tech firms to run better age checks, filter and downrank content, and apply around 40 other measures to assess and mitigate harmful content on subjects such as suicide, self-harm and pornography, reducing under-18s’ access to it. Currently in draft form and open for feedback until July 17, the Code is expected to be enforced from next year, after Ofcom publishes the final version in the spring. Firms will then have three months to complete their inaugural child safety risk assessments.

The Code is significant because it could force a step-change in how Internet companies approach online safety. The government has repeatedly said it wants the U.K. to be the safest place to go online in the world. Whether it will be any more successful at preventing digital slurry from pouring into kids’ eyeballs than it has actual sewage from polluting the country’s waterways remains to be seen. Critics of the approach suggest the law will burden tech firms with crippling compliance costs and make it harder for citizens to access certain types of information.

Meanwhile, failure to comply with the Online Safety Act can have serious consequences for UK-based web services large and small, with fines of up to 10% of global annual turnover for violations, and even criminal liability for senior managers in certain scenarios.

The guidance puts a big focus on stronger age verification. Following on from last year’s draft guidance on age assurance for porn sites, age verification and estimation technologies deemed “accurate, robust, reliable and fair” will be applied to a wider range of services as part of the plan. Photo-ID matching, facial age estimation and reusable digital identity services are in; self-declaration of age and contractual restrictions on the use of services by children are out.


That suggests Brits may need to get accustomed to proving their age before they access a range of online content — though how exactly platforms and services will respond to their legal duty to protect children will be for private companies to decide: that’s the nature of the guidance here.

The draft proposal also sets out specific rules on how content is handled. Suicide, self-harm and pornography content — deemed the most harmful — will have to be actively filtered (i.e. removed) so minors do not see it. Ofcom wants other types of content such as violence to be downranked and made far less visible in children’s feeds. Ofcom also said it may expect services to act on potentially harmful content (e.g. depression content). The regulator told TechCrunch it will encourage firms to pay particular attention to the “volume and intensity” of what kids are exposed to as they design safety interventions. All of this demands services be able to identify child users — again pushing robust age checks to the fore.

Ofcom previously named child safety as its first priority in enforcing the UK’s Online Safety Act — a sweeping content moderation and governance rulebook that touches on harms as diverse as online fraud and scam ads; cyberflashing and deepfake revenge porn; animal cruelty; and cyberbullying and trolling, as well as regulating how services tackle illegal content like terrorism and child sexual abuse material (CSAM).

The Online Safety Bill passed last fall, and now the regulator is busy with the process of implementation, which includes designing and consulting on detailed guidance ahead of its enforcement powers kicking in once parliament approves Codes of Practice it’s cooking up.

With Ofcom estimating around 150,000 web services in scope of the Online Safety Act, scores of tech firms will, at the least, have to assess whether children are accessing their services and, if so, take steps to identify and mitigate a range of safety risks. The regulator said it’s already working with some larger social media platforms where safety risks are likely to be greatest, such as Facebook and Instagram, to help them design their compliance plans.

Consultation on the Children’s Safety Code

In all, Ofcom’s draft Children’s Safety Code contains more than 40 “practical steps” the regulator wants web services to take to ensure child protection is enshrined in their operations. A wide range of apps and services are likely to fall in-scope — including popular social media sites, games and search engines.

“Services must prevent children from encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography. Services must also minimise children’s exposure to other serious harms, including violent, hateful or abusive material, bullying content, and content promoting dangerous challenges,” Ofcom wrote in a summary of the consultation.

“In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it,” it added in a press release Monday. “In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”

Ofcom’s current proposal suggests that almost all services will have to take mitigation measures to protect children. Only those deploying age verification or age estimation technology that is “highly effective” and used to prevent children from accessing the service (or the parts of it where content poses risks to kids) will not be subject to the children’s safety duties.

Those who find — on the contrary — that children can access their service will need to carry out a follow-on assessment known as the “child user condition”. This requires them to assess whether “a significant number” of kids are using the service and/or are likely to be attracted to it. Those that are likely to be accessed by children must then take steps to protect minors from harm, including conducting a Children’s Risk Assessment and implementing safety measures (such as age assurance, governance measures, safer design choices and so on) — as well as applying an ongoing review of their approach to ensure they keep up with changing risks and patterns of use.

Ofcom does not define what “a significant number” means in this context — but says that “even a relatively small number of children could be significant in terms of the risk of harm. We suggest service providers should err on the side of caution in making their assessment.” In other words, tech firms may not be able to eschew child safety measures by arguing there aren’t many minors using their stuff.

Nor is there a simple one-shot fix for services that fall in scope of the child safety duty. Multiple measures are likely to be needed, combined with ongoing assessment of efficacy.

“There is no single fix-all measure that services can take to protect children online. Safety measures need to work together to help create an overall safer experience for children,” Ofcom wrote in an overview of the consultation, adding: “We have proposed a set of safety measures within our draft Children’s Safety Codes, that will work together to achieve safer experiences for children online.”

Recommender systems, reconfigured

Under the draft Code, any service that operates a recommender system — a form of algorithmic content sorting based on tracking user activity — and is at “higher risk” of showing harmful content must use “highly effective” age assurance to identify which of its users are children. It must then configure its recommender algorithms to filter the most harmful content (i.e. suicide, self-harm, porn) out of the feeds of users identified as children, and reduce the “visibility and prominence” of other harmful content.

Under the Online Safety Act, suicide, self-harm, eating disorders and pornography are classed as “primary priority content”. Harmful challenges and substances; abuse and harassment targeted at people with protected characteristics; real or realistic violence against people or animals; and instructions for acts of serious violence are all classed as “priority content”. Web services may also identify other content risks they feel they need to act on as part of their risk assessments.
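The tiered treatment described above — remove “primary priority content” from identified children’s feeds entirely, demote “priority content” — can be pictured as a simple post-ranking step. This is a hypothetical illustration only, not anything from Ofcom’s draft Code: the category labels, scores and demotion factor are all assumptions.

```python
# Illustrative sketch (not Ofcom's specification) of a tiered feed filter
# for users a service has identified as children. Labels and weights are
# invented for the example.

PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorder", "pornography"}
PRIORITY = {"violence", "harmful_challenge", "abuse", "harassment"}

def rerank_for_child(feed, demotion_factor=0.5):
    """feed: list of (item_id, score, labels) tuples, highest score first."""
    result = []
    for item_id, score, labels in feed:
        labels = set(labels)
        if labels & PRIMARY_PRIORITY:
            continue  # filtered out of the feed entirely
        if labels & PRIORITY:
            score *= demotion_factor  # reduced "visibility and prominence"
        result.append((item_id, score))
    return sorted(result, key=lambda item: item[1], reverse=True)

feed = [
    ("a", 0.9, ["sports"]),
    ("b", 0.8, ["self_harm"]),   # primary priority: removed
    ("c", 0.7, ["violence"]),    # priority: demoted
    ("d", 0.5, ["news"]),
]
print(rerank_for_child(feed))
# → [('a', 0.9), ('d', 0.5), ('c', 0.35)]
```

In practice the draft Code expects this kind of logic to sit alongside the “highly effective” age assurance that decides which accounts get the child-safe feed in the first place.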

In the proposed guidance, Ofcom wants children to be able to give negative feedback directly to the recommender feed, so it can better learn what content they don’t want to see.

Content moderation is another big focus of the draft Code, with the regulator highlighting research showing that content harmful to children is available on many services at scale — which, it said, suggests services’ current moderation efforts are insufficient.

Its proposal recommends all “user-to-user” services (i.e. those allowing users to connect with each other, such as via chat functions or through exposure to content uploads) must have content moderation systems and processes that ensure “swift action” is taken against content harmful to children. Ofcom’s proposal does not contain any expectations that automated tools are used to detect and review content. But the regulator writes that it’s aware large platforms often use AI for content moderation at scale and says it’s “exploring” how to incorporate measures on automated tools into its Codes in the future.

“Search engines are expected to take similar action,” Ofcom also suggested. “And where a user is believed to be a child, large search services must implement a ‘safe search’ setting which cannot be turned off and must filter out the most harmful content.”

“Other broader measures require clear policies from services on what kind of content is allowed, how content is prioritised for review, and for content moderation teams to be well-resourced and trained,” it added.

The draft Code also includes measures it hopes will ensure “strong governance and accountability” around children’s safety inside tech firms. “These include having a named person accountable for compliance with the children’s safety duties; an annual senior-body review of all risk management activities relating to children’s safety; and an employee Code of Conduct that sets standards for employees around protecting children,” Ofcom wrote.

Facebook- and Instagram-owner Meta was frequently singled out by ministers during the drafting of the law for having a lax attitude to child protection. The largest platforms are likely to pose the greatest safety risks — and therefore face “the most extensive expectations” when it comes to compliance — but there’s no free pass based on size.

“Services cannot decline to take steps to protect children merely because it is too expensive or inconvenient — protecting children is a priority and all services, even the smallest, will have to take action as a result of our proposals,” it warned.

Other proposed safety measures Ofcom highlights include suggesting services provide more choice and support for children and the adults who care for them — such as by having “clear and accessible” terms of service; and making sure children can easily report content or make complaints.

The draft guidance also suggests children are provided with support tools that enable them to have more control over their interactions online — such as an option to decline group invites; block and mute user accounts; or disable comments on their own posts.

The UK’s data protection authority, the Information Commissioner’s Office, has expected compliance with its own age-appropriate design code for children since September 2021, so there may be some overlap. Ofcom notes, for instance, that service providers may already have assessed children’s access for data protection compliance purposes — adding that they “may be able to draw on the same evidence and analysis for both.”

Flipping the child safety script?

The regulator is urging tech firms to be proactive about safety issues, saying it won’t hesitate to use its full range of enforcement powers once they’re in place. The underlying message to tech firms is get your house in order sooner rather than later or risk costly consequences.

“We are clear that companies who fall short of their legal duties can expect to face enforcement action, including sizeable fines,” it warned in a press release.

The government is rowing hard behind Ofcom’s call for a proactive response, too. Commenting in a statement today, the technology secretary Michelle Donelan said: “To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines — step up to meet your responsibilities and act now.”

“The government assigned Ofcom to deliver the Act and today the regulator has been clear: platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online,” she added. “Once in place these measures will bring in a fundamental change in how children in the UK experience the online world.

“I want to assure parents that protecting children is our number one priority and these laws will help keep their families safe.”

Ofcom said it wants its enforcement of the Online Safety Act to deliver what it couches as a “reset” for children’s safety online — saying it believes the approach it’s designing, with input from multiple stakeholders (including thousands of children and young people), will make a “significant difference” to kids’ online experiences.

Fleshing out its expectations, it said it wants the rulebook to flip the script on online safety so children will “not normally” be able to access porn and will be protected from “seeing, and being recommended, potentially harmful content”.

Beyond identity verification and content management, it also wants the law to ensure kids won’t be added to group chats without their consent; and wants it to make it easier for children to complain when they see harmful content, and be “more confident” that their complaints will be acted on.

As it stands, the opposite looks closer to what UK kids currently experience online, with Ofcom citing research over a four-week period in which a majority (62%) of children aged 13-17 reported encountering online harm and many saying they consider it an “unavoidable” part of their lives online.

Exposure to violent content begins in primary school, Ofcom found, with children who encounter content promoting suicide or self-harm characterizing it as “prolific” on social media; and frequent exposure contributing to a “collective normalisation and desensitisation”, as it put it. So there’s a huge job ahead for the regulator to reshape the online landscape kids encounter.

As well as the Children’s Safety Code, its guidance for services includes a draft Children’s Register of Risk, which it said sets out more information on how risks of harm to children manifest online; and draft Harms Guidance, which sets out examples of the kind of content it considers to be harmful to children. Final versions of all its guidance will follow the consultation process, which Ofcom is legally required to run. It also told TechCrunch that it will be providing more information and launching some digital tools to further support services’ compliance ahead of enforcement kicking in.

“Children’s voices have been at the heart of our approach in designing the Codes,” Ofcom added. “Over the last 12 months, we’ve heard from over 15,000 youngsters about their lives online and spoken with over 7,000 parents, as well as professionals who work with children.

“As part of our consultation process, we are holding a series of focused discussions with children from across the UK, to explore their views on our proposals in a safe environment. We also want to hear from other groups including parents and carers, the tech industry and civil society organisations — such as charities and expert professionals involved in protecting and promoting children’s interests.”

The regulator recently announced plans to launch an additional consultation later this year which it said will look at how automated tools, aka AI technologies, could be deployed in content moderation processes to proactively detect illegal content and content most harmful to children — such as previously undetected CSAM and content encouraging suicide and self-harm.

However, there is no clear evidence today that AI will be able to improve detection efficacy of such content without causing large volumes of (harmful) false positives. It thus remains to be seen whether Ofcom will push for greater use of such tech tools given the risks that leaning on automation in this context could backfire.

In recent years, a multi-year push by the Home Office geared towards fostering the development of so-called “safety tech” AI tools — specifically to scan end-to-end encrypted messages for CSAM — culminated in a damning independent assessment which warned such technologies aren’t fit for purpose and pose an existential threat to people’s privacy and the confidentiality of communications.

One question parents might have is what happens on a kid’s 18th birthday, when the Code no longer applies? If all these protections wrapping kids’ online experiences end overnight, there could be a risk of (still) young people being overwhelmed by sudden exposure to harmful content they’ve been shielded from until then. That sort of shocking content transition could itself create a new online coming-of-age risk for teens.

Ofcom told us future proposals for larger platforms could be introduced to mitigate this sort of risk.

“Children are accepting this harmful content as a normal part of the online experience — by protecting them from this content while they are children, we are also changing their expectations for what’s an appropriate experience online,” an Ofcom spokeswoman responded when we asked about this. “No user, regardless of their age, should accept to have their feed flooded with harmful content. Our phase 3 consultation will include further proposals on how the largest and riskiest services can empower all users to take more control of the content they see online. We plan to launch that consultation early next year.”