Apple employees are reportedly raising concerns about the company’s on-device image scanning to curb child abuse

File: Apple says the feature will use the phone’s on-device machine learning to assess the content of children’s messages for photos that may be sexually explicit (AFP via Getty Images)

Apple employees are reportedly raising concerns internally about the tech giant’s plans to roll out a feature that would allow its devices to scan through people’s photos and messages to check for signs of child abuse.

Apple employees have flooded an internal Slack channel with more than 800 messages about the plan, which was announced a week ago, news agency Reuters reported.

Many Apple workers reportedly expressed worries in the Slack thread that repressive governments could exploit the feature to find material for censorship or arrests.

Apple announced a week ago that new features under the plan to be rolled out “later this year” would use the phone’s on-device machine learning to assess the content of children’s messages for photos that may be sexually explicit.


“When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources and reassured it is okay if they do not want to view this photo,” Apple noted in a blog post.

It said the features would come as updates across its platforms, including iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.

“We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM),” Apple noted.

The tech company said children would be warned before they send sexually explicit photos, and that parents could set up notifications for when their child sends a photo that triggers the new system.

In one of the features, Apple said it would use a database of known CSAM images provided by child safety organisations and apply on-device machine learning to look for matches in the photos stored on the device.

“Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices,” the company noted.
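The matching scheme described above can be sketched in miniature. Note this is an illustrative simplification, not Apple’s actual system: Apple’s NeuralHash is a perceptual hash designed to match visually similar images, and the real database is stored in a blinded, unreadable form; plain SHA-256 and an in-memory set are used here only to keep the sketch self-contained.

```python
import hashlib

# Hypothetical stand-in for the database of known-image hashes that Apple
# says is provided by NCMEC and other child safety organisations. In the
# real design this set is transformed into unreadable hashes on the device.
KNOWN_HASHES = {
    hashlib.sha256(b"example-flagged-image-bytes").hexdigest(),
}

def matches_known_database(image_bytes: bytes) -> bool:
    """Return True if this image's hash appears in the known-hash database.

    This models the key property of on-device matching: the check happens
    locally against precomputed hashes, rather than uploading photos for
    scanning in the cloud.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# Only an exact byte-for-byte copy matches under SHA-256; a perceptual hash
# like NeuralHash would also match resized or lightly edited copies.
print(matches_known_database(b"example-flagged-image-bytes"))
print(matches_known_database(b"ordinary-holiday-photo"))
```

The design choice the paragraph highlights is that the comparison runs on the device: unmatched photos are never revealed to the server, which is what distinguishes this approach from conventional cloud-side scanning.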

While Apple said the feature is designed so that the company does not get access to the messages, the plan has nonetheless raised concerns among privacy advocates, given the tech giant’s long-standing public commitment to securing the privacy of its users.

Core security employees were reportedly not among those raising complaints, and some reportedly said they thought the company’s approach was a reasonable response to the pressure to crack down on illegal content.