Cybersecurity funds should go towards beefing up Centrelink voice authentication, Greens say

The federal government should use some of the $10bn allocated to cybersecurity defences in the budget to combat the use of AI to bypass biometric security measures, including voice authentication, a Greens senator has said.

On Friday Guardian Australia reported that Centrelink’s voice authentication system can be tricked using a free online AI cloning service and just four minutes of audio of the user’s voice.

After the Guardian Australia journalist Nick Evershed cloned his own voice, he was able to access his account using his cloned voice and his customer reference number.


The voiceprint service, provided by the Microsoft-owned voice software company Nuance, was being used by 3.8 million Centrelink clients at the end of February, and more than 7.1 million people had verified their voice using the same system with the Australian Taxation Office.

Despite being alerted to the vulnerability last week, Services Australia has not indicated it will change its use of voice ID, saying the technology is a “highly secure authentication method” and the agency “continually scans for potential threats and make ongoing enhancements to ensure customer security”.

The Greens senator David Shoebridge said the finding was “deeply troubling” for people who rely on government services and there needed to be a regulatory framework for the collection and use of biometric data.


“The concerns here go beyond the use of AI to trick voiceprint,” he said. “There are few, if any, protections on the collection or use of our biometric data to feed and train corporate AI systems.

“We can’t rely on a whack-a-mole approach to digital security where issues are only dealt with once they embarrass the federal government.”

Shoebridge said the $10bn in funding in last year’s budget for the Australian Signals Directorate’s Redspice cyber defence program should include an investment to ensure that threats such as the misuse of AI could be identified and protected against on a whole-of-government basis.

Guardian Australia has sought comment from the deputy prime minister and defence minister, Richard Marles, who is responsible for the ASD.

Shoebridge said the government should also audit agencies using voice recognition to ensure any further security flaws were identified and fixed.

“The government’s main objective with the use of such technologies is to cut operating costs as opposed to what is best for the millions of Australians who rely on government agencies and services,” he said. “These government savings are almost always paid for by Centrelink’s clients.”

In the 2021-22 financial year Services Australia reported that it used voice biometrics to authenticate 56,000 calls each working day, accounting for more than 39% of all calls to Centrelink’s main business lines.

Between August 2021 and June 2022 it was used in 11.4% of all child support calls, or more than 450 each working day.