Risk of extinction by AI should be global priority, say experts

Photograph: S Decoret/Shutterstock

A group of leading technology experts from across the world has warned that artificial intelligence should be treated as a societal-scale risk and given the same priority as pandemics and nuclear war.

The statement, signed by hundreds of executives and academics, was released by the Center for AI Safety on Tuesday amid growing concern over the regulation of the technology and the risks it poses to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said. Signatories included the chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.


Global leaders and industry experts – including OpenAI’s own leadership – have called for regulation of the technology, fearing it could significantly disrupt job markets, harm the health of millions and weaponise disinformation, discrimination and impersonation.

This month Geoffrey Hinton, the man often described as the godfather of AI and himself a signatory, quit Google, citing the technology’s “existential risk”. That risk was acknowledged by No 10 for the first time last week – a swift change of tack within government that came two months after it published an AI white paper that industry figures have warned is already out of date.

While the letter published on Tuesday is not the first of its kind, it is potentially the most impactful, given its broader range of signatories and its central existential concern, according to Michael Osborne, a professor in machine learning at the University of Oxford and co-founder of Mind Foundry.

“It really is remarkable that so many people signed up to this letter,” he said. “That does show that there is a growing realisation among those of us working in AI that existential risks are a real concern.”

AI’s potential to exacerbate existing existential risks, such as engineered pandemics and military arms races, was among the concerns that led Osborne to sign the public letter, alongside the novel existential threats posed by AI itself.

Calls to curb the threats follow the success of ChatGPT, which launched in November. The language model has already been adopted by millions of people and has advanced faster than even the best-informed in the industry predicted.

Osborne said: “Because we don’t understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we’ve designed that might play some devastating role in our survival as a species.”