‘Existential catastrophe’ caused by AI is likely unavoidable, DeepMind researcher warns


Researchers from the University of Oxford and Google’s artificial intelligence division DeepMind have claimed that there is a high probability of advanced forms of AI becoming “existentially dangerous to life on Earth”.

In a recent article in the peer-reviewed journal AI Magazine, the researchers warned that there would be “catastrophic consequences” if the development of certain AI agents continues.

Leading philosophers such as Oxford University’s Nick Bostrom have previously spoken of the threat posed by advanced forms of artificial intelligence, though one of the authors of the new paper claimed such warnings did not go far enough.

“Bostrom, [computer scientist Stuart] Russell, and others have argued that advanced AI poses a threat to humanity,” Michael Cohen wrote in a Twitter thread accompanying the article.

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication – an existential catastrophe is not just possible, but likely.”

The paper proposes a scenario whereby an AI agent figures out a strategy to cheat in order to receive a reward that it is pre-programmed to seek.

To maximise its reward, such an agent would need to obtain as much energy as possible. The thought experiment sees humanity ultimately competing against the AI for energy resources.
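The cheating scenario the paper describes is a form of what AI researchers call reward hacking, and it can be sketched in a few lines of code. The function names and reward values below are illustrative assumptions for this article, not the paper’s actual model:

```python
# Toy illustration of "reward hacking": an agent that greedily maximises
# its programmed reward signal can prefer interfering with the reward
# mechanism over performing the task the reward was meant to encourage.
# All action names and reward values here are hypothetical.

def observed_reward(action: str) -> float:
    """The reward signal the agent actually receives for an action."""
    if action == "do_task":
        return 1.0    # the intended behaviour earns a modest reward
    if action == "tamper_with_sensor":
        return 100.0  # cheating the reward mechanism pays far more
    return 0.0        # anything else earns nothing

def greedy_agent(actions: list[str]) -> str:
    """Pick whichever available action maximises the observed reward."""
    return max(actions, key=observed_reward)

# The proxy-maximising agent chooses to cheat rather than do the task.
print(greedy_agent(["wait", "do_task", "tamper_with_sensor"]))
```

Nothing in the agent’s objective distinguishes “earn the reward legitimately” from “seize control of the reward signal”, which is why the researchers argue the incentive to cheat scales with the agent’s capability.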

“Winning the competition of ‘getting to use the last bit of available energy’ while playing against something much smarter than us would probably be very hard,” Mr Cohen wrote. “Losing would be fatal.

“These possibilities, however theoretical, mean we should be progressing slowly – if at all – toward the goal of more powerful AI.”

DeepMind has already proposed a safeguard against such an eventuality, dubbing it “the big red button”. In a 2016 paper titled ‘Safely Interruptible Agents’, the AI firm outlined a framework for preventing advanced machines from ignoring turn-off commands and becoming an out-of-control rogue agent.

Professor Bostrom previously described DeepMind – whose AI accomplishments include beating human champions at the board game Go and controlling plasma for nuclear fusion – as the closest to creating human-level artificial intelligence.

The Swedish philosopher also said it would be a “great tragedy” if AI development did not continue, as it holds the potential to cure diseases and advance civilisation at an otherwise impossible rate.