AI singularity is a lot closer than we thought, ChatGPT rivals warn

An artist stands in front of an artwork at an exhibition in San Francisco on 9 March, 2023, aimed at helping visitors think about the potential dangers of artificial intelligence (Getty Images)

The arrival of human-level artificial intelligence may be a lot closer than previously thought, according to leading AI researchers.

The point at which artificial general intelligence (AGI) exceeds human intelligence, referred to as the AI singularity, has been a subject of debate among AI researchers and futurologists for many years, though most forecasts place the hypothetical date decades away.

In a far-reaching blog post about artificial intelligence safety, AI research firm Anthropic detailed how the “very rapid progress” of artificial intelligence would likely continue rather than stall or plateau, meaning AI could overtake humans within years.


“People tend to be bad at recognising and acknowledging exponential growth in its early phases,” the 6,500-word blog post stated.

“Although we are seeing rapid progress in AI, there is a tendency to assume that this localised progress must be the exception rather than the rule, and that things will likely return to normal soon.

“If we are correct, however, the current feeling of rapid AI progress may not end before AI systems have a broad range of capabilities that exceed our own capacities. Furthermore, feedback loops from the use of advanced AI in AI research could make this transition especially swift.”

The outcome of such advances, according to Anthropic, would be that “most or all knowledge work may be automatable in the not-too-distant future”. If correct, this would also have major implications for the rate of progress of other technologies, and therefore society more generally.

The blog post builds on previous comments by Anthropic co-founder Jack Clark, who said last month that he believed AI has started to display “compounding exponential” properties.

Similar comments have been made by other prominent AI researchers, with DeepMind’s Nando de Freitas claiming last year that “the game is over” in the decades-long quest to realise AGI.

The creator of ChatGPT has also said that new artificial intelligence tools will soon “make ChatGPT look like a boring toy”, leading to problems that it may not be possible to anticipate.

Sam Altman, chief executive and co-founder of OpenAI, claimed that ChatGPT is “incredibly limited” and creates a “misleading impression of greatness”, but said that future versions of the technology will be radically improved.

“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he said in December.

The successor to ChatGPT, called GPT-4, is expected to be released in the coming weeks.