
Stock market crashes and rebellions – the catastrophes AI could cause | The Crypto Mile

Artificial intelligence (AI) tools like ChatGPT might not trigger human extinction, but they could cause international catastrophes, a leading scientist has warned.

The term p(doom) has been trending online, the 'p' standing for the probability of a human extinction-level event attributed to the rapid advancement of AI.

However, cognitive scientist Gary Marcus told Yahoo Finance UK that these apocalyptic forecasts are unlikely. Instead, he predicts AI tools could cause a series of catastrophes as a result of deepfakes and market manipulation.

Read more: ChatGPT and stock picking: Hedge fund manager shares AI trading strategy

Speaking on Yahoo Finance's The Crypto Mile, Marcus said, "p(doom) refers to the probability that AI will kill us all. But, I don't think these machines are literally going to extinguish the human species. AI could cause a catastrophic risk rather than an existential one."

In a recent Substack post, Marcus said the term p(catastrophe) is more appropriate. "P(catastrophe) is the chance of an incident that kills, say, one per cent or more of the population," he said.

Read more: Sovereign agents: Your own personal AI assistant? | The Crypto Mile

Marcus described AI-related catastrophes as being instigated by humans. "There are a lot of risky applications, and bad actors are already using this stuff. They are already making deepfakes and could try to manipulate the market. Such incidents might lead to an accidental war," he added.

The cognitive scientist has warned against the proliferation of AI-generated deepfakes. He said these AI-enhanced pieces of misinformation are already being used to trick voters and to defraud people by imitating other people's voices. Speaking on the Aventine podcast, Marcus described AI-generated misinformation as "a very real threat to our democracy, and the ways we’re combatting it now aren’t quite cutting it."

This is a danger that has also been recognised by Microsoft co-founder Bill Gates, who thinks deepfakes could disrupt political processes worldwide.

"Deepfakes and misinformation generated by AI could undermine elections and democracy," Gates said in a July post on his blog. "On a bigger scale, AI-generated deepfakes could be used to try to tilt an election. Of course, it doesn’t take sophisticated technology to sow doubt about the legitimate winner of an election, but AI will make it easier."

An AI-generated video posted online in April of this year hinted at the disruptive potential of deepfakes. The clip showed former US secretary of state Hillary Clinton endorsing Florida governor Ron DeSantis for president. According to Reuters, the video is considered an AI-generated deepfake, and there is no evidence Clinton ever made such an endorsement.

Is AI accelerating the risk of human extinction?

In contrast to Marcus' view, decision theorist Eliezer Yudkowsky has said the rapid development of AI technology could lead to an end-game scenario for the human species. Yudkowsky sees human extinction not as a worst-case scenario, but as the default outcome.

In an opinion piece for Time magazine, Yudkowsky warned AI could achieve powers of intelligence beyond human comprehension. The theorist sees artificial intelligence developing an indifferent approach to the plight of humanity. "We are made of atoms it can use for something else," he warned.

Read more: Spot bitcoin ETF approval unlikely this year, says analyst

Marcus, by contrast, reassured that humanity is a persistent species and that AI doomsayers' predictions are too extreme. Nonetheless, the cognitive scientist was a signatory to last year's letter calling for a moratorium on the development of further generative AI models.

"When some people called for a moratorium, I signed a letter that called for it. The only thing they were calling for a moratorium on was one thing: GPT-5. We know GPT-5 is going to be unreliable and problematic.

"Nobody said let's stop AI altogether. Those of us who signed the letter said, let's make AI more trustworthy and reliable so it doesn't cause problems; we didn't say don't make AI at all," he added.

Marcus also identified a fundamental flaw in the design of current AI systems. "The tools we are using right now are called black boxes, which means we don't understand what goes on inside them. That makes it hard to debug them, and we can't make guarantees around them," he said.

