Google’s Bard chatbot repeats mistake that wiped $120bn off share price

Google headquarters

Google’s artificial intelligence chatbot is still making the same error that contributed to a $120bn wipeout for the tech giant’s share price a month ago.

Bard, which was opened to the public in the US and UK on Tuesday, still incorrectly claims that the James Webb Space Telescope took “the very first pictures of a planet outside of our own solar system”.

The first picture ever captured of a planet outside the solar system – an exoplanet – was in fact taken by the Very Large Telescope in Chile in 2004.

Bard gave the same wrong answer when Google first demonstrated it in February.

The error contributed to a $120bn sell-off in the internet search giant’s shares, amid doubts over the technology.


At the time, Google insisted it planned to test the bot to "make sure Bard's responses meet a high bar for quality, safety and groundedness in real-world information".

However, when The Telegraph put the same prompt to it on Wednesday, Bard still produced the same false information.

Google has admitted that the chatbot, which was released for a public trial on Tuesday, will make errors when users ask it factual questions.

In a blog post announcing the open testing, Google admitted its algorithms "can provide inaccurate, misleading or false information while presenting it confidently".

A Google spokesman pointed The Telegraph to a paper published by a Google research executive on the limits of the technology behind Bard.

The paper said the models used by Bard can "generate plausible-sounding responses that include factual errors – not ideal when factuality matters but potentially useful for generating creative or unexpected outputs".

Google has labelled Bard an "experiment", rather than a product ready for general use.

The chatbot is designed to offer conversational responses to questions from users, digesting information gathered through its search engine and billions of lines of text.

The AI technology is based on a large language model, which is designed to provide plausible-sounding responses to questions. However, it has little ability to separate fact from fiction and will often repeat false information it scrapes from the web.

This can lead AI bots to "hallucinate", inserting realistic-sounding but inaccurate text into their answers.