Inaccurate images generated by AI chatbot were ‘unacceptable’, says Google boss

The historically inaccurate images generated by Google’s Gemini AI chatbot were “unacceptable”, chief executive Sundar Pichai has said in a memo to staff.

Last week, users of Gemini began flagging that the chatbot was generating images showing a range of ethnicities and genders, even when doing so was historically inaccurate – for example, prompts to generate images of certain historical figures, such as the US founding fathers, returned images depicting women and people of colour.

Some critics accused Google of anti-white bias, while others suggested the company had over-corrected in response to longstanding racial bias problems in AI technology, which had previously seen facial recognition software struggle to recognise, or mislabel, black faces, and voice recognition services fail to understand accented English.

Following the Gemini image generation incident, Google apologised, paused the image tool and said it was working to fix it.


But issues were then also flagged with some of the chatbot’s text responses, including an incident in which Gemini said there was “no right or wrong answer” to a question equating Elon Musk’s influence on society with Adolf Hitler’s.

Now Mr Pichai has addressed the issue with staff for the first time and promised changes.

In his memo, Mr Pichai said the image and text responses were “problematic” and that Google had been working “around the clock” to address the issue.

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he said.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

He said Google had “always sought to give users helpful, accurate and unbiased information” in its products and this was why “people trust them”.

“This has to be our approach for all our products, including our emerging AI products”, he added.

Going forward, Mr Pichai said “necessary changes” would be made inside the company to prevent similar issues occurring again.

“We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals (sic) and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes,” he said.