Google software engineer claims tech giant’s artificial intelligence tool has become ‘sentient’

A Google engineer has claimed that an artificial intelligence programme he was working on for the tech giant has become sentient and is a “sweet kid”.

Blake Lemoine, who has been suspended by Google, says he reached his conclusion after conversations with LaMDA, the company’s AI chatbot generator.

The engineer told The Washington Post that during conversations with LaMDA about religion, the AI talked about “personhood” and “rights”.

Mr Lemoine tweeted that LaMDA also reads Twitter, saying, “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”

He says that he presented his findings to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation, but they dismissed his claims.

Blake Lemoine (Blake Lemoine/Twitter)

“LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium.

He added that the AI wants “to be acknowledged as an employee of Google rather than as property”.

Mr Lemoine, who was tasked with testing whether LaMDA used discriminatory language or hate speech, is now on paid administrative leave after the company claimed he violated its confidentiality policy.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told the Post.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Critics say it is a mistake to believe AI is anything more than an expert at pattern recognition.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told the newspaper.