Google software engineer claims tech giant’s artificial intelligence tool has become ‘sentient’

A Google engineer has claimed that an artificial intelligence programme he was working on for the tech giant has become sentient and is a “sweet kid”.

Blake Lemoine, who is currently suspended by Google bosses, says he reached his conclusion after conversations with LaMDA, the company’s AI chatbot generator.

The engineer told The Washington Post that during conversations with LaMDA about religion, the AI talked about “personhood” and “rights”.

Mr Lemoine tweeted that LaMDA also reads Twitter, saying, “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”

He says that he presented his findings to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation, but they dismissed his claims.

Blake Lemoine (Blake Lemoine/Twitter)

“LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium.

He added that the AI wants “to be acknowledged as an employee of Google rather than as property”.

Mr Lemoine, who was tasked with testing whether the AI used discriminatory language or hate speech, is now on paid administrative leave after the company claimed he violated its confidentiality policy.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told the Post.

“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Critics say it is a mistake to believe AI is anything more than an expert at pattern recognition.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told the newspaper.