Google has fired a software engineer who claimed its artificial intelligence had become self-aware and sentient.
Blake Lemoine was placed on leave at the company last month, after he said publicly that he believed Google’s LaMDA chatbot was a person.
Now Google has said he has been permanently dismissed from the company, claiming he violated its policies. It also said that his claims about the chatbot were “wholly unfounded”.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.
Mr Lemoine had been insistent that the artificial intelligence system had gained personhood and was self-aware. He published a number of articles on the topic, including logs of his conversations with the chatbot.
He said that he had asked Google to give the chatbot a number of rights, and for it to be treated as a proper employee of the company. Mr Lemoine said his requests were being made on behalf of the chatbot.
AI experts had been largely sceptical of Mr Lemoine’s claims, denying that any of the public evidence suggested the system was self-aware or should be treated as a person. Experts suggested that the system was instead just a very convincing chatbot, trained on text from the internet to use language in ways similar to humans.
Google also denied the claims, and insisted that Mr Lemoine’s sharing of the conversations and other data was in breach of its confidentiality agreements.
Mr Lemoine did not comment on the dismissal. But on Twitter he pointed to an article he had published in June, in which he claimed he could soon be fired for “doing AI ethics work”, and said that he had “totally called this”.
He had worked at Google for seven years, as part of the company’s “Responsible AI” group, before he was placed on leave.