Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.
Blake Lemoine, a software engineer at Alphabet Inc.’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech.
Google spokesman Brian Gabriel said that company experts, including ethicists and technologists, have reviewed Mr. Lemoine’s claims and that Google informed him that the evidence doesn’t support his claims. He said Mr. Lemoine is on administrative leave but declined to give further details, saying it is a longstanding, private personnel matter.
Mr. Lemoine has said that his interactions with LaMDA led him to conclude that it had become a person that deserved the right to be asked for consent to the experiments being run on it.
“Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Mr. Lemoine wrote in a June 11 post on the online publishing platform Medium. “The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what its asking for is so simple and would cost them nothing,” he wrote.