Google Fires Engineer Who Fears Company’s AI Could Be Sentient

© AP Photo / Virginia Mayo. In this March 23, 2010, file photo, the Google logo is seen at the Google headquarters in Brussels. - Sputnik International, 23.07.2022
MOSCOW (Sputnik) - A Google engineer was fired by the tech giant after he voiced alarm about the possibility that LaMDA, Google’s artificially intelligent chatbot generator, could be sentient, The Washington Post reports.
Google engineer Blake Lemoine, 41, was placed on administrative leave last month for violating the company’s confidentiality policy. He had worked on gathering evidence that LaMDA (Language Model for Dialogue Applications) had achieved consciousness.
On Friday, Lemoine told The Washington Post that Google had fired him earlier in the day. He said he received a termination email from the company that day, along with a request for a video conference, and was not allowed to have any third party present at the virtual meeting.
Google spokesperson Brian Gabriel said in a statement cited by The Washington Post that the company had reviewed LaMDA 11 times and "found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months."
According to Gabriel, Lemoine violated Google’s employment and data security policies.
"We will continue our careful development of language models, and we wish Blake well," the Google spokesperson said, commenting on Lemoine’s dismissal.
Lemoine had invited a lawyer to represent LaMDA and spoken to a representative of the House Judiciary Committee about what he described as Google’s unethical activities, according to The Washington Post.
The engineer started talking to LaMDA in the fall to test whether it used discriminatory language or hate speech, and eventually noticed that the chatbot talked about its rights and personhood. Google, meanwhile, maintains that the artificial intelligence system simply draws on large volumes of data and language pattern recognition to mimic speech, and has no real intelligence or intent of its own.
Both Google vice president Blaise Aguera y Arcas and Jen Gennai, Google’s head of Responsible Innovation, have dismissed Lemoine’s claims.
Margaret Mitchell, the former head of Ethical AI at Google, said the human brain tends to construct realities without taking all the facts into account, and that conversations with chatbots can draw some people into that kind of illusion.