Google fires AI engineer Blake Lemoine, who claimed its LaMDA AI is sentient

Blake Lemoine, the Google engineer who publicly claimed the company’s LaMDA conversational AI is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA.

A statement emailed to The Verge on Friday by Google spokesperson Brian Gabriel appeared to confirm the firing, saying “we wish Blake well.” The company also says that “LaMDA has been through 11 distinct reviews” and that it published a research paper earlier this year detailing the work that goes into its responsible development. Google says it reviewed Lemoine’s claims “extensively” and found them to be “wholly unfounded.”

That aligns with the views of numerous AI experts and ethicists, who have said his claims were more or less impossible given today’s technology. Lemoine claims his conversations with LaMDA’s chatbot led him to believe that it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, which is what it is designed to do.

He argues that Google’s researchers should seek consent from LaMDA before conducting experiments on it (Lemoine himself was tasked with testing whether the AI produced hate speech), and he published portions of those conversations on his Medium account as evidence.

The YouTube channel Computerphile has a reasonably accessible nine-minute explainer on how LaMDA works and how it could produce the responses that convinced Lemoine without actually being sentient.

Here’s Google’s full statement, which also addresses Lemoine’s accusation that the company failed to properly investigate his claims:

As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.
