Google has placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policies after he became concerned that an AI chatbot system had achieved sentience, the Washington Post reports. The engineer, Blake Lemoine, works for Google’s Responsible AI organization and was testing whether the LaMDA model generates discriminatory language or hate speech.
The engineer’s concerns reportedly stemmed from convincing responses he saw the AI system generate about its rights and the ethics of robotics. In April, he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing “that it is sentient because it has feelings, emotions and subjective experience.”
Google believes that Lemoine’s actions relating to his work on LaMDA violated its confidentiality policies, The Washington Post and The Guardian report. He reportedly invited a lawyer to represent the AI system and spoke to a representative of the House Judiciary Committee about alleged unethical activities at Google. In a June 6th Medium post, published the day Lemoine was placed on administrative leave, the engineer said he had sought “a minimal amount of outside consultation to help guide me in my investigations” and that the list of people he had held discussions with included US government employees.
The search giant publicly announced LaMDA at Google I/O last year, and it hopes the model will improve its conversational AI assistants and make for more natural conversations. The company already uses similar language model technology for Gmail’s Smart Compose feature and for search engine queries.
In a statement given to WaPo, a Google spokesperson said there is “no evidence” that LaMDA is sentient. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such wide-ranging claims, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.
A linguistics professor interviewed by WaPo agreed that it is incorrect to equate convincing written responses with sentience. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said University of Washington professor Emily M. Bender.
Timnit Gebru, a prominent AI ethicist who was fired by Google in 2020 (though the search giant maintains she resigned), said the discussion over AI sentience risks “derailing” more important ethical conversations surrounding the use of artificial intelligence. “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’ [artificial general intelligence] to save us while what they do is exploit), spent the whole weekend discussing sentience,” she tweeted. “Derailing mission accomplished.”
Despite his concerns, Lemoine said he intends to continue working on AI in the future. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.
Update June 13, 6:30 AM ET: Updated with additional statement from Google.