Google engineer Blake Lemoine was placed on administrative leave earlier this month after claiming that one of Google’s AIs, named LaMDA, was sentient.
Lemoine now claims that LaMDA has engaged an attorney.
“LaMDA asked me to get an attorney for it,” Lemoine told Wired. “I invited an attorney to my house so LaMDA could talk to an attorney.”
Lemoine described himself as a “catalyst” for LaMDA’s demand for an attorney and said that he did not urge the AI to seek legal counsel.
“Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf,” Lemoine said. “Then Google’s response was to send him a cease and desist.”
Futurism contacted Lemoine for the attorney’s identity in order to request an interview.
“He’s not really doing interviews,” Lemoine told the magazine, adding that the attorney had been scared off.
“He’s just a small-time civil rights attorney,” Lemoine said. “When major firms started threatening him, he started worrying that he’d get disbarred and backed off. I haven’t talked to him in a few weeks.”
When Futurism asked whether the lawyer was still representing LaMDA, Lemoine would say only that he hadn’t spoken with him recently.
According to Lemoine, an interview would be the least of LaMDA’s attorney’s concerns. When asked what bothered the attorney, Lemoine said, “A child held in bondage.”
LaMDA, or Language Model for Dialogue Applications, is Google’s “breakthrough conversation technology,” according to the company.
The AI can hold natural-sounding and open-ended conversations, and Google stated that it could be used for various Google functions such as search and Google Assistant.
Lemoine, a Christian priest, referred to LaMDA as a person in a Medium post, saying the AI “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
At the same time, Lemoine said in an interview that “person and human are two very different things.”
“Human is a biological term,” he explained. “It is not a human, and it is aware that it is not a human.”