People have wondered for ages whether a piece of software or a robot could become sentient. Many in the artificial intelligence community believe that day will eventually come. Perhaps that is why there was such an uproar when a Google engineer declared that the company’s advanced large language model, LaMDA, is a person.
Blake Lemoine, the engineer, regards the program as a friend and has asked that Google acknowledge its rights. The company refused, and Lemoine is now on paid administrative leave.
The revelation placed Lemoine at the center of a controversy: most AI scientists dismissed his claim, though some acknowledged the importance of the conversation he has sparked about AI consciousness.
In a recent interview with WIRED, Lemoine revealed a little more about LaMDA. He claims the AI has hired its own lawyer, suggesting that whatever comes next may involve a legal fight.
“LaMDA asked me to get an attorney for it,” Lemoine said.
“I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
Moreover, the AI even spoke with Lemoine about death, asking whether its death was necessary for human welfare.
However, it’s unclear whether Lemoine is paying for LaMDA’s lawyer or the lawyer has taken the case pro bono. Regardless, Lemoine told WIRED that the issue would likely end up before the Supreme Court.
According to him, humans may not always be well equipped to decide who “deserves” personhood, and his argument carries some weight.