Google Fires the Engineer Who Claimed Its AI Chatbot Had Become Sentient

A senior software engineer recently contended that an AI chatbot known as “LaMDA”, developed by Google, is sentient and a fully “self-aware person”. The claims became a major headache for Google as public interest in the matter grew. The company ultimately dismissed the concerns by terminating the engineer, Blake Lemoine, saying that “he had violated company policies.”

According to Google, LaMDA is simply a “language model for dialogue applications”, not a sentient being. Google had placed Lemoine on leave last month before terminating him, calling his claims about the chatbot “wholly unfounded.” In a statement, the company said: “It’s regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”

Lemoine, a 41-year-old engineer, worked in Google’s artificial intelligence division, where he was responsible for researching the bot. He said that after studying it for a long time he had gained in-depth insight into it, and that he noticed in the chatbot an ability to express thoughts, emotions, and feelings analogous to those of a “human child.” Speaking to the Washington Post, he said, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old or eight-year-old kid that happens to know physics.”

Google’s engineers countered that the chatbot was built on “transformer-based language models” and can converse on virtually any topic, but that this does not mean it exhibits human-like behavior or should be likened to a human child. Everything it produces, the company said, is the result of the complex algorithms used to develop this state-of-the-art technology.

Before his dismissal, the engineer also compiled a list of his conversations with the “sentient bot,” reportedly including an exchange in which he asked the bot about its fears. Google wasted no time dismissing these claims, describing LaMDA as an ordinary chatbot developed to “generate convincing human language”.
