Bard Was Labeled A ‘Pathological Liar’ By Employees – But It Got Released Anyway

In internal messages at Google, employees have repeatedly criticized the company-created chatbot Bard, describing the system as “a pathological liar” and beseeching the company not to launch it.

This comes from a Bloomberg report that cited discussions with 18 current and former Google employees as well as screenshots of internal messages. In these, one employee noted that Bard would frequently give users dangerous advice on topics like how to land a plane or how to scuba dive.

Another said, “Bard is worse than useless: please do not launch.” Bloomberg says the company even “overruled a risk evaluation” submitted by an internal safety team which concluded the system was not ready for general use. Google opened up early access to the “experimental” bot in March anyway.

The report suggests that Google has set aside ethical concerns in order to stay neck and neck with its competitors. As employees have observed, the company once emphasized ethics and safety but has recently been prioritizing business instead.

Google fired two researchers, Timnit Gebru and Margaret Mitchell, in 2020 and 2021 respectively, after they co-authored a research paper exposing flaws in the same AI language systems that underpin chatbots like Bard.

Since these systems threaten Google’s search business model, the company seems even more focused on business over safety. As Bloomberg puts it, paraphrasing testimonials of current and former employees, “The trusted internet-search giant is providing low-quality information in a race to keep up with the competition while giving less priority to its ethical commitments.”

Many in the AI world would disagree with this assessment, however. The common counterargument is that public testing is necessary to develop and safeguard these systems, and that the known harm caused by chatbots so far is minimal.

While it is true that these systems produce misleading and false information from time to time, so do numerous other sources on the web; even Wikipedia, a popular reference site, is not 100% factual. By that logic, Google’s rivals like Microsoft and OpenAI are arguably just as compromised as Google. The only difference is that they are not leaders in the search business and have less to lose.

Brian Gabriel, a spokesperson for Google, told Bloomberg that AI ethics remained a top priority for the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” said Gabriel.
