In the near future, Google is set to release its highly anticipated AI chatbot, Bard. But according to a recent report from The Wall Street Journal, two engineers, then at Google, built a similar chatbot called Meena years earlier, only to be blocked by executives citing safety concerns.
Daniel De Freitas, a research engineer at Google, began the side project in 2018 with the goal of creating a chatbot that could convincingly mimic human conversation. He was later joined by Noam Shazeer, a software engineer in Google's AI research unit. Together they built Meena, a chatbot that could argue about philosophy, chat casually about TV shows, and generate puns.
Despite their belief that Meena could revolutionize online search, however, the engineers were never allowed to launch it: Google executives reportedly concluded that the bot did not meet the company's AI safety and fairness standards. Attempts to share it with external researchers, fold the chat feature into Google Assistant, and release a public demo were all turned down.
Frustrated by the response, De Freitas and Shazeer left Google in late 2021 to found their own company, Character.AI, which has since released a chatbot that can role-play as figures such as Elon Musk or Nintendo's Mario.
The episode highlights the importance of AI safety and the challenges of developing AI responsibly. It also raises questions about the role large tech companies play in deciding when such technologies are safe and fair enough to release.
As AI technologies continue to advance, companies will need to prioritize safe and ethical development. The story of Meena serves as a reminder that while AI has the potential to reshape many industries, it must be built responsibly and with caution.