Google has announced the public release of Bard, its conversational AI service that competes with OpenAI’s ChatGPT. Users in the US and UK can sign up for a waitlist, with the company adding people on a rolling basis.
Bard is Google’s bid to catch up with OpenAI in the artificial intelligence race. According to Sissie Hsiao, Google’s vice president of product for Bard, “Bard is here to help people boost their productivity, accelerate their ideas, and to fuel their curiosity.”
Generative AI, software that creates text, images, music, and even video in response to user prompts, has generated intense buzz in Silicon Valley. Google has worked on such systems for years but has largely kept its efforts inside its labs.
However, the company is now trying to catch up with OpenAI and Microsoft, which have already made their conversational AI services more widely available to the public. OpenAI’s ChatGPT has been popular worldwide since its November release, and Microsoft has integrated OpenAI’s tech into Bing search.
Bard is Google’s early experiment in letting users collaborate with generative AI. The chatbot is powered by LaMDA, a large language model developed internally by Google, and draws on Google’s “high-quality” information sources to provide up-to-date answers.
According to Eli Collins, Google’s vice president of research for Bard, the company is initially limiting the length of conversations for safety reasons and will raise those limits over time. It is not disclosing the current limits with this release.
Bard was developed in accordance with Google’s AI principles. The company’s demonstrations include a warning at the bottom of the chat window: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”
Bard’s users can hold back-and-forth conversations with the AI, much like Microsoft’s new Bing service. Bloomberg reporters tested Bard’s capabilities and weaknesses with both silly and serious prompts. It showed decent knowledge when asked to compose a sonnet about Squishmallows, demonstrating its ability to draw on a wide range of topics.
However, it refused to answer a question about how to make a bomb, reflecting Google’s efforts to prevent the technology from being used for harmful purposes.
Collins explained that Bard’s fine-tuning process aims to reject questions about hateful, illegal, or dangerous topics. The company expects to learn more about Bard’s capabilities as more users try it.
The demonstration also showed that Bard’s responses are not always grounded in reality. When asked for tips on throwing a birthday party on Mars, for example, Bard suggested planning the trip well in advance but did not mention that such a trip is currently impossible.
While Bard has clear limitations, Google expects the service to improve as more people use it.
Overall, the growing availability of generative AI services like OpenAI’s ChatGPT and Google’s Bard marks a significant step forward in the development of artificial intelligence. These services let people interact with AI more naturally and meaningfully, opening up new possibilities for collaboration and creativity.
However, as with any new technology, there are also potential risks and challenges to be addressed, including concerns about the accuracy and safety of the information generated by these systems.
As AI continues to evolve and become more integrated into our lives, it will be necessary to weigh these issues carefully and to ensure that the technology’s benefits are maximized while its negative consequences are minimized.