Google executives acknowledge that the company’s artificial intelligence search tool, Bard, doesn’t always answer questions accurately, and part of the burden of fixing its wrong answers is falling on employees.
On Wednesday, Prabhakar Raghavan, who leads Search and Assistant at Google, sent an email asking employees to rewrite some of the bad responses generated by Bard, the company’s artificial intelligence (AI) chatbot. According to Raghavan, the AI “learns best by example,” and rewriting problematic responses can help improve the overall quality of the system. “This is exciting technology but still in its early days,” Raghavan wrote. “We feel a great responsibility to get it right, and your participation in the dogfood will help accelerate the model’s training and test its load capacity (not to mention, trying out Bard is quite fun!).”
Bard is built on LaMDA, Google’s conversational language model, and generates text in response to a given prompt, such as a question or a statement. Google plans to bring the underlying technology to its products, starting with Search. However, like all large language models, Bard is not perfect and can sometimes produce responses that are inaccurate, inappropriate, or offensive.
Raghavan’s email was an attempt to address this issue by enlisting employees’ help in improving Bard’s responses. He asked them to flag problematic answers and rewrite them so they are accurate and appropriate, and he encouraged them to give feedback on the overall quality of Bard’s output and where it could be improved.
The idea of using human input to improve AI systems is not new; it is common practice in the development of machine learning models. When the system is shown more examples of good and bad responses, it learns to make better predictions and generate more accurate output. This process is called “training” the model, and it is an essential step in creating effective AI systems.
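To make the idea concrete, here is a minimal, hypothetical sketch of how employee rewrites could be turned into supervised training examples. This is not Google’s actual pipeline; the record fields, file format, and sample correction are assumptions for illustration only.

```python
import json
from dataclasses import dataclass


@dataclass
class RewrittenResponse:
    """One hypothetical correction: the prompt, the model's bad answer, and the human rewrite."""
    prompt: str
    model_response: str
    human_rewrite: str


def to_training_example(record: RewrittenResponse) -> dict:
    """Convert a correction into a supervised fine-tuning pair.

    The model is shown the original prompt and trained to reproduce the
    human-approved rewrite instead of its earlier, problematic answer.
    """
    return {"input": record.prompt, "target": record.human_rewrite}


# Illustrative corrections of the kind internal "dogfood" testing might surface.
corrections = [
    RewrittenResponse(
        prompt="When was the James Webb Space Telescope launched?",
        model_response="It launched in 2022 and took the very first picture of an exoplanet.",
        human_rewrite="The James Webb Space Telescope launched on December 25, 2021.",
    ),
]

# Write the pairs out as JSONL, a common format for fine-tuning datasets.
with open("bard_finetune_examples.jsonl", "w") as f:
    for record in corrections:
        f.write(json.dumps(to_training_example(record)) + "\n")
```

In a setup like this, the rewritten answers become the targets the model is optimized toward, which is why Raghavan frames employee rewrites as directly accelerating the model’s training.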
By asking employees to help improve Bard’s responses, Raghavan is tapping into the collective knowledge and expertise of the Google workforce. It is a collaborative effort that could lead to significant improvements in the quality and accuracy of the company’s AI language model. It also shows that even the most advanced AI systems still require human oversight and intervention to ensure they are ethical and reliable.