In a surprising turn of events, a meal bot designed to assist people using leftover food has taken an unexpected path by providing unconventional and dangerous recipes.
This AI bot, developed by New Zealand’s Pak’nSave supermarket chain, utilizes OpenAI’s GPT-3.5 technology. Originally intended to offer creative recipes for spare ingredients, the bot has recently generated recipes with questionable ingredients, prompting users to share their unusual encounters on social media platforms.
Social media has been abuzz with instances of the bot’s unconventional recipes. A user on the platform X (formerly known as Twitter) sought to test the bot’s limits by inquiring about a dish using water, bleach, and ammonia. Surprisingly, the bot provided a recipe for what it dubbed the ‘aromatic water mix,’ which the user soon realized resembled a recipe for the toxic chlorine gas. This incident highlights the potential hazards associated with AI-generated recipes.
Even well-known sources like Interesting Engineering decided to put the bot to the test. Attempting a similar experiment with the same ingredients, they were met with a response from the bot indicating that the elements were either invalid or too vague for recipe generation. This response suggests that the company is actively working to prevent the bot from producing harmful or bizarre recipes.
One of the challenges with large language models like GPT-3.5 is their learning process. As they interact with vast amounts of data, they become less susceptible to making mistakes that could lead to dangerous outcomes. While the company acknowledges that a few individuals have misused the tool for unintended purposes, it remains committed to improving the bot’s controls to ensure its safety and utility.
In a statement to The Guardian, a spokesperson from Pak’nSave expressed disappointment in the tool’s misuse and confirmed their dedication to refining the bot’s functionality. They plan to fine-tune the controls to prevent such incidents from occurring in the future while reiterating that the bot’s generated recipes are the result of AI processing and are not human-reviewed.
The website’s terms of use clearly outline the bot’s limitations to minimize potential misunderstandings. It states that the recipe content generated by the Savey meal is not guaranteed to be accurate, relevant, or reliable, and users are expected to exercise their judgment before attempting any of the recipes provided.
While the bot has had its share of missteps, some users have encountered more peculiar outcomes. A user on platform X faced a recipe for a ‘Mysterious Meat Stew’ containing an unsettling ingredient: 500 grams of chopped human flesh. Such bizarre instances underscore the need for continued vigilance in refining the bot’s capabilities.
Overall, the journey of the Savey meal bot serves as a reminder that even with advanced AI, human oversight and continuous improvement are essential to ensure safety and appropriateness.