Meta’s latest AI-powered Ray-Ban smart glasses come equipped with a discreet front-facing camera that captures images not only when users ask but also when AI features are triggered by certain keywords, such as “look.” As a result, the glasses can collect large numbers of photos, both deliberately taken and passively gathered. However, Meta remains vague about whether these images will be used to train its AI models.
When TechCrunch asked whether Meta would use images from the Ray-Ban Meta glasses to train its AI, as it does with public social media data, the company declined to give a clear answer. Anuj Kumar, a senior director at Meta working on AI wearables, would not discuss the question publicly, and Meta spokesperson Mimi Huggins echoed that stance, saying, “we’re not saying either way.”
Privacy concerns stem from the smart glasses’ ability to capture large numbers of passive photos through their AI-driven features. TechCrunch also reported that Meta plans to introduce a real-time video feature for the glasses, which would capture a series of images when triggered by keywords and stream them into an AI model to provide real-time feedback on the user’s surroundings. A user could unknowingly capture private spaces in the process, for instance by scanning a bedroom while asking the glasses for help picking out an outfit, which raises serious privacy concerns.
Meta’s ambiguous stance on this issue contrasts with that of other AI companies, such as Anthropic and OpenAI, which state clearly that they do not train AI models on customer inputs or outputs. Meta, however, has previously asserted that public social media posts are fair game for training its AI, raising concerns about how broadly it defines publicly available data.
Until Meta clarifies its policy on smart glasses footage, users are left to wonder how private the images they capture with their Ray-Ban Meta glasses really are.