Microsoft’s AI-powered “Recall” feature has once again sparked privacy debates. Designed to capture snapshots of your screen every few seconds, the tool aimed to offer a visual history of computer usage. However, investigations by Tom’s Hardware reveal a troubling flaw: it still captures sensitive data like Social Security numbers, credit card details, and personal information, even with the “filter sensitive information” setting enabled.
This filter, introduced after Recall’s initial backlash, is supposed to block screenshots containing sensitive data. Yet tests conducted by Avram Piltch, Tom’s Hardware editor-in-chief, show it’s unreliable. In one example, Piltch noted, “When I entered a credit card number and a random username/password into Notepad, Recall captured it.” Similar issues occurred while filling out loan applications in Microsoft Edge. Alarmingly, the filter only consistently blocked data on online shopping sites like Pimoroni and Adafruit. “The AI filter is not foolproof,” Piltch argued. “Real users frequently handle sensitive information in ways the filter fails to recognize, such as completing PDF forms or copying data into text files.”
Initially launched as part of Microsoft’s “Copilot+ PCs,” Recall’s rollout was reversed following criticism over privacy risks and unencrypted screenshots. These security lapses heightened fears of potential misuse, with users likening the feature to a surveillance tool. Microsoft subsequently limited Recall to its Windows Insider Program before pulling it entirely.
While Recall’s concept of helping users revisit past on-screen activity is appealing, its execution has faltered. Critics highlight the inherent difficulty of training AI to recognize every context in which sensitive data appears. Microsoft’s efforts to address these issues demonstrate some progress but fall short of restoring user trust. As the tech giant continues to refine its AI tools, the Recall controversy underscores the delicate balance between innovation and safeguarding privacy.