Professor Loses Two Years Of Research Work After Clicking The Wrong Button On ChatGPT

A German university professor lost two full years of academic work after a single setting change inside ChatGPT permanently erased his saved conversations and project folders, with no recovery option available from OpenAI, according to a report published by Nature.

Marcel Bucher, a professor of plant sciences at the University of Cologne, had been using ChatGPT Plus as a central workspace for a wide range of professional tasks. These included drafting grant proposals, preparing lectures and exams, revising academic papers, organizing teaching materials, and analyzing student responses. Over time, the chat history and project folders inside ChatGPT effectively became an informal archive of his ongoing research and teaching output.

The data loss occurred when Bucher attempted to disable ChatGPT’s data consent option to see whether the service would continue to function without retaining his information. Instead of merely limiting data usage, the action immediately deleted all of his chats and emptied his project folders. There was no warning explaining the consequences, no confirmation dialog that clearly stated the deletion was irreversible, and no undo option. The interface simply refreshed to a blank workspace.

Initially assuming it was a glitch, Bucher checked multiple browsers, devices, and networks. He cleared caches, reinstalled applications, and even reverted the setting change, but nothing restored the missing content. Partial backups existed for some materials he had manually saved elsewhere, but large portions of his work were lost permanently.

When Bucher contacted OpenAI for help, the first responses he received came from an automated system. After repeated attempts, he reached a human representative, who informed him that recovery was impossible. OpenAI later confirmed to Nature that once chats are deleted, they cannot be restored through the user interface, APIs, or internal support systems, citing privacy and legal requirements. The company stated that users are advised to maintain their own backups for professional use.

The incident highlights a growing tension between AI convenience and data safety. Bucher noted that while he understood the limitations of large language models in terms of factual accuracy, he assumed the workspace itself was stable and protected. As a paying subscriber, he expected safeguards such as clearer warnings, temporary recovery windows, or redundant backups.

His experience mirrors other recent cases where cloud-based platforms erased or locked users out of years of data with little recourse, reinforcing a hard lesson for professionals increasingly relying on AI tools. Convenience does not replace responsibility, and without independent backups, even trusted platforms can become single points of failure.
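For readers who want a practical starting point, ChatGPT currently offers a data-export option that produces a downloadable archive of past conversations; at the time of writing that archive includes a conversations.json file. The sketch below is a minimal, hedged example of turning such an export into one plain-text file per chat so the content also lives outside the platform. The file name, directory names, and JSON layout here are assumptions based on the export format as it has been observed, not a documented or stable interface, and they may change without notice.

```python
# Minimal sketch: convert a ChatGPT data export (conversations.json) into
# one plain-text file per conversation. The file name and JSON structure
# are assumptions about the export format and may change without notice.
import json
import re
from pathlib import Path

EXPORT_FILE = Path("conversations.json")   # assumed name inside the export archive
OUT_DIR = Path("chat_backup")              # local folder for the plain-text copies

def messages_in_order(conversation):
    """Yield (role, text) pairs from a conversation's 'mapping' graph,
    sorted by creation time where available."""
    msgs = []
    for node in conversation.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue
        parts = (msg.get("content") or {}).get("parts") or []
        # Keep only plain-text parts; skip non-text payloads defensively.
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            msgs.append((msg.get("create_time") or 0,
                         msg.get("author", {}).get("role", "unknown"),
                         text))
    for _, role, text in sorted(msgs, key=lambda m: m[0]):
        yield role, text

def main():
    OUT_DIR.mkdir(exist_ok=True)
    conversations = json.loads(EXPORT_FILE.read_text(encoding="utf-8"))
    for i, conv in enumerate(conversations):
        title = conv.get("title") or f"conversation_{i}"
        safe = re.sub(r"[^\w\- ]", "_", title)[:80]   # file-system-safe name
        lines = [f"# {title}", ""]
        for role, text in messages_in_order(conv):
            lines.append(f"[{role}]\n{text}\n")
        (OUT_DIR / f"{i:04d}_{safe}.txt").write_text("\n".join(lines),
                                                     encoding="utf-8")
    print(f"Wrote {len(conversations)} conversations to {OUT_DIR}/")

if __name__ == "__main__":
    main()
```

Run periodically against a fresh export, a script along these lines keeps a local, platform-independent copy of the chat history, which is precisely the kind of redundancy the incident shows is missing by default.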

As AI systems continue to integrate more deeply into academic, professional, and creative workflows, incidents like this raise uncomfortable questions about data ownership, user protection, and how much trust these tools truly deserve.
