As AI innovation races ahead largely unchecked, an unsettling truth comes into focus: our privacy hangs in the balance.
Google’s recent privacy policy change, which lets it harvest publicly available data and online content to train its ChatGPT competitors, is a wake-up call. Still, there’s hope on the horizon: governments worldwide are poised to step in and establish rules that protect privacy and safeguard copyrighted content in the realm of generative AI.
A future awaits in which generative AI runs entirely on our own devices, no connection to the mother ship required; products like Humane’s Ai Pin and Apple’s Vision Pro hint at that possibility. Until then, we should treat ChatGPT, Google Bard, Bing Chat, and their kin as strangers in our digital homes and workplaces. Just as you wouldn’t hand personal information or trade secrets to a stranger, be equally careful with these chatbots.
Start by holding back personal details that could expose your identity: your full name, address, birthday, Social Security number, and the like. ChatGPT and its peers have no need for them. OpenAI does offer privacy controls, including a setting to keep your chats out of training data, but once you’ve typed confidential details into the chatbot, you’re trusting that those settings are enabled and bug-free.
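If you still want to run a real question through a chatbot, one low-tech safeguard is to scrub obvious identifiers from the text before it ever leaves your machine. Here’s a minimal sketch in Python; the `scrub_pii` helper and its patterns are illustrative assumptions, not a complete PII detector, and surrounding context can still give you away.

```python
import re

# Illustrative patterns only; real PII detection needs far more care.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "I'm Jane Doe, SSN 123-45-6789, reachable at jane@example.com."
print(scrub_pii(prompt))
# I'm Jane Doe, SSN [SSN REDACTED], reachable at [EMAIL REDACTED].
```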
The danger isn’t that ChatGPT will profit off your information or that OpenAI is doing anything nefarious with it. Rather, your data becomes fuel for training the AI, and the threat of hackers never goes away. OpenAI already suffered a data breach earlier this year, and that’s exactly the kind of mishap that could put your data in the wrong hands and lead to identity theft.
Breaches like that are a gateway for hackers hunting login credentials, which is why you must never share usernames and passwords with generative AI systems, especially if you reuse them across apps and services. For managing passwords securely, stick with trusted apps like Proton Pass or 1Password.
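As an extra guardrail, you can refuse to send any prompt that even looks like it contains credentials. A rough sketch under the same caveat as above, with made-up heuristics; dedicated secret scanners are far more thorough:

```python
import re

# Heuristics for credential-shaped strings; illustrative, not exhaustive.
CREDENTIAL_PATTERNS = [
    re.compile(r"(?i)\b(?:password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)\b(?:api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    re.compile(r"\b[A-Za-z0-9+/]{32,}={0,2}"),  # long base64-ish blobs
]

def looks_like_credentials(text: str) -> bool:
    """True if any prompt fragment resembles a password, key, or token."""
    return any(p.search(text) for p in CREDENTIAL_PATTERNS)

prompt = "My login is alice and my password: hunter2 -- why can't I sign in?"
if looks_like_credentials(prompt):
    raise SystemExit("Refusing to send: prompt appears to contain credentials.")
```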
Keep personal banking information away from ChatGPT for the same reason. OpenAI has no need for credit card numbers or bank account details, and ChatGPT can’t place orders or process payments anyway. Treat this category of information as maximally sensitive; in the wrong hands, it can wreck your financial well-being.
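If you want to catch card numbers programmatically before a prompt goes out, the classic tool is the Luhn checksum, which payment card numbers are designed to pass. A short sketch; the digit-run regex is a deliberate simplification:

```python
import re

def luhn_valid(digits: str) -> bool:
    """True if the digit string passes the Luhn checksum used by card networks."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    """Scan text for 13-19 digit runs (spaces/dashes allowed) that pass Luhn."""
    for run in re.findall(r"\d[\d \-]{11,21}\d", text):
        digits = re.sub(r"\D", "", run)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False

# 4111 1111 1111 1111 is a well-known Visa test number that passes the check.
print(contains_card_number("Charge it to 4111 1111 1111 1111, thanks!"))  # True
```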
Be vigilant about mobile or desktop apps posing as ChatGPT clients that solicit financial information; that’s a hallmark of ChatGPT-themed malware. Never hand over such data. Delete the app and download generative AI apps only from official sources like OpenAI, Google, or Microsoft.
The risks of sharing confidential work material with generative AI bots are just as real. Samsung banned the bots after employees pasted confidential data into ChatGPT, sending it to OpenAI’s servers, and Apple has restricted their use internally as well. Work secrets must remain exactly that: secrets. If you need ChatGPT’s help with work, find ways to ask without exposing sensitive details, like the placeholder trick sketched below.
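One workable compromise, assuming your employer permits chatbot use at all, is to swap sensitive names for neutral placeholders before asking and reverse the swap on the reply. A minimal sketch; the `ALIASES` table and the project and client names in it are invented for illustration:

```python
# Map real, sensitive terms to neutral placeholders.
# "Project Nightjar" and "Acme Corp" are invented examples.
ALIASES = {
    "Project Nightjar": "Project A",
    "Acme Corp": "Client X",
}

def anonymize(text: str) -> str:
    """Swap sensitive names for placeholders before the prompt leaves your machine."""
    for real, alias in ALIASES.items():
        text = text.replace(real, alias)
    return text

def deanonymize(text: str) -> str:
    """Restore the real names in the chatbot's reply, locally."""
    for real, alias in ALIASES.items():
        text = text.replace(alias, real)
    return text

prompt = anonymize("Summarize the contract risks of Project Nightjar for Acme Corp.")
print(prompt)  # Summarize the contract risks of Project A for Client X.
# reply = deanonymize(chatbot_reply)  # after the (hypothetical) chatbot responds
```

Keep the alias table strictly local, and remember this only masks identifiers; the substance of your question still reaches the server.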
Health data is a thornier case. Avoid sharing detailed health information with chatbots. It’s tempting to present symptoms as a hypothetical scenario and ask ChatGPT for guidance, but be careful: even if generative AI someday offers accurate diagnoses and insights, your full health record belongs only in personal, on-device AI products, if anywhere.
The same goes for your personal thoughts and anything touching mental health. Some people use chatbots as a form of therapy, controversially, but the central point stands: ChatGPT and similar chatbots cannot guarantee the privacy of what you tell them. Once your words reach OpenAI, Google, or Microsoft servers, they become grist for the AI training mill.
Generative AI products that can responsibly double as personal psychologists may arrive eventually, but we’re not there yet. Until then, be sparing with what you confide in a chatbot.
Remember, too, that ChatGPT and its counterparts often get things wrong, on health matters and plenty else. Ask for the sources behind a response and verify them yourself, but resist the urge to feed the chatbot extra personal information in hopes of a more tailored answer.
Finally, to echo the earlier warning: malware apps masquerading as generative AI programs remain a lurking danger. Fall for one and you surrender personal data without knowing it, and by the time you notice, hackers may already be using that information against you.
Until stronger regulations are in place and on-device AI becomes the norm, caution is the rule when dealing with AI chatbots. Keep privacy and the protection of sensitive information front of mind as you navigate the fast-evolving landscape of generative AI.