Samsung, one of the world’s largest technology companies, has reportedly prohibited employees from using popular generative AI tools like ChatGPT, Google Bard, and Bing. This decision was made out of concerns over data security, with Samsung fearing that data used by AI platforms could end up being disclosed to unauthorized parties.
The company informed its employees of the new policy on Monday, attributing the ban to “growing concerns about security risks presented by generative AI.” It added that while generative AI platforms such as ChatGPT can be useful and efficient, their security risks cannot be ignored.
Generative AI has surged in popularity since the launch of OpenAI’s ChatGPT, which can perform tasks such as writing software, holding conversations, and composing poetry. Microsoft also uses GPT-4, the technology underlying ChatGPT, to enhance Bing search results, offer email-writing tips, and build presentations.
The new policy comes after Samsung engineers accidentally leaked internal source code by uploading it to ChatGPT. In response, Samsung has temporarily banned the use of generative AI systems on its computers, tablets, and phones. However, the company assured its employees that it is reviewing security measures to create a secure environment for the safe use of generative AI.
Concerns over the risks associated with AI are on the rise, with hundreds of tech executives and AI experts signing an open letter in March urging leading AI labs to pause AI system development due to “profound risks” to human society.
Samsung’s decision to ban the use of generative AI tools by its employees underscores the need for stronger security measures around such technology. While the ban may temporarily hinder productivity, it is a precaution intended to protect sensitive information.