OpenAI Is Paying Users Up To $20,000 To Spot Vulnerabilities In ChatGPT

In a blog post, OpenAI has announced a new initiative encouraging security researchers to help find bugs in its popular chatbot, ChatGPT.

The company’s “Bug Bounty Program” will reward researchers for their contributions to keeping the technology and company secure. The program covers a range of vulnerabilities, including payment, data exposure, and authentication issues.

“We are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our system,” the blog post read. 

Rewards for identifying bugs start at $200 for low-severity findings and can go up to $20,000 for exceptional discoveries.

“We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone,” the blog post continued.

ChatGPT is an AI chatbot that uses generative technology to interact with users in a conversational manner. It can respond to queries and provide written text, songs, poems, and even computer code. Despite its popularity, the chatbot has had some security issues in the past.

“Model safety issues,” such as “issues related to the content of model prompts and responses,” are not eligible for the bug bounty program, as they “are not individual, discrete bugs that can be directly fixed,” the program description stated.

Those past issues include a glitch that allowed some users to see the titles of other users’ conversations. The chatbot was also banned in Italy due to privacy concerns.

OpenAI has assured its users that it is committed to protecting their privacy and has proposed measures to address those concerns.
