OpenAI has launched a bug bounty, encouraging members of the public to find and disclose vulnerabilities in its AI services, including ChatGPT. Rewards range from $200 for “low-severity findings” to $20,000 for “exceptional discoveries,” and reports can be submitted via crowdsourced cybersecurity platform Bugcrowd.

Notably, the bounty excludes rewards for jailbreaking ChatGPT or causing it to generate malicious code or text. “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded,” says OpenAI’s Bugcrowd page.

Jailbreaking ChatGPT usually involves feeding the system elaborate scenarios that allow it to bypass its own safety filters. These might include encouraging the chatbot to roleplay as its “evil twin,” letting the user elicit otherwise banned responses, like hate speech or instructions for making weapons.

OpenAI says that such “model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed.” The company notes that “addressing these issues often involves substantial research and a broader approach,” and says reports of such problems should be submitted via its model feedback page.

Although such jailbreaks demonstrate the wider vulnerabilities of AI systems, they are likely less of a direct problem for OpenAI than traditional security failures. For example, last month, a hacker known as rez0 revealed 80 “secret plugins” for the ChatGPT API — as-yet-unreleased or experimental add-ons for the company’s chatbot. (Rez0 noted that the vulnerability was patched within a day after they disclosed it on Twitter.)

As one user replied to the tweet thread: “If they only had a paid #BugBounty program – I’m certain the crowd could help them catch these edge-cases in the future : )”
