OpenAI, the company behind the ChatGPT chatbot, has launched a Bug Bounty Program.
The initiative invites developers and security researchers to hunt for flaws and security issues in the company's products, including ChatGPT itself.
Once a reported bug is patched, the organization pays bounties of up to $20,000 for major vulnerabilities.
Reports are submitted through the Bugcrowd platform, which already hosts many similar programs.
As OpenAI has explained, rewards are based on the severity and impact of the reported issues, ranging from $200 for low-severity security flaws up to $20,000 for "exceptional discoveries."
“The OpenAI Bug Bounty program is a way for us to recognize and reward the valuable insights of security researchers that help protect our technology and our business,” OpenAI said.
"We encourage you to report security vulnerabilities, bugs, or flaws that you discover in our systems. By sharing your findings, you'll play a crucial role in making technology safer for everyone."
Potential weaknesses in ChatGPT include the techniques attackers use to bypass OpenAI's safety measures, for instance to generate inappropriate content, ranging from research material to malicious code for hacking purposes.
Last month, OpenAI disclosed a leak of ChatGPT Plus users' payment data, attributed to a bug in the open-source Redis client library used by the platform.
Because of this bug, some ChatGPT Plus subscribers began seeing other users' email addresses on the subscription payment pages.
The chatbot was taken offline for several hours while the issue was resolved.