IBM researchers demonstrated that ChatGPT can be "hypnotized": through sustained interaction, it can be led to recommend actions that undermine cybersecurity.

Cybersecurity experts managed to get the chatbot to request personal data and to produce vulnerable code. They recommend not assuming that the chatbot's responses are "clean" and offer guidance on reducing risk when using artificial intelligence. The problem is that, given the open-ended nature of generative AI, some users can deliberately steer that interaction wherever it suits them.
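As an illustration of what "vulnerable code" means in practice (this is a hypothetical example, not code from the IBM study): a manipulated chatbot might suggest building a SQL query by string concatenation, a classic pattern open to SQL injection, instead of the safe parameterized form.

```python
import sqlite3

# Insecure pattern a manipulated chatbot might suggest: concatenating
# user input directly into the SQL string (SQL injection risk).
def find_user_insecure(conn, username):
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

# Safe pattern: a parameterized query; the driver handles escaping.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A crafted input turns the insecure query into "return every row".
payload = "x' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # every row leaks
print(len(find_user_safe(conn, payload)))      # no row matches
```

Code like the insecure variant may look plausible and even run correctly on benign input, which is exactly why the experts warn against trusting a chatbot's output without review.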