ChatGPT has become a genuine phenomenon in recent months, with millions of users around the world chatting with the AI-powered bot for both fun and serious purposes. According to a new report, however, some have decided to exploit the chatbot's popularity and steer it in malicious directions.
According to findings published on the SlashNext website, a new generative AI tool called WormGPT is being circulated on underground forums as a sophisticated way to defraud businesses and carry out digital attacks against them, making it easy to steal information.
Hacker (illustration), photo: Reuters
"The tool presents itself as a blackhat alternative to ChatGPT, designed for malicious purposes," said security researcher Daniel Kelly. "Cybercriminals can use this technology to automatically generate fake emails that are personalized to the recipient, increasing the chances of an attack succeeding." The tool's creator even described it as "ChatGPT's biggest enemy, which lets you do all kinds of illegal things."
The fact that WormGPT operates without predefined ethical boundaries highlights the risk posed by generative AI: it allows hackers to launch such attacks easily, without needing the technical skills such attacks once required. "Even cybercriminals with limited capabilities can use this technology," Kelly noted.