With artificial intelligence (AI) in the public spotlight, various types of scams have emerged around ChatGPT, the popular chatbot that lets users interact in natural language and simulates human conversation.
And this was somewhat inevitable: as with every tech fad, malware, rogue applications and more soon appeared.
First, it is worth recalling what this chatbot is: "ChatGPT is an example of a program capable of holding varied dialogues in natural language, trained with techniques in which humans participate, both by providing conversations to learn from and in the subsequent process of evaluating and gradually improving the interactions," explains Javier Blanco, PhD in Computer Science from the University of Eindhoven, the Netherlands.
From there, all kinds of uses began to appear: from drafting work emails to writing college essays to producing legal documents, everyone began to take advantage of a technology as impressive as it is contested.
And the big problem that accompanies any massive technological advance is, almost always, the risk of cyberattacks it brings: the more users there are, the larger the attack surface becomes.
Here, two broad categories must be distinguished. On the one hand, there is the use of ChatGPT to write malicious code, that is, to "weaponize" it, as they say in the jargon. This requires technical knowledge on the part of cybercriminals which, as has already been seen, is being put to use.
On the other hand, there is a more tangible side, closer to the average user: being deceived, both while using the tool and while trying to install it. And here it is worth remembering that ChatGPT has no official application, so anything that gets installed is very likely to end up being a headache later.
Below, experts explain the dangers ChatGPT currently poses, the precautions to take and, finally, the ways this technology can help fight cybercrime.
ChatGPT does not have an app: be careful
To use it from a phone, you have to open it in the browser. Photo: Bloomberg
One of the most common risks is downloading an application that claims to be ChatGPT but is not: an apocryphal program. "There are applications that look like a real ChatGPT app but have malicious goals, such as subscribing the victim to a paid service to make them lose money, or stealing confidential data such as contacts, SMS messages, files or other information from our phone," explains Alberto Herrera of Pucará, a computer security company.
There are also apocryphal web pages that pretend to be the official ChatGPT site: "Many scammers create pages to steal financial data. One of the most common methods is to build web pages with ChatGPT-related payment links aimed at stealing credit cards or bank accounts," he explains.
It is worth remembering that the only official way for the general public to access ChatGPT is through its site: chat.openai.com.
Of course, another recurring problem is the classic phishing campaigns: "The scams also arrive by email with domains or addresses similar to those of ChatGPT, with cloned pages to trick the user into installing malicious applications."
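Mail filters often catch these campaigns with lookalike-domain heuristics. As a rough illustration of the idea, here is a minimal sketch in Python using only the standard library; the function name is invented for this example, "openai.com" is the real domain, and the other domains are made-up stand-ins for typosquats, not actual phishing sites:

```python
import difflib

LEGIT = "openai.com"  # the genuine domain

def looks_like_typosquat(domain: str, legit: str = LEGIT,
                         threshold: float = 0.75) -> bool:
    """Flag domains that contain or closely resemble the brand without being it."""
    if domain == legit:
        return False  # the real domain is never flagged
    brand = legit.split(".")[0]  # "openai"
    if brand in domain:          # e.g. "chat-openai-login.ru"
        return True
    # Near-miss spellings, e.g. "opena1.com"
    return difflib.SequenceMatcher(None, domain, legit).ratio() >= threshold

for d in ["openai.com", "opena1.com", "chat-openai-login.ru", "example.org"]:
    print(f"{d}: {'suspicious' if looks_like_typosquat(d) else 'ok'}")
```

Real filters combine many more signals (sender reputation, link targets, headers), but the principle is the same: "similar to but not equal to" the legitimate address is the red flag.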
For this reason, it is essential to watch for certain signs: "Before installing an application on our phone, we must check that it comes from the official store, Google Play on Android or the App Store on iOS, and that it has been downloaded many times by other users and has a large number of reviews," he explains.
The key point when installing an app: ChatGPT does not have one, so it is best not to install anything claiming to be related. Photo: Google Play Store
Another big problem is browser extensions, which can become an attack vector.
"Before installing an extension, it is important to verify a few things: that the domain or web address belongs to the browser in which we want to install it -google.com, for example-, to check the number of downloads -the more, the better-, that it has reviews or comments from various users, that it has a good reputation and that, as far as possible, it follows the good practices of Google or of whichever browser vendor we use."
Nor is there an official ChatGPT extension for Chrome; there are extensions from other companies, some of them trustworthy, but none official (so how they use your data can be questionable).
ChatGPT as an ally of cybercrime
Ransomware: one of the activities ChatGPT can be used for. Photo: Lockbit Blog
"ChatGPT has also added some spice to the modern cyberthreat landscape, as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks," says Dario Opezzo, regional sales manager at Palo Alto Networks.
"We found that using ChatGPT, you can successfully carry out an entire infection workflow, from creating a convincing phishing email to running a reverse
shell
, capable of accepting commands in English."
"On underground forums on the dark web, cybercriminals report that they are using ChatGPT to create infostealer malware (information-stealing programs), build encryption tools and facilitate fraudulent activity. The researchers want to warn of attackers' growing interest in ChatGPT," explained Alejandro Botter, Check Point engineering manager for southern Latin America, in dialogue with this outlet.
These three recent cases detected by the cybersecurity company show how cybercriminals and fraudsters take advantage of this tool.
The first is the creation of an "infostealer", a type of virus that steals information stored in browsers (passwords, personal data, card data, etc.).
"On December 29, 2022, a thread called 'ChatGPT - Benefits of Malware' appeared on a popular underground hacking forum. The author of the thread revealed that he had been experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. These posts seemed to be teaching less technically skilled attackers how to use ChatGPT for malicious purposes, with real examples they could apply immediately," Botter adds.
A second type of malicious use detected involved the creation of an encryption tool, that is, a program that converts readable data into encrypted data.
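To make the concept concrete, here is a deliberately simplified sketch in Python, using only the standard library, of what "converting readable data into encrypted data" means. This is a toy XOR-keystream construction for illustration only; it is not a secure cipher and is not the code described in the report:

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'encryption' for illustration -- NOT secure, do not use for real data.
    Derives a keystream from the key with SHA-256 and XORs it into the data."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(d ^ k for d, k in zip(data, keystream))

msg = b"readable data"
ct = toy_encrypt(msg, b"secret-key")       # now unreadable without the key
assert ct != msg
assert toy_encrypt(ct, b"secret-key") == msg  # XOR is its own inverse
```

Real encryption tools use vetted ciphers such as AES; the danger the researchers describe is that a script like this, pointed at a victim's files with a key only the attacker holds, is the core mechanism of ransomware.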
A cybercriminal nicknamed USDoD posted one and acknowledged that he had built part of it with ChatGPT.
“This script [file with instructions in a programming language] can be modified to encrypt a computer without any user interaction.
For example, it could turn the code into ransomware," says the expert.
Finally, ChatGPT is already being used to facilitate fraudulent activity: "The primary role of a marketplace in the illicit underground economy is to provide a platform for the automated trading of illegal or stolen goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments made in cryptocurrencies," he explains.
Lastly, there is the question of the data we ourselves hand over to the application when we use it: "Another problem with these language models is that they 'learn' from what we write, so any confidential or private information we upload to the platform will be used in some way," warns Pucará's Herrera.
"That is why it is important to open discussions about moderation and restriction in the AI industry, ChatGPT included," he concludes, a comment that recalls last week's call by Elon Musk and other tech figures to regulate AI and pause for six months any development beyond GPT-4.
How can it be an ally?
ChatGPT can streamline virus-detection processes. Photo: Reuters
Both Palo Alto and Check Point agree that, despite malware being developed with ChatGPT, artificial intelligence should not be "demonized", and that its positive side can be harnessed precisely to pursue cybercrime.
"We should not be afraid of AI, but rather see it as an ally, because it enables automated defense. At Palo Alto Networks, we were ahead of the curve in recognizing the importance of AI- and ML-based security. For example, AI and ML in security help establish a baseline of normal operations and then alert a team to potential anomalies, creating an automation roadmap that saves time and resources," says Palo Alto's Opezzo.
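The baseline-then-alert idea Opezzo describes can be sketched in a few lines. This is a generic illustration of how such systems work in principle, not Palo Alto Networks' actual product; the telemetry values are invented:

```python
import statistics

def anomalies(series, z_threshold=3.0):
    """Flag points more than `z_threshold` standard deviations from the
    baseline mean. The history (all but the newest point) is the baseline."""
    baseline = series[:-1]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return [x for x in series if stdev and abs(x - mean) / stdev > z_threshold]

# Made-up telemetry: failed-login counts per day. The last day spikes.
daily_failed_logins = [12, 15, 11, 14, 13, 12, 16, 14, 13, 95]
print(anomalies(daily_failed_logins))  # the spike stands out from the baseline
```

Production systems model many signals at once and use far richer statistics, but the principle is the same: learn what "normal" looks like, then surface the deviations for a human team.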
In fact, this was one of the topics at this year's annual meeting of the World Economic Forum, where leaders from government, business and cybersecurity discussed the intersection of AI and security.
"AI is already driving scientific breakthroughs. It is helping to detect financial fraud and build climate resilience. It is a tool we can use to improve and advance many areas of our lives, including safety and cybersecurity," says Check Point's Botter.
"By incorporating AI into a unified, multi-layered security architecture, cybersecurity solutions can provide an intelligent system that not only detects but actively prevents advanced cyberattacks," he adds.
Among the benefits AI can bring to fighting scams, the expert lists the automation of repetitive tasks, automatic incident detection and "situational awareness": artificial intelligence can collect and process data that provides more context to warn the user of a dangerous situation.
Thus, like all technology, the artificial intelligence behind ChatGPT can be an ally or a threat. Time will surely put each side in its place.