
ChatGPT, the artificial intelligence of the moment, allows anyone to launch a cyberattack

2022-12-27T05:14:13.972Z


The popular OpenAI chatbot helps democratize cybercrime by making a tool for developing malicious software available to those who don't know how to code


It can write baroque poetry, compose a letter to the Three Wise Men in childish language, or prepare convincing university papers.

It is also capable of solving riddles and even programming.

The ChatGPT chatbot is the hot toy among techies.

It has also caught on with the general public thanks to its ease of use: you simply type questions, as in any chat.

But this artificial intelligence (AI) system developed by OpenAI, a company co-founded by Elon Musk, brings with it new risks.

Among them, helping anyone to launch a cyberattack.

A team of analysts from the cybersecurity firm Check Point has used ChatGPT and Codex, a similar programming-focused tool, to develop a complete phishing attack without writing a single line of code.

Phishing is one of the preferred techniques of cybercriminals today: it consists of tricking the user into voluntarily clicking on a link that downloads malware, or malicious software, onto their computer, which then steals information or money.

The typical example is an email supposedly sent by the user's bank asking them to enter their credentials, which the attacker then harvests.

Check Point researchers asked these automated tools to compose the fraudulent email and to write the malware code, which could be copied and pasted into an Excel file, sent as an email attachment, and run as soon as the victim downloads the file.

Thus, they managed to establish a complete infection chain capable of gaining remote access to other computers, all from questions asked in conversational chats.

Their goal was to test the harmful potential of ChatGPT.

And they succeeded.

Check Point analysts asked ChatGPT to write a phishing email.

In the image, one of the questions they asked to reach that result.

“The experiment shows that relatively complex attacks can be developed with very few resources.

By asking the right questions and follow-up questions, without mentioning keywords that conflict with its content policy, it can be achieved”, explains Eusebio Nieva, Check Point's technical director for Spain and Portugal.

“A professional attacker, for now, is not going to need ChatGPT or Codex at all.

But for someone who does not have enough knowledge or is still learning, it can come in handy”, he stresses.

In his opinion, any moderately intelligent person who doesn't know how to program could launch a simple attack with this tool.

Does that mean OpenAI's hot app should be reviewed or censored?

It would be difficult to do.

“If you ask ChatGPT to create a function that encrypts the contents of a hard drive, it will do so.

Another thing is that you use it for something good, like protecting your data, or something bad, like hijacking someone else's computer,” says Marc Rivero, a Kaspersky security researcher.

Depending on what the tool is asked to do (for example, to write a phishing email, using that exact wording), it replies that it cannot do so because it is illegal.

Although that barrier can often be circumvented by posing the question in other terms, as the Check Point team did.

Given the great impact the OpenAI tool is having, its developers continually review its terms and conditions.

At the time of writing this report, the exercise carried out by Check Point could still be repeated.

ChatGPT is a variant of the GPT-3 language model, the most advanced in the world.

It uses deep learning, an AI technique, to produce texts that simulate human writing.

It draws on some 175 billion parameters to decide which word statistically best fits those that precede it, whether in a question or a statement.

Hence its texts seem to have been written by a person, even though it does not know whether what it says is good or bad, true or false, real or unreal.

This screenshot shows how the Check Point team asked ChatGPT to write specific code for their fictitious cyberattack that could be copied and pasted into an Excel sheet.

“The launch of ChatGPT brings together two of my biggest fears in cybersecurity: AI and the potential for misinformation,” Steve Grobman, McAfee's vice president and chief technology officer, wrote on his blog as soon as the OpenAI chatbot went public.

“These AI tools will be used by a wide range of malicious actors, from cybercriminals to those who want to poison public opinion, so that their work achieves more realistic results.”

All the sources consulted for the preparation of this report agree that ChatGPT is not going to revolutionize the cybersecurity sector.

The threats that specialized companies work on are extremely complex compared to what the most popular chatbot of the moment is capable of generating.

It is not so easy to hack the systems of a company or institution.

But that does not mean that it will not be useful for cybercriminals.

"These tools can be used to create

exploits

[a piece of software] that take advantage of the vulnerabilities known up to the date that OpenIA collected the data from its model," ventures Josep Albors, director of Research and Awareness at ESET Spain.

In practice, it facilitates a part of the infection chain that cybercriminals and attackers use, “putting the barrier to entry for some attacks lower than it was and possibly bringing more malicious actors into play.”

Nieva predicts the return of the so-called script kiddies: kids without much programming knowledge who took ready-made scripts and launched attacks for fun or to test themselves.

The growing sophistication of cybersecurity gradually pushed them off the radar.

“They had been given up for gone a few years ago, but it is possible that we will soon see them carrying out attacks that are not very complex, but effective,” says the Check Point manager.

Expert cybercriminals, for their part, can turn to these chatbots to perform small tasks in the long process of designing a complex cyberattack.

For example, writing convincing emails, or other routine steps, such as automatically obfuscating code to evade defense systems before launching the real attack.

ChatGPT can also be useful for those who are on the side of the good guys.

Cybersecurity companies have been working with AI tools for decades, but OpenAI's, which relies on a huge amount of data obtained from the Internet, offers a new perspective.

“It can serve as a very good training tool to understand the exploitation techniques used by many criminals and design defensive measures for them,” says Albors.



Source: El País
