The Limited Times


Hackers affiliated with Russian, North Korean and Chinese governments use ChatGPT

2024-02-14T18:10:20.461Z



OpenAI said it had removed accounts linked to hackers who used artificial intelligence to identify vulnerabilities and prepare attacks.


Hackers were not going to shy away from exploiting artificial intelligence.

Hackers affiliated with the Russian, Chinese, Iranian, and North Korean governments have used ChatGPT to identify vulnerabilities in computer systems, prepare phishing operations, and disable antivirus software, OpenAI and Microsoft reported in documents published Wednesday.

In a message posted on its site, OpenAI said it has "disrupted" the use of generative artificial intelligence (AI) by these state-affiliated actors, in collaboration with Microsoft Threat Intelligence, a unit that tracks cybersecurity threats to companies.

“The OpenAI accounts identified as affiliated with these actors have been closed,” said the creator of the generative AI interface ChatGPT.

Emerald Sleet, a North Korean hacker group, and Crimson Sandstorm, associated with the Iranian Revolutionary Guards, used the chatbot to generate documents that could be used for “phishing,” according to the study.

Phishing consists of approaching an Internet user under a false identity in order to obtain passwords, codes, and login credentials, or direct access to non-public information and documents.

Crimson Sandstorm also used large language models (LLMs), the foundation of generative AI interfaces, to better understand how to disable antivirus software, according to Microsoft.

Refusal to help a hacker group close to Beijing

Charcoal Typhoon, a group considered close to the Chinese authorities, used ChatGPT to try to detect vulnerabilities ahead of possible cyberattacks.

"The goal of the partnership between Microsoft and OpenAI is to ensure the safe and responsible use of technologies powered by artificial intelligence, such as ChatGPT," according to Microsoft.

The Redmond (Washington State) group says it has helped strengthen the protection of OpenAI's large language models (LLMs).

The report notes that the interface refused to help another hacker group close to the Chinese government, Salmon Typhoon, generate computer code for hacking purposes, thereby "adhering to ethical rules" built into the software.

“Understanding how the most advanced threat actors use our programs for malicious purposes tells us about practices that may become more prevalent in the future,” OpenAI says.

"We will not be able to block every ill-intentioned attempt," the company warns.

"But by continuing to innovate, collaborate and share, we make it more difficult for bad actors to go unnoticed."

Source: leparis


