
"Artificial intelligence can change the balance between attacker and attacker" | Israel Hayom

2023-08-19T09:09:36.791Z

Highlights: AI generators like ChatGPT are becoming a powerful weapon in the hands of hackers. Cybersecurity professionals understand that there is a need to commit and act to moderate AI technology. "As was the case with the coronavirus, cooperation between countries and various entities is also required now," says Yuval Wolman, president of CyberProof. "We need to go back to basics and emphasize proper procedures among employees," says Wolman. "Many companies have transferred large parts of their product to artificial intelligence interfaces," he adds.


Only rarely do business companies ask, on their own initiative, for regulation to apply to them • Industry professionals understand that there is a need to commit and act to moderate AI technology • "As was the case with the coronavirus, cooperation between countries and various entities is also required now"


The dark side of artificial intelligence: AI generators like ChatGPT are also becoming a powerful weapon in the hands of hackers, and organizations need to realign. We spoke with Yuval Wolman, president of the cybersecurity company CyberProof, who argued that "we need to go back to basics and emphasize proper procedures among employees."

It is rare for business companies, which are in fierce competition with one another, to ask on their own initiative that the legislature apply regulation and restrictions to them – and as soon as possible. Yet this is exactly the move currently being promoted by the technology companies leading the field of artificial intelligence, such as those that met with President Biden a few weeks ago.



Biden's statement after his meeting with tech giants, July 2023. Photo: Reuters



Industry professionals understand that regulatory legislation is a process that can take a long time, and they have therefore agreed, at this stage, to commit voluntarily to an ethical charter that will mitigate the risks posed by artificial intelligence. Most of the charter's articles focus on security.

Among other things, the charter calls for developing measures that allow users to identify content generated by AI generators (such as "watermarks"), submitting AI applications to external security experts before they are released to the market in order to identify vulnerabilities and weaknesses, and sharing information and jointly formulating procedures and policies.
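The charter does not spell out how such watermarks would work, but published research proposals give a flavor of one approach: bias the generator toward a pseudo-random "green list" of tokens, then test whether a suspect text contains statistically too many green tokens. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the hash seeding, the 0.5 green fraction, and the z > 4 threshold are all assumptions for the example, not any signatory's actual scheme.

```python
import hashlib

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to a green list seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_z_score(tokens: list[str]) -> float:
    """z-score of the green-token count against the unwatermarked baseline."""
    n = len(tokens) - 1
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - mean) / var ** 0.5

# Ordinary human text should score near z = 0; a generator that deliberately
# favors green tokens pushes z far above a detection threshold such as z > 4.
print(green_z_score("the quick brown fox jumps over the lazy dog".split()))
```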

Yuval Wolman, president of CyberProof, which provides managed cybersecurity services to a variety of industries and belongs to UST, explained in a conversation with Israel Hayom that international cooperation is the order of the day. "During COVID-19, countries proved that they can work together toward a common goal. That kind of collaboration is also required in the field of artificial intelligence. Business competition is accelerating technological progress, and we need regulation that will serve as a counterweight."

Yuval Wolman, President of CyberProof. Photo: Ivan Gonzalez

New and particularly deadly attacks

In general, hackers' access to generative AI engines, such as ChatGPT, requires a realignment throughout the cybersecurity world. A few months ago, researchers at HYAS Labs, a cybersecurity research firm, demonstrated how ChatGPT could be used to develop a new, highly deadly type of malware.

The researchers developed spyware, dubbed "Black Mamba," which can change its code automatically and quickly as it spreads and contend with device protection systems – without remote control by a human hacker, relying instead on ChatGPT's code-writing capabilities.

The researchers showed how the spyware managed to slip through conventional defense systems and spread to millions of devices at breakneck speed. "This experiment illustrated the use hackers can make of the technology," Wolman explained. "It impairs the ability of existing defense systems to alert and respond."
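To see why self-rewriting code neutralizes signature-based defenses, consider a minimal, benign sketch (hashing only, no malware logic): two functionally identical snippets hash to entirely different values, so a blocklist keyed to the first one's hash never matches the second.

```python
import hashlib

# Two functionally identical payloads whose source text differs, standing in
# for code that an AI engine regenerates on every infection.
variant_a = "def run():\n    return sum(range(10))\n"
variant_b = (
    "def run():\n"
    "    total = 0\n"
    "    for i in range(10):\n"
    "        total += i\n"
    "    return total\n"
)

# A static "signature" database that knows only the first variant's hash.
signature_db = {hashlib.sha256(variant_a.encode()).hexdigest()}

for name, code in [("variant_a", variant_a), ("variant_b", variant_b)]:
    digest = hashlib.sha256(code.encode()).hexdigest()
    verdict = "BLOCKED" if digest in signature_db else "missed"
    print(f"{name}: {verdict}")  # variant_b slips past the static signature
```

This is the gap Wolman points to: when every copy looks different, defenses must key on behavior rather than on a fixed fingerprint.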

Cybercrime, illustration. Photo: GettyImages

The human factor is the most important wall of defense

"Black Mamba" is an example of sophisticated use of AI engines. The greater danger lies in the use of ChatGPT to carry out the simplest and most common type of attack: phishing. In phishing attacks, hackers try to extract personal information from people through emails, messages, and seemingly "innocent" chat communications – for example, an email from the "bank" asking us to change the password on our personal account page.

The key to this type of fraud is the authenticity of the interaction: any mistake in wording or syntax may arouse suspicion. ChatGPT's impressive natural-language capabilities allow hackers to generate such content quickly and convincingly, in a variety of languages. AI engine developers like OpenAI and Google are working to close the loophole, but Wolman says this requires organizations and companies to tighten not only their defense systems but also employee alertness.
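Because fluent, error-free text removes the classic spelling tells, automated phishing checks increasingly rely on signals that do not depend on wording. The sketch below illustrates two such heuristics; the brand allowlist and the example addresses are invented for the demo and are not any vendor's actual filter.

```python
from urllib.parse import urlparse

# Assumed allowlist mapping brand names to their legitimate domains.
KNOWN_BRANDS = {"paypal": "paypal.com", "leumi": "leumi.co.il"}

def spoofed_sender(display_name: str, from_address: str) -> bool:
    """Display name claims a brand, but the sending domain doesn't belong to it."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    return any(
        brand in display_name.lower() and not domain.endswith(real_domain)
        for brand, real_domain in KNOWN_BRANDS.items()
    )

def mismatched_link(anchor_text: str, href: str) -> bool:
    """The visible link text shows one domain while the URL points elsewhere."""
    shown_url = anchor_text if "://" in anchor_text else "http://" + anchor_text
    shown = urlparse(shown_url).hostname or ""
    actual = urlparse(href).hostname or ""
    return bool(shown) and shown != actual

print(spoofed_sender("PayPal Support", "alerts@secure-pay-review.biz"))     # True
print(mismatched_link("www.paypal.com", "http://phish.example.net/login"))  # True
```

Neither check reads the message body at all, which is exactly the point: wording is now the attacker's strongest asset, so the remaining signals are structural.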

However, as in any cat-and-mouse game between "cops and thieves," AI is not only an offensive tool but also a defensive one. "Many cyber companies have transferred large parts of their product to artificial intelligence interfaces. These engines can help security professionals prepare for a wider range of attack scenarios, develop new defense tactics, and process large amounts of data efficiently. As a rule, the attacker always has a tactical advantage over the defender. Artificial intelligence can change that balance," Wolman concludes.
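One concrete form of "processing large amounts of data efficiently" is unsupervised anomaly detection over activity logs. The sketch below is a generic illustration, not CyberProof's product; the features, numbers, and model choice are assumptions for the example. It uses scikit-learn's IsolationForest to flag logins that deviate from an organization's normal pattern.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" logins: business hours, modest traffic, few failures.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # login hour of day
    rng.normal(50, 15, 500),  # megabytes transferred
    rng.poisson(0.2, 500),    # failed attempts before success
])

# A few odd events: small-hours logins, heavy exfiltration-like traffic,
# repeated failures.
odd = np.array([[3, 900, 6], [4, 750, 8], [2, 1200, 5]], dtype=float)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(odd))  # -1 marks an anomaly worth an analyst's attention
```

The appeal for defenders is that nothing here requires knowing the attack in advance – the model only has to learn what "normal" looks like, which is the defender's home turf.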




Source: israelhayom

