History teaches us time and time again that every new tool that humanity receives contains dark dangers (Photo: Shutterstock)
The web is abuzz over the innovations and capabilities that the new AI engines have been demonstrating lately.
Almost all of us have already experimented with the various free tools on the web, whether it is elaborate associative art, period-style photos, or polished videos edited without any human intervention at all.
All of this can be achieved simply by typing a few lines of text into a familiar engine.
Some go even further and use these engines to assist with programming and problem-solving.
Some of the free services handle this quite successfully; others are not quite there yet.
But along with all this good, there are people whose job it is to stop and think about the dangers inherent in these new capabilities - because history teaches us, time and time again, that every new tool humanity receives contains dark dangers.
This is the nature of the world: there will always be those who probe for the limits that can be broken with the new tools, exploiting the technology for personal gain or to promote less-than-savory agendas.
Information gathering and session theft
Already in December of last year, voices on the Internet began pointing out the cybercrime capabilities of the AI engines: on the positive side, the creation of new encryption tools; on the negative side, the creation of various malicious programs that can extract information from users with almost zero effort.
The capabilities demonstrated showed how simple AI-generated software could traverse files and gather information about end users.
Other software was able to hijack sessions and plant malicious files on users' machines.
Both of these attack types are information-security risks well known to the cyber community, but the feeling is that such risks will now spring up like mushrooms after the rain, and will continue to evolve, grow smarter, and challenge what we know.
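Session theft of the kind described above typically relies on stealing a user's session token, often through script injection. As a minimal, illustrative sketch (the class name, cookie attributes, and lifetime below are my own choices, not anything from the article), here is the standard defensive pattern in Java: generate an unguessable token and mark the session cookie so that page scripts cannot read it and it only travels over HTTPS.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionCookies {
    // Generate a cryptographically random session token that cannot be guessed.
    static String newSessionToken() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Build a hardened Set-Cookie header value:
    //  - HttpOnly blocks JavaScript access, mitigating theft via injected scripts;
    //  - Secure restricts the cookie to HTTPS connections;
    //  - SameSite=Strict stops the browser from sending it on cross-site requests;
    //  - Max-Age limits how long a stolen token stays useful.
    static String setCookieHeader(String name, String value) {
        return name + "=" + value
                + "; Path=/; HttpOnly; Secure; SameSite=Strict; Max-Age=1800";
    }

    public static void main(String[] args) {
        System.out.println(setCookieHeader("SESSIONID", newSessionToken()));
    }
}
```

None of these attributes makes session hijacking impossible, but together they remove the easiest avenues that the AI-generated malware described above was shown to exploit.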
All these capabilities are still quite basic, and there is no unknown risk here yet; but if we combine everything that has been demonstrated so far with the ability to create very convincing characters, alongside the AI software's constant learning and development, we may be in for a surprise very soon.
I can't help but think about the world of identity theft, which will soon become especially sensitive to all this, particularly given the advances in facial recognition, fingerprint recognition, and the like.
It is important to remember that most identity-verification mechanisms rely on an image or a voice (or both together) as the "something you are" factor among the three classic factors of information security - something you know, something you have, and something you are.
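The three-factor principle can be sketched in a few lines of Java (the class and method names here are illustrative, not from any real system). The point the article makes falls out of the model: a face scan and a voice print both belong to the same "something you are" category, so once AI can convincingly fake both, a check that counts them as two factors is really resting on one.

```java
import java.util.EnumSet;
import java.util.Set;

public class MultiFactorCheck {
    // The three classic factor categories of information security.
    enum Factor { KNOWLEDGE, POSSESSION, INHERENCE } // know / have / are

    // Require evidence from at least two *distinct* categories.
    static boolean isVerified(Set<Factor> presented) {
        return presented.size() >= 2;
    }

    public static void main(String[] args) {
        // Password (knowledge) + face scan (inherence): two categories, verified.
        System.out.println(isVerified(EnumSet.of(Factor.KNOWLEDGE, Factor.INHERENCE)));
        // Face + voice are both inherence: a single category, not verified -
        // exactly the weak spot once faces and voices can be synthesized.
        System.out.println(isVerified(EnumSet.of(Factor.INHERENCE)));
    }
}
```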
It will also be interesting to see how these developments affect our daily lives and the information-security threats we are familiar with (and those we are not yet).
How long will it take until those who once "raised" fake identities on paper alone can give them a real identity in the virtual world, and perhaps even open a bank account in their name in applications that require face and voice recognition?
Maybe it's time to harness the existing capabilities to get ahead of the threat - prevention before the cure.
Or at least to keep in mind that every use of the free capabilities "feeds" the engines with more information.
Maybe that should be balanced with "good" information.
Ofir Oren-Namder is the head of the Java team in the technology division of Bank Hapoalim
from CodeReview - the magazine of the technology and computing division of Bank Hapoalim