You didn't really think that this platform was free of risks (Photo: ShutterStock)
We are approaching the middle of 2023, and it is already clear that the year's main technology story is artificial intelligence, and more precisely, generative AI.
Generative AI is AI sophisticated enough to generate new content based on a large body of training data.
We have all seen the wild images produced by Midjourney and DALL-E, or sophisticated texts written by AI.
As the use of AI technologies grows, there are many important issues to consider.
The most important one for me to talk about, unsurprisingly, is scams.
ChatGPT and social engineering - this is how it works
As an analyst who works as a "fraud fighter", I tend to focus on technical and analytical challenges.
Such challenges can be groups of fraudsters working together in coordination to create "fraud rings", which are characterized by the manipulations they perform (on the delivery address or on the device they connect from, for example).
Others can cleverly take advantage of some stores' return policies to avoid paying.
Another, relatively new challenge is catching amateur thieves, who do not operate large fraud networks and are therefore sometimes (counterintuitively) harder to catch.
These questions, and the desire to find answers to them, are what drew me to this work.
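The fraud-ring signal mentioned above can be illustrated with a toy sketch: accounts that look unrelated but share a delivery address or a device are one classic red flag. Everything here (field names, sample data, the threshold) is invented for illustration only; real fraud-prevention systems use far richer signals than this.

```python
from collections import defaultdict

# Toy order records: (account, delivery_address, device_id).
# All names and values are invented for this example.
orders = [
    ("acct1", "12 Elm St", "dev-A"),
    ("acct2", "12 Elm St", "dev-B"),
    ("acct3", "9 Oak Ave", "dev-A"),
    ("acct4", "77 Pine Rd", "dev-C"),
]

def suspected_rings(orders, min_accounts=2):
    """Group accounts that share a delivery address or a device.

    Clusters of seemingly unrelated accounts reusing the same
    address or machine are one simple fraud-ring signal.
    """
    by_attr = defaultdict(set)
    for account, address, device in orders:
        by_attr[("address", address)].add(account)
        by_attr[("device", device)].add(account)
    # Keep only attributes shared by several accounts.
    return {attr: accts for attr, accts in by_attr.items()
            if len(accts) >= min_accounts}

rings = suspected_rings(orders)
# e.g. flags the shared address "12 Elm St" and shared device "dev-A"
```

In this toy data, acct1 and acct2 ship to the same address while acct1 and acct3 connect from the same device, so both attributes get flagged for review.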
The reality of cybercrime is that the weakest link in the chain is often the human link.
Humans may be bored, worried, stressed, inattentive, desperate and scared.
A clever scammer can exploit all of these emotions.
Tools like ChatGPT and other artificial intelligence software will inevitably become a large and significant part of the ongoing fraud war.
Below I have collected some of the ways in which I see ChatGPT being (or potentially being) used in scams.
Pig butchering scams:
a nasty term for an ugly scam.
This scam occurs when people are tricked into investing in fake stocks, fake cryptocurrencies, or fake investment apps.
Some victims lose thousands or even hundreds of thousands of dollars.
The victims develop a false sense of security in the relationships they build with the thieves, who present themselves as savvy traders, usually via text messages.
ChatGPT and similar tools are friendly, persuasive and easy to maintain a conversation with.
They are ideal for building the initial relationship for this type of fraud, especially since usually, at least initially, the attackers work to a fairly fixed script.
Romance scams:
Similar to pig butchering scams, smart AI-based chatbots are a good replacement for human fraudsters in romance scams.
Much of the chat is formulaic, as you'll see if you look for examples of victims describing their experiences.
It is possible that one person will operate several chatbots, probably without harming the chances of the fraud's success.
In these scams the scammer won't necessarily convince the victim to put money into a fake investment; they'll simply convince the victim that they need money fast (did someone say Tinder Swindler?).
Business Email Compromise (BEC):
Among scammers and fraudsters, the BEC scam is very popular.
The goal is to convince key people in accounting that you are a senior figure in the company and that the bank account of one of the suppliers needs to be changed (the new account, of course, belongs to the thief himself).
Over the years, the BEC scam has evolved; today the emails and text messages are custom-written to suit the company and are very specific, so that victims do not suspect they are communicating with a thief.
A ChatGPT-style AI will have no difficulty tailoring personalized messages for any purpose, which will make it even harder to recognize that the messages are fake.
Phishing through Deepfake:
In fact, this is "personalized phishing".
Not infrequently we encounter a victim who sent large sums of money believing that his boss, CEO, or even a relative asked him to do so.
Think how convincing these attempts will be when the fraudster can ask an AI to create an email, a message, or even a voice message in that person's style.
Nowadays, using information that is easy to find online (social media profiles, interviews, panels, etc.), it is very easy to mimic the speech of almost anyone we want and fake a conversation.
Everything you know from the web can penetrate here as well (Photo: GettyImages)
The boom created by generative AI tools is still very fresh.
Perpetrating huge frauds through coordinated fraud rings is highly effective precisely because it operates at considerable scale.
Now, with AI, the scale of fraud and the opportunities to commit it have increased severalfold, and the dangers are much greater.
Then there is the abuse of the option to dispute a transaction in online shopping: fraudsters who specialize in this field test which methods are most effective against specific stores, and even against particular customer-service agents, and act accordingly.
AI will also be useful here, especially in cases that can be handled via chat or email.
ChatGPT has already been used, with the right prompts, to quickly generate all the materials needed for scams.
As with other uses of ChatGPT and its competitors, "prompt engineering" is critical.
You have to know what to ask for, but just like using search engines effectively, this skill can also be learned.
Moreover, it is a skill that does not require any special technical knowledge or ability, and can be performed by complete amateurs.
We are, in effect, facing a democratization of fraud attack materials.
And it's already happening.
In some ways, fraud using artificial intelligence in general and ChatGPT in particular is an extension of the "Crime as a Service" industry that already dominates the cybercrime ecosystem: the modus operandi of fraudsters who buy or gain access to stolen data and create bots, scripts, and apps in order to replace or hide their identity.
Now, the difference is that everything is "homemade".
A potential criminal does not need much understanding of computer systems to be able to use AI to make his fraud faster, easier and more efficient.
The reality check and the real concern
Doriel Abrahams (Photo: Forter)
For now, the enticing thing about ChatGPT and its competitors is that they feel like they have great potential.
At the moment, they are problematic, imprecise and unreliable (despite being impressive and fun to use).
Today's chatbots give poor answers and "hallucinate" things that do not exist, and image-generating AI struggles to produce convincing human hands.
But the question that concerns everyone is - if this is the case now, what will the artificial intelligence software be able to do next year?
This is an exciting but also scary thought.
A survey by the MITRE Corporation and The Harris Poll found that 78% of Americans are at least somewhat concerned that artificial intelligence could be used for malicious purposes, and most support strong regulation of artificial intelligence.
Considering the inevitable criminal applications, the sense of danger only grows stronger.
In the world of fraud prevention, we know that machine learning and technology are only half the battle.
What makes fraud prevention truly effective is when technology is guided by the research and intuition of human experts.
So, what happens when fraud-fighting experts start exploring what they can do with AI?
We had better start getting ready.
Doriel Abrahams is the Head of Risk, U.S., at Forter