
Employee transfers millions to “fake bosses”: How to protect yourself from AI rip-offs





As of: March 1, 2024, 7:30 a.m.

By: David Holzner


Deepfakes appear deceptively real. An employee in Hong Kong fell for an AI rip-off and transferred millions. Here is how you can protect yourself from scammers.

Technological progress through artificial intelligence (AI) brings many advantages.

At the same time, however, the technology can also be misused.

So-called deepfakes are being used more and more frequently for fraud.

However, what happened in Hong Kong at the end of January 2024 falls into the “horror scenario” category.

According to a report by the Hong Kong Free Press (HKFP), fraudsters defrauded a multinational company of around $26 million by impersonating senior executives using deepfake technology.

But what exactly happened?

And how can you protect yourself from such a scam?

An employee was in a video conference in which none of his conversation partners were real. (Symbolic image) © Pond5 Images/IMAGO

A finance employee in China's financial capital received an email invitation to a video conference, CNN reported.

The sender was apparently the chief financial officer of the multinational company, based in the United Kingdom.

The employee was tasked with handling multiple transactions worth millions of dollars, CNN said.

The employee was initially suspicious and thought it was a phishing email.

But when he entered the video conference, he recognized his colleagues by their appearance and their voices.

His initial doubts were gone.

The employee followed the instructions and subsequently made 15 transactions totaling 200 million Hong Kong dollars (the equivalent of around 24 million euros) to various bank accounts, according to the HKFP.

As it later turned out, the colleagues in the video call were not real.

By the time the employee realized he was a victim of a scam, it was already too late.

The money was gone.


Artificial intelligence: Fraud with deepfakes is on the rise

Scams using AI-generated voices and images are on the rise, Orf.at reported.

Whether faked politicians' speeches or scams on dating apps: the tactics are becoming more and more brazen.

Telephone fraud with AI-altered voices is also becoming increasingly common.

Voice cloning is the name of the technique in which a deceptively real-sounding imitation is created from snippets of a real voice.

If the AI is fed enough material, it can then read out any text in the cloned voice.

The modification of image and video material by AI now appears deceptively real.

It takes a very close look to expose fake photos.

But there is often not enough time to take a closer look at pictures.

As in the case of the Chinese employee, you may also feel some pressure because you think you are speaking to a superior.

It can quickly happen that fake material goes unnoticed.

What are deepfakes?

The term deepfakes refers to manipulated video, audio or text content that is created using processes based on artificial intelligence (AI).

Due to the use of deep neural networks, such methods are colloquially referred to as “deepfakes”.

Thanks to advances in AI technology, deepfakes can be created in high quality with little effort and expertise.

Common deepfake techniques include manipulating faces in photos and videos (face swapping) and faking voices (text-to-speech, voice conversion).

Source: Federal Office for Information Security (BSI)

AI rip-off: How to protect yourself from deepfakes fraud

When reading such a scary message, many questions arise: What is real?

What is fake?

Can you even trust your eyes and ears anymore?

In the age of artificial intelligence, nothing is as it seems.

You have to expect that callers are fake, that images are fake and that even videos were created with AI.

But how can you recognize deepfake fraud?

Here are some tips:


  • Keep calm and ask questions:

    As with phishing or fraud emails, you should not comply with demands immediately, no matter how genuine they seem.

    Even if the instructions come directly from your supposed superior, it is advisable not to act rashly but to remain calm.

    The best thing to do is to consult your superiors or try to reach the person in question through other channels.

  • Look closely:

    Try to pay attention to unusual details.

    Choppy speech or an unnatural-sounding voice can be an indication that it is a fake.

    Unnatural mouth movements when speaking or flickering in the image also indicate a deepfake.

  • Ask questions:

    If someone asks you to transfer money, it is advisable to be skeptical.

    If your boss calls you and asks for a transfer, the first thing you should ask yourself is whether all of this is even plausible.

    Consider asking the person questions that only they can answer.

In the case of the Chinese employee, police say the deepfake videos were recorded in advance and did not contain any dialogue or interaction with the victim.

The fraudsters could probably have been thrown off course by asking questions or interrupting the person speaking.

Checking back with the relevant supervisor might also have helped.

In general, it's better to ask questions than to look stupid later.

Despite the numerous dangers that the misuse of AI brings with it, one should not turn away from the technology completely.

There are ways to use artificial intelligence sensibly.

Source: merkur
