Davos Conference of the World Economic Forum / Unsplash
According to the World Economic Forum's (WEF) Global Risks Report 2024, based on a comprehensive survey of hundreds of executives from around the world, adverse outcomes of AI technologies and cyber insecurity rank as the top two technological risk factors, and the two are closely intertwined.
Global tensions, domestic armed conflicts and economic pressure are all contributing to an increase in cybercrime, cyberespionage and offensive cyber activity, but this is not unusual. In fact, the WEF issues a similar warning every year. What has changed is that cybercriminals, now armed with AI, can dramatically expand both the number of targets they go after and the volume of attacks, driving the "cost per attack" so low that attacking becomes effectively free.
Interpol Secretary General Jürgen Stock, speaking in Davos, highlighted the challenges that AI poses to cyber defenders. "Global law enforcement agencies are grappling with the sheer scale of cyber-related crime," he said. "Fraud is entering a new dimension with all the devices the internet provides. Crime only knows one direction: up."
The technology works both ways
Chen Burshan / PR photo
Organizations that move fast enough will be able to harness the same new technologies to combat these attacks, but most companies will likely not keep pace.
As the cyber threat landscape changes rapidly, the WEF warns of a growing gap between governments and organizations that can deal with AI-powered cyber attacks, and those that cannot.
But will these warnings prevent, or even slow, the adoption of artificial intelligence by organizations? Very doubtful.
So far, AI is seen as the most promising technology by business leaders.
A recent survey by Ernst & Young found that nearly all (99%) of the financial services leaders surveyed reported that their organizations use artificial intelligence in some form, and all are already using, or planning to use, generative AI.
However, CEOs are not blind to the risks. A survey conducted by PwC found that 77% of CEOs think generative AI may increase the risk of cyber attacks.
Regulation? Yes, please
If a new technology is both promising and frightening, as the automobile, the airplane and electricity once were, shouldn't governments regulate it?
Having gone through several technological cycles over the last 150 years, we know it is wise for government agencies to be involved in the adoption of new technologies.
For example, no one believes that autonomous vehicles should just start roaming unsupervised.
These vehicles undergo intensive testing under the watchful eye of the relevant regulator to ensure that they do not cause unnecessary dangers to people and the environment.
The same goes for new drugs, new airplane models, and new pesticides.
Why should AI be different?
Many believe that its adoption should be regulated.
"This is the most powerful technology of our time," said Arati Prabhakar, director of the US White House's Office of Science and Technology Policy, at an event in Davos. She added: "The US sees AI as something that must be managed. The risks must be managed."
The European Union was the first to respond, with the Artificial Intelligence Act it passed last month.
The regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values.
The US is not far behind: the Biden administration issued its own executive order on the safe, secure and trustworthy development and use of artificial intelligence back in October 2023.
Technological or military race?
The AI race could easily turn into a military arms race.
It was recently reported that the American Air Force is testing new artificial intelligence software to fly its advanced drones, and that the American company OpenAI is working with the Pentagon on a number of projects, including the development of cybersecurity capabilities. But AI can also serve the opposite purpose: building bridges between the US and its adversaries.
It was recently reported by the Financial Times that during 2023, major players in the US AI industry engaged in discreet discussions with Chinese AI experts, possibly in initial attempts to establish guidelines for safe AI development.
Currently, China lags behind the US in the development and adoption of artificial intelligence, and with its government setting an ambitious goal of becoming a global AI leader by 2030, such guidelines could prevent it from using the technology recklessly. Jeff Wong, global innovation director at Ernst & Young, said that "global cooperation is required to determine common values and research policies, and common legal frameworks are needed to keep up with the technology."
So is AI good or bad?
Everyone seems to agree that whatever the answer, technology is here to stay.
It is already affecting business, it is starting to affect cybersecurity, and it will soon have military and international implications.
The WEF has done an excellent job highlighting the importance of this issue, while driving initial international discussions on ways to curb the risks inherent in AI.
Chen Burshan is CEO of Skyhawk Security.