
When in war, AI will decide who survives

2022-04-04T19:55:37.375Z




By Alessio Jacona *

What happens if AI decides who lives and who dies in a war?

What for the moment is still only a theoretical reflection, or an ethical dilemma, could soon become a real problem affecting every aspect of a conflict: from attacks on the enemy to the management of the wounded in field hospitals.

AI in war

When it comes to attacking, artificial intelligence already plays an important role: the Ukrainian Armed Forces, for example, are already using the Turkish-made Bayraktar TB2, an unmanned tactical aerial vehicle (i.e. a drone), against the Russian army. It can carry anti-tank missiles and lands and takes off by itself thanks to AI, but it still requires human intervention to activate its weapons.

In short, we need someone to pull the trigger.

The use of the Lancet, a Russian "kamikaze drone" designed to attack tanks, vehicle columns or troop concentrations, is different: it flies autonomously until it detects a pre-set target, then crashes into it and explodes.

Already used in Syria, it is essentially a stray but intelligent bullet, always waiting to strike.

More generally, artificial intelligence is already used in what we call information warfare (or infowar), which revolves around the use of news, true or, more often, false and artfully crafted, to achieve strategic military objectives, steer public opinion, and build or crack consensus.

Today the propaganda machine can in fact already count on generative adversarial networks (or GANs), a machine learning technique in which two antagonistic neural networks essentially fight and train each other: one generates fake content while the other tries to detect it, until the fakes become indistinguishable from real data.

The result is increasingly credible fake text, audio and video: real media bombs that explode in the information landscape and spread virally, infecting consciences and clouding everyone's judgment.
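To make the adversarial mechanism concrete, here is a deliberately tiny sketch of the GAN training loop described above, written in plain NumPy. Everything in it is illustrative: the "real data" is just a Gaussian, the generator is a single affine map, and the discriminator is logistic regression. Real GANs use deep networks, but the tug-of-war between the two players is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def real_batch(n):
    return rng.normal(loc=4.0, scale=0.5, size=n)

# Generator: a single affine map z -> a*z + b (parameters to learn).
gen = {"a": 1.0, "b": 0.0}
# Discriminator: logistic regression x -> sigmoid(w*x + c).
disc = {"w": 0.0, "c": 0.0}

def generate(n):
    z = rng.normal(size=n)
    return gen["a"] * z + gen["b"]

lr = 0.05
for step in range(2000):
    real = real_batch(32)
    fake = generate(32)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    for x, label in [(real, 1.0), (fake, 0.0)]:
        p = sigmoid(disc["w"] * x + disc["c"])
        err = p - label                 # gradient of binary cross-entropy
        disc["w"] -= lr * np.mean(err * x)
        disc["c"] -= lr * np.mean(err)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=32)
    fake = gen["a"] * z + gen["b"]
    p = sigmoid(disc["w"] * fake + disc["c"])
    err = p - 1.0                       # the generator wants the label 1
    grad_x = err * disc["w"]            # chain rule through the discriminator
    gen["a"] -= lr * np.mean(grad_x * z)
    gen["b"] -= lr * np.mean(grad_x)

print("generator output mean:", np.mean(generate(10000)))
```

After training, the generator's output mean drifts toward the real data's mean: the forger has learned from the detective. Scaled up to deep networks and media data, this is the dynamic behind deepfakes.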

AI and triage of battle wounded: making decisions "In The Moment"

Then there is another, much less obvious context in which AI may soon find itself having to choose who gets a chance to live and who does not: managing the triage of the wounded in battle.

In war, medical personnel work in a constant state of emergency, receiving the wounded while continuously making difficult choices, setting intervention priorities to save lives based on a series of cold parameters: the severity of the patient's condition, the estimated chance that the wounded person will survive, and the availability of means and tools to intervene and provide treatment.
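The actual algorithms being researched for this are not public, so purely to illustrate the kind of trade-off those cold parameters imply, here is a hypothetical triage-scoring sketch. The class, weights and threshold are all invented for the example, not taken from any real system.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    name: str
    severity: float          # 0 (minor) .. 1 (critical)
    survival_chance: float   # estimated probability of survival if treated
    resources_needed: int    # units of scarce supplies/equipment required

def triage_priority(c: Casualty, resources_available: int) -> float:
    """Combine the three parameters into a single illustrative score.

    The formula is an arbitrary choice for this sketch: a casualty whose
    needs exceed available resources scores 0 (cannot be treated now);
    otherwise priority rises with both severity and chance of survival.
    """
    if c.resources_needed > resources_available:
        return 0.0
    return c.severity * c.survival_chance

wounded = [
    Casualty("A", severity=0.9, survival_chance=0.2, resources_needed=5),
    Casualty("B", severity=0.7, survival_chance=0.8, resources_needed=2),
    Casualty("C", severity=0.3, survival_chance=0.95, resources_needed=1),
]

# Sort the queue by descending priority, given 4 units of resources.
queue = sorted(wounded,
               key=lambda c: triage_priority(c, resources_available=4),
               reverse=True)
print([c.name for c in queue])  # prints ['B', 'C', 'A']
```

Even this toy version shows why the choices are so fraught: casualty A is the most severely wounded yet ends up last, because the score weighs survival odds and resource limits against severity. Any real system would have to justify exactly such trade-offs.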

Doctors often have only a few moments to decide who to rescue first, drawing on their experience and making choices that carry enormous responsibility, which is why the Defense Advanced Research Projects Agency (DARPA), the innovation arm of the US Department of Defense, has decided to step in.

The project, called "In The Moment" (ITM), will be launched shortly and aims to develop an AI-based technology that makes quick decisions in stressful situations using algorithms and data.

The basic idea is that removing human biases can save lives: the project will develop and evaluate trustworthy algorithmic decision-makers for mission-critical Department of Defense (DoD) operations, so that they become capable of replacing humans in decisions that have no single, unambiguous answer.

The ITM project is only in its infancy and will probably take time to yield results usable in warfare, but there are already those who voice strong ethical concerns about using algorithms to make life-or-death choices.

Especially considering that AIs have already shown they are not immune to prejudice, which they absorb from the humans who program them and who produce the data they are trained on.

* Journalist, innovation expert and curator of the Artificial Intelligence Observatory ANSA.it

Source: ansa
