
"Artificial intelligence must remain a tool at the service of the doctor without making decisions for him"

2021-04-02


FIGAROVOX / GRAND ENTRETIEN - The use of artificial intelligence is revolutionizing healthcare. Physician Philippe Donnou and researcher Yann Ferguson, however, warn against the ethical drift that could result from its use ...


Philippe Donnou, vice-president of ANAMEVA, is a medical consultant for victims in Brest. He directed the White Paper on compensation for bodily injury (LEH Edition, December 2020, 214 pp.), which is also available free of charge in digital form.

Yann Ferguson is a teacher-researcher at the Catholic Institute of Arts and Crafts in Toulouse and a member of the Global Partnership on Artificial Intelligence (PMIA) at the OECD. A specialist in the transformation of work, he devotes his research to the ethics of artificial intelligence.

FIGAROVOX. - Diagnostic assistance, image interpretation, prediction: the first applications of artificial intelligence to healthcare seem formidably effective. They also make it possible to eliminate many sources of human error. What are the ethical risks that the general public overlooks or misunderstands?

Philippe DONNOU. - Artificial intelligence (AI) is indeed already an aid to diagnosis (CT scans, MRI) and to treatment, particularly in oncology. Our work addresses an aspect that is far less well known to the public but developing at a rapid pace: the entry of AI into the mechanisms used to calculate compensation for victims. The book offers a complete panorama of the ethical risks associated with this technology, because they are very concrete and visible. In compensation matters, the ethically sound configuration of an algorithm depends on the quality of the control exercised over its data in order to avoid bias: the victim and the other parties must be informed, the risk being that the damage is undervalued, sometimes to the benefit of insurers.
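To make this point concrete, here is a minimal sketch, not drawn from the interview or from any real compensation tool: every figure in it is hypothetical (an invented injury-severity scale, an invented "fair" rate, and an assumed 30% undervaluation of severe injuries in past settlements). It simply shows how a model fitted to biased historical awards quietly reproduces that undervaluation, which is exactly why the quality of the training data must be controlled and the parties informed.

    # Illustrative sketch only: all figures are hypothetical and do not come
    # from the interview or from any real compensation tool.
    import random

    random.seed(0)

    FAIR_RATE = 10_000  # hypothetical "fair" award per point of injury severity (EUR)

    def historical_award(severity: float) -> float:
        """Simulate a past settlement: severe injuries were undervalued by 30%."""
        fair = FAIR_RATE * severity
        undervaluation = 0.7 if severity >= 5 else 1.0   # assumed historical bias
        return fair * undervaluation + random.gauss(0, 2_000)  # ordinary variation

    # Training set of past decisions (severity rated on a 1-10 scale).
    severities = [random.uniform(1, 10) for _ in range(500)]
    data = [(s, historical_award(s)) for s in severities]

    # Fit a simple least-squares line: award ~ a * severity + b.
    n = len(data)
    mean_x = sum(s for s, _ in data) / n
    mean_y = sum(v for _, v in data) / n
    a = sum((s - mean_x) * (v - mean_y) for s, v in data) / sum((s - mean_x) ** 2 for s, _ in data)
    b = mean_y - a * mean_x

    severity_new = 8  # a new, severe case
    print(f"Fair compensation:      {FAIR_RATE * severity_new:>10,.0f} EUR")
    print(f"Model's recommendation: {a * severity_new + b:>10,.0f} EUR")
    # The model "objectively" recommends less than the fair amount, because the
    # undervaluation present in past decisions is now baked into its parameters.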

Yann FERGUSON. - In addition, communication about the effectiveness, or even the superiority, of these applications focuses the general public's attention on the risk of humans being replaced by machines. Yet the results of these applications are obtained in very controlled environments. Once they are placed in real-world conditions, it becomes clear that a great deal of work remains before they reach a human level of performance. On the other hand, they can already provide valuable help. This is where the lesser-known ethical issues lie: how should this human-machine relationship be organized so that it brings real progress? How can we prevent the user from trusting the machine too much or, on the contrary, from rejecting it without benefiting from what it contributes? In other words, what are the conditions for designing a trustworthy AI?

The risk of a hegemonic drift by insurers, armed with the evaluative power of algorithms, is real, to the detriment of the uniqueness and fragility of the victim.

Philippe Donnou

The risk, then, is that insurers will impose algorithms to the detriment of the doctor's necessary empathy. In this area, how does AI mark a “regression” and an industrial subjugation?

Philippe DONNOU. - Insurers are already using data: companies such as PREDICTICE are developing and selling compensation reference scales based on the statistical analysis of millions of court decisions. A project called “DATAJUST” (whose decree dates from March 2020), aimed at standardizing compensation, has been put on hold. The risk of a hegemonic drift by insurers, armed with the evaluative power of algorithms, is real, to the detriment of the uniqueness and fragility of the victim. Technoscience (algorithms) can crush empathy. “Science without conscience is but the ruin of the soul,” said Rabelais, and that must now be read in a more literal sense!


AI is there only to serve man, and the involuntary or passive submission of the medical expert to this modern instrument is an obvious risk: doctors would no longer be able to draw on the spring of a genuine emotion felt in the presence of the patient. As Professor Antonio Damasio sums it up, “Without emotions, our reasoning is biased and our simplest choices can lead to aberrant decisions.”

The first pitfall is to treat the neutrality of artificial intelligence as an argument from authority against the biases of human judgment.

Yann Ferguson

Yann FERGUSON. - The first pitfall is to treat the neutrality of artificial intelligence as an argument from authority against the biases of human judgment. Yet algorithms are not neutral: their training data come from past human decisions, and their purpose is the expression of particular interests. The second is to treat the emotional dimension as an obstacle to the doctor's forming a rational judgment, and to see AI as a remedy for this emotional burden. Yet it is now established that emotions are essential allies of reason. And AI can help build our empathy: first by offering tools for analyzing human emotions, and then by prompting the doctor to explain and argue more fully the emotional component of his judgment when faced with an AI that has none. Depending on the conditions under which these tools are set up and used, they can lead to great dangers, but also to real improvements.

Ethics is indeed an approach that consists in conducting an informed discussion by weighing conflicting arguments.

Yann Ferguson

Can physicians really become mere “controllers” of AI, and afford not to understand the machine's reasoning?

Philippe DONNOU. - The problem is that control over AI does not rest with the insurance doctor alone; we must act upstream, because without ethical safeguards AI could end up replacing human decision-making. Is that moral? Expertise without ethics is expertise without foundations.

Yann FERGUSON. - Ethics is indeed an approach that consists in conducting an informed discussion by weighing conflicting arguments. This makes it incompatible with “black box” AI systems, which produce results that cannot be explained because of the complexity of their architecture.


This is why the European Commission has introduced the notion of “high risk” applications, which cannot be reconciled with black-box AI. These applications are used in high-risk sectors, such as health, and can generate significant risks: in the health sector, we can thus distinguish planning or accounting systems from medical decision-support tools, whose consequences are far more serious.

Wouldn't it be a good thing to see a reduction in the number of “those who live off the illness of others”, as psychiatrist Édouard Zarifian put it, by eliminating the financial professions and making things easier for doctors?

Philippe DONNOU. - Whatever the cost of the assessment, as long as it is fair: the doctor, the lawyer, the insurer and the judge will all have to confront the algorithmic machinery whose output will help determine the victim's compensation. But if they are pushed aside and regarded as mere “costs”, the takeover by other actors will be total.

Yann FERGUSON. - It is common to stigmatize finance in sectors of activity that we would like to hold sacred. But that is not the role of ethics, which is not normative but applied to situations. Ethics does not set out to declare what is good and what is bad in finance, medicine or AI, but to steer the behavior of those who work in these fields towards a more just society.

The Hippocratic Oath firmly binds the doctor, who must refrain from becoming the instrument of the insurer; his probity would be at stake.

Philippe Donnou

As we can see, the risks of instrumentalization are high. Will legal and ethical safeguards suffice? Can a specific Hippocratic Oath suffice?

Philippe DONNOU. - The Hippocratic Oath firmly binds the doctor, who must take care not to become the instrument of the insurer; his probity would be at stake. Artificial intelligence must remain a tool at the service of the assessing doctor and the judge, without deciding for them. In this same area, the compensation of damages enshrined in the Badinter law (which could not have anticipated the enormous weight of data) is not compatible with an automated process. Victims' associations must be informed of this risk and must ensure, with the help of victims' advisers (doctors and lawyers), that a fair assessment is the result of a debate, of a genuine ethical exchange, between the insurer's medical adviser and the victim's medical adviser, because, as has been argued, the insurer's medical expert does not enjoy the appearance of impartiality.

Yann FERGUSON. - The Hippocratic Oath is a deontological ethic, that is to say an inner compass that guides the doctor according to principles. The current context calls instead for an ethics of discussion between different stakeholders guided by divergent interests. Deontology is centered on the individual facing a dilemma; the ethics of discussion implies an openness towards the other and the other's ethics. I believe that the problem of “fair compensation” for damage, like the future problems posed by the deployment of artificial intelligence, stems not so much from a lack of ethics on anyone's part as from a lack of shared ethics.

White Paper on compensation for bodily injury, edited by Philippe Donnou, LEH Edition, 216 pages, 7.99 euros in paper format, free in digital format.


Source: Le Figaro
