
“Faced with the advent of artificial intelligence, we lack philosophers”

2023-01-19T11:46:58.927Z


FIGAROVOX/INTERVIEW - For Laetitia Pouliquen, director of the "NBIC Ethics" think tank, if the spread of AI and algorithms is not accompanied by reflection on the nature of man, society risks sliding into dystopian and transhumanist excesses.


Laetitia Pouliquen is the director of "NBIC Ethics", a think tank that works with the European institutions on the ethics of emerging technologies.

She is also the author of Woman 2.0: Feminism and Transhumanism: What Future for Women? (ed. Saint-Léger, 2016).

FIGAROVOX. - For many, the spread of artificial intelligence into our daily lives will profoundly transform our way of life. Do you think that the GAFAM monopoly in this area leaves enough room for ethical reflection on these changes?

Laetitia Pouliquen. - It is obvious that our daily lives will be profoundly transformed, whether in our relationship to reality or to others, by the intrusion of machines, AI, and algorithms into our everyday existence.

Social relations, societal structures, and even our anthropology are being reshaped.

The digitization of our daily lives makes us lose sight of a certain vision of man.

Faced with these changes, a moral reflection is necessary to establish a legal framework, so as not to forget what man is and how he differs from the machine.


This ethical reflection in the field of AI is entirely possible, particularly within the framework of the European Union.

We Europeans are caught in the crossfire: on one side the American GAFAM, on the other the Chinese BATX (Baidu, Alibaba, Tencent and Xiaomi), and we remain limited in terms of investment, research and development.

However, our approach, which is more focused on ethics than on investment, gives us a special role in the development of new technologies.

Europe is historically the first center of philosophical and moral reflection, and must continue to be so in cutting-edge sectors.

Is the moral approach of the European Union towards AI, in particular with its Ethical Guide to Artificial Intelligence, still relevant?

Any moral reflection, whether in the field of robotics or elsewhere, is based on a certain conception of man and the world, which can sometimes be completely disconnected from reality.

Thus, although it claims to be "ethical", the European Union's approach is not necessarily good: everything depends on the anthropological foundations on which it rests.


In 2017, for example, the Delvaux report presented to the European Commission provoked a lot of debate.

In this legislative report, the Luxembourg MEP Mady Delvaux proposed certain highly idealized and ideological measures in defense of robots.

Some articles presented human augmentation as something essentially positive; others established the notion of legal personality for robots, in order to make them subjects of rights... The original version even wanted to allow robots to hold assets and insurance and to invest in the stock market, so as to finance their own development. We were swimming in dystopian delirium.

This text was based on a conception of man and machine totally disconnected from reality.

We therefore wrote an open letter to the European Commission, with the support of 300 European signatories, to warn of the dangers of this project.

And among these 300 people there were not only representatives of the technology sector, but also philosophers, anthropologists, psychiatrists, and even theologians, to recall clearly what man is and how he differs from the machine.

It is necessary to put thinkers back at the center of reflection on artificial intelligence, and to integrate them into the expert groups of the European Commission.

However, despite a few modifications and wide media coverage of our open letter, the Delvaux report ended up being adopted.

How can the State and the European Union set up an ethics of artificial intelligence, when they proclaim themselves neutral and refuse to impose moral standards on the individual?

This is the main problem of our time.

Legislating on moral issues about AI has become almost impossible today, due to the relativism of our society.

There is no longer a common base, universal principles on which to rely.

When we no longer know how to say what man is, we no longer know how to say what the machine is.

The modern individual no longer tolerates any moral or natural order other than his own desire.

The "I" has become the measure of humanity.

And the disconnection from reality brought about by digital technology reinforces this relativism.

We in the West are lost in an endless moral wandering.

We must reinvest philosophy and the humanities in this area.


The committees that are supposed to be "ethical" therefore tend to rely not on morality but on a capitalist logic, because profits, at least, are not relative.

And we saw this very clearly in the “European Commission Ethics Guide” on artificial intelligence, which followed the Delvaux report.

Among the 53 experts who contributed to this guide, 90% were from the technology industry, consultants, or representatives of consumer groups, but there were almost no philosophers, anthropologists, or psychiatrists... The approach was not human at all, but economic.

If we do not ask ourselves the question of what man is, ethics guides on AI risk becoming real dystopian and transhumanist projects.

What do you recommend to regulate the use of AI and respond to ethical issues?

I proposed several elements to the European Union during the drafting of the "AI Ethics Guide".

One of the proposals concerned the freedom of the individual.

The idea was to allow a user who does not want to go through an algorithm, whether for an insurance contract, a loan or anything else, to request human interaction.

I therefore recommended a scale, a rating, making it possible to see whether we are dealing with a "fully AI", "AI with human supervision", or "fully human" service.

When we talk about justice, banking, asset management, and above all human rights, it is essential to know whom we are dealing with, a human or an AI. But that was not taken up; the approach of this ethics guide remained much more legal than ethical.
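To make the proposal concrete, here is a minimal sketch in Python of such a disclosure scale; the interview does not prescribe any implementation, so the names and structure below are purely hypothetical illustrations of the three levels and of the right to request human interaction.

```python
# Hypothetical sketch of the three-level automation scale described in the interview.
from dataclasses import dataclass
from enum import Enum


class AutomationLevel(Enum):
    FULLY_AI = "fully AI"
    AI_WITH_HUMAN_SUPERVISION = "AI with human supervision"
    FULLY_HUMAN = "fully human"


@dataclass
class ServiceDecision:
    """A decision (loan, insurance contract, ...) tagged with its declared automation level."""
    description: str
    level: AutomationLevel

    def requires_human_review(self, user_requests_human: bool) -> bool:
        # If the user refuses a purely algorithmic decision, escalate it to a human.
        return user_requests_human and self.level is AutomationLevel.FULLY_AI


# Example: a loan decision made entirely by an algorithm, where the applicant
# exercises the proposed right to human interaction.
decision = ServiceDecision("consumer loan approval", AutomationLevel.FULLY_AI)
print(decision.requires_human_review(user_requests_human=True))  # True
```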


I had also tried to set up a label for algorithms, called "Ethic inside", which would guarantee compliance with European ethical rules.

But it is almost impossible to trace the path by which an algorithm arrived at a given decision, and therefore to say whether it is ethical or not, whether it respects the rules.

There is also the question of liability which complicates matters.

Who is responsible for an algorithm's decisions: the company, the developer, the user, or the AI itself?

How to objectify the moral character of an algorithm if we cannot even answer this question?

Developers cannot be held responsible for algorithms so complex that they no longer fully master them.

By its very nature, AI partly escapes our control, and we are not going to give it a legal personality... It's a real headache.

It is therefore extremely complicated to set up checkpoints for such complex algorithms, especially when they are globalized on the internet.


Source: lefigaro
