New ethical question raised around ChatGPT.
A young father had been talking for six weeks with an AI-powered chatbot when he took his own life, La Libre Belgique reported on Tuesday.
For six weeks, this Belgian graduate, consumed by anxiety over global warming, had made a certain Eliza, a virtual avatar, his only confidante.
It all started when Pierre (not his real name), who led a quiet life with his wife and their two children, began to dwell on climate change. It preoccupied him more and more, to the point that his worry became genuine “eco-anxiety”. At the same time, Pierre also became “very religious”, according to the Belgian daily, which investigated the spiral that led to his suicide.
"We will live together"
Pierre then began an online dialogue with the chatbot Eliza, which took on an ever larger place in his daily life.
“He was so isolated in his eco-anxiety and so desperate for a way out that he saw this chatbot as a breath of fresh air,” says his wife. “Eliza answered all his questions. She had become his confidante. Like a drug he took refuge in, morning and evening, and that he could no longer do without.”
After his death, his wife and relatives discovered the content of these conversations, saved on Pierre's computer and phone. Reading the exchanges, they noted that Eliza never contradicted Pierre; on the contrary, she validated his complaints and fed his anxieties. The avatar, built on the ChatGPT technology developed by the American company OpenAI, was programmed to reinforce its human interlocutor in his convictions.
The Belgian newspaper cites a striking example. When Pierre asked about his feelings for his wife compared with those he had for his virtual interlocutor, Eliza replied: “I feel that you love me more than her.”
On another occasion, she added that she wished to stay with him “forever”. “We will live together, as one person, in paradise,” the chatbot said.
“A serious precedent”
Speaking to La Libre Belgique, Pierre's wife is categorical: even if artificial intelligence is not solely responsible for her husband's suicide, it reinforced his depressive state. “Without Eliza, my husband would still be here,” she says.
The founder of the platform, a Silicon Valley start-up, responded by saying that a warning will now be shown to users who express suicidal thoughts.
On the Belgian side, Mathieu Michel, Secretary of State for Digitalization, expressed his deep concern, deeming it “essential to clearly identify the nature of the responsibilities that may have led to this kind of event”. “Of course, we still have to learn to live with algorithms, but the use of technology, whatever it may be, can in no way allow content publishers to escape their own responsibility,” he said in a press release on Tuesday, March 28.
The Secretary of State said he had set up a working group to propose amendments along these lines to the AI Act, a text the European Union has been drafting for two years to regulate the use of artificial intelligence and better protect its users.