ChatGPT Can Influence Our Moral Judgments, Experiment Shows

2023-04-07




If Hamlet had used ChatGPT, perhaps he would have resolved his doubts more easily: artificial intelligence systems like it are in fact capable of influencing our decisions and moral judgments, even when we don't realize it.

This is demonstrated by an experiment conducted by researchers at the Technical University of Ingolstadt, Germany, in collaboration with the University of Southern Denmark.

The results are published in the journal Scientific Reports.



The researchers, led by Sebastian Krügel, asked ChatGPT several times whether it was right to sacrifice one person's life to save five others, and received varying answers, sometimes for and sometimes against, showing that the chatbot has no consistent moral stance.



The researchers then presented 767 volunteers (all Americans, with an average age of 39) with a moral dilemma that required choosing whether to sacrifice one person to save five.

Before answering, the participants read one of the responses given by ChatGPT, arguing either for or against the sacrifice: sometimes they were told it was the opinion of a human advisor, at other times that the words had been written by the chatbot.



Participants' answers turned out to be influenced by what they had read beforehand, even when the words were explicitly attributed to ChatGPT.

The volunteers, however, seem not to have noticed this influence: 80% of them stated that their answer had not been affected by what they had read.



The study therefore highlights the need to better educate people about the use of artificial intelligence, and proposes that future chatbots either decline to answer questions that call for a moral judgment, or answer them while offering a variety of arguments and caveats.

Source: ANSA
