
Why (and why not) we should say “thank you” or “good morning” to an AI

2024-04-03T04:20:34.049Z

Highlights: Some experts see little value in being kind to non-conscious entities. Others argue that the way we interact with artificial intelligence could influence the quality of its responses. The debate divides into two fundamental areas: the ethical and the practical. A further question is whether being polite to a machine is a hindrance or a benefit. Machines need clarity and the imposition of restrictions; expressions like “please” and “thank you” only add superfluous information, says Enrique Dans, professor of Innovation and Technology at IE Business School.


While some experts see little value in being kind to non-conscious entities, others argue that the way we interact with artificial intelligence could influence the quality of responses.


Having good manners and being polite requires a calm and balanced mood. There are times when optimism overflows so much that we not only greet the neighbor and the newspaper vendor downstairs but, before closing the laptop, thank ChatGPT for having helped us at work and, why not, wish it a good day. The idea that we should show courtesy toward a machine is as curious as the fact that machines are programmed to behave politely toward users. Of course it is easier for them, because they never have a bad day. But is there any point in being nice to an AI? Or in not being?

In 1996, researchers Byron Reeves and Clifford Nass developed the concept of the “media equation.” The term suggests that people, often without realizing it, interact with technological systems, such as computers and televisions, as if they were human beings. Together they carried out several experiments with revealing results.

In one of them, for example, participants worked on a computer and were then asked to evaluate its performance. Curiously, when the evaluation was carried out on the same computer they had worked with, the ratings tended to be more positive, as if participants avoided speaking badly about the computer in its presence. In another experiment, a computer praised a group of people for performing a task well. These participants gave a higher rating to the machine that had praised them, even knowing that the praise was generated automatically.

Since then, numerous studies have shown that, on the one hand, humans tend to anthropomorphize machines and, on the other hand, when a technological system imitates human qualities, such as courtesy, users perceive better performance on its part. This inclination, however, does not resolve the debate about the desirability of being kind to technology.

Ethics and utility

At first, the discussion centered on interactions with voice assistants such as Siri and Alexa (including the question of why they always have a female voice and name) and has recently extended to advanced language models such as ChatGPT, Gemini and Claude. The debate divides into two fundamental areas: the ethical and the practical. On the one hand, there is the question of whether it is appropriate to be polite to a technological system and whether it makes sense to consider entities like ChatGPT moral subjects. On the other, whether courtesy in our treatment of such systems influences their operational efficiency.

The first part of the question is reminiscent, at least at first glance, of the long-standing ethical discussion about the moral status of animals and how we should interact with them. However, there are many biological and cognitive differences between animals and machines. Unlike technological systems, many animals have nervous systems that allow them to experience pain and pleasure, indicating that they can be positively or negatively affected by the actions of others. Additionally, many show signs of having some level of consciousness, which implies a subjective experience of the world.

These beings can also experience emotions that, although different from human ones, reveal an emotional complexity that affects their well-being and behavior. Since machines do not possess these biological and emotional capabilities, they lack the necessary criteria to be considered similar to animals, let alone human beings.

Better answers or waste of time?

Enrique Dans, professor of Innovation and Technology at IE Business School, is not against being kind to machines. What he does stress, however, is the importance of knowing that a machine, having no perceptions, emotions or consciousness, cannot understand or value the courtesy or gratitude expressed. “No one is against being polite to them, but being polite to a machine has little value, because it can't perceive it,” he says.

One of the arguments often raised against this view is that future generations of AI could reach levels of complexity that allow them to develop consciousness or even emotions. “Someone told me, as a joke, that they prefer to say please and thank you, just in case in the future we have to get along with artificial intelligence systems. That, honestly, belongs in the realm of science fiction, because right now we are very far from reaching that point,” says Dans.

The next aspect of the debate is whether being polite to a machine is a hindrance or a benefit when interacting with it. Dans emphasizes the importance of understanding that behind each machine response there is a complex system of data processing, patterns and algorithms, not a human being endowed with emotions and intentions. “Trying to treat an algorithm politely is anthropomorphizing it, and anthropomorphizing an algorithm is inappropriate. Machines need clarity, the definition of a goal and the imposition of restrictions. Expressions like ‘please’ and ‘thank you’ only add superfluous information that the system must process, unnecessarily consuming computing resources,” he argues.

Julio Gonzalo, director of the UNED research center in Natural Language Processing and Information Retrieval, maintains that, in certain systems, users may in fact receive better-quality responses if they are more polite. This is not because the machine processes emotions or feels more inclined to offer good service when it feels respected. The real explanation is that, when a user communicates politely, their messages more closely resemble the examples of polite interactions the assistant analyzed during training. Since those examples are usually associated with better-quality responses, politeness can indirectly improve the quality of the responses obtained.

Gonzalo explains that when using certain language models such as ChatGPT, Gemini or Claude, it is crucial to keep in mind that they are systems “very sensitive to the formulation of the query, to surreal extremes.” Seemingly minor changes to the structure of a command, such as punctuation or the inclusion of certain motivational phrases, can have a dramatic impact on the effectiveness of the response. “Separating with a colon or a space or using more or fewer parentheses in the format can make the accuracy of the answer jump from 8% to 80%,” he says.

Adding “take a deep breath and think step by step” has also been shown to greatly improve the accuracy of responses that require reasoning. This happens not because the model “thinks” logically, but because these instructions lead it to response patterns that in its training were associated with greater clarity and detail. Even statements that should not influence the response, such as indicating the time of year (“it is May” or “it is December”) or complimenting the model (“you are very intelligent”), can alter the quality of the responses. “The height of surrealism was reached when it was recently discovered that mathematical answers improve if the system is asked to express itself as if it were a Star Trek character,” concludes the expert.
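One way to picture the prompt sensitivity Gonzalo describes is to generate systematic variants of the same question and compare the model's answers across them. The sketch below is not from the article; it only builds the prompt strings (the function name `build_prompt` and the exact phrasings are illustrative assumptions), leaving the actual model call to whichever chat API the reader uses.

```python
def build_prompt(question: str, polite: bool = False,
                 step_by_step: bool = False) -> str:
    """Assemble a prompt, optionally adding the courtesy and
    'think step by step' phrasings discussed in the article."""
    parts = []
    if polite:
        # Polite framing resembles the courteous exchanges seen in training data.
        parts.append("Please,")
    parts.append(question)
    if step_by_step:
        # Phrasing reported to steer models toward more detailed reasoning.
        parts.append("Take a deep breath and think step by step.")
    prompt = " ".join(parts)
    if polite:
        prompt += " Thank you!"
    return prompt

# Three variants of the same question, ready to send to a model
# and compare answer quality.
variants = [
    build_prompt("What is 17 * 24?"),
    build_prompt("What is 17 * 24?", polite=True),
    build_prompt("What is 17 * 24?", step_by_step=True),
]
```

Running each variant several times and scoring the answers is the usual way to check whether such surface changes really move accuracy, as the UNED examples suggest.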

Source: El País
