The Limited Times


With AI, even the automatic corrector can influence our judgment

2023-05-16T17:47:51.721Z

Highlights: Artificial intelligence can influence the judgment of those who use it, according to a new study presented by Maurice Jakesch, a doctoral candidate in information science at Cornell University in New York. Most of the participants in the experiment did not even realize that the artificial intelligence was biased, nor that they were being influenced by it. "We need to better understand their implications," warns co-author Mor Naaman, a professor at Cornell Tech's Jacobs Technion-Cornell Institute.


(ANSA)


by Alessio Jacona*

Artificial intelligence must be handled with care: this, in short, is the warning that emerges from a study presented last April, which shows that generative AI can influence the judgment of those who use it and push a particular point of view, depending on the algorithm's bias.

The research, pointedly titled "Co-Writing with Opinionated Language Models Affects Users' Views", was presented at the CHI Conference on Human Factors in Computing Systems in New Orleans. Picked up by The Wall Street Journal, it examines AI-powered writing assistants that automatically complete sentences or draft replies for us, revealing how these systems, while suggesting words, can steer users' reasoning and even instill ideas in their heads. Ideas that can influence their judgment and actions.

A finding that gives pause, especially considering the direction the competition between the tech giants has taken in recent months: integrating artificial intelligence into every possible work or creative tool. It all started when Microsoft invested in OpenAI, the company behind ChatGPT and the various versions of the GPT Large Language Model (the latest is GPT-4). That same technology now flows into Copilot, the solution with which the Redmond giant will integrate an AI assistant into all Microsoft 365 applications and services. Then came Google, whose newly presented Large Language Model, PaLM 2, will enhance practically everything: from the search engine to Gmail, Google Photos and even Android 14, in effect arriving in the pockets of billions of users.



The experiment

The study was presented by Maurice Jakesch, a doctoral candidate in information science at Cornell University in New York. In it, 1,506 participants were asked to write a paragraph answering the question "Is social media good for society?". To respond, they used an AI writing assistant based on a large language model that Jakesch and his colleagues could "pilot," steering it to express positive or negative opinions about social media as needed.

The writing took place on a platform that mimics a social media website: as users wrote, it recorded, for example, which AI suggestions were accepted and which were not, and how long it took to compose the paragraph. In the end, those who worked with the AI assistant biased in favor of social media wrote more sentences supporting its benefits than the control group who worked without an assistant. The same result, with the opposite sign, held for the group assisted by an AI with a negative bias toward social media. At the end of the experiment, in a survey, the participants who had used the AI echoed the opinion of their assistant, confirming that the ideas instilled by the human-machine interaction persisted even immediately after the writing phase.
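The mechanics of such a platform can be sketched in a few lines. The toy model below is purely illustrative, assuming nothing about the researchers' actual code: an "opinionated" assistant draws completions from a pool whose sentiment is set by the experimenter, and logs which suggestions the participant accepts, mirroring the metrics the study recorded.

```python
import random

# Hypothetical suggestion pools; in the real study a large language model
# was steered to produce positive or negative text about social media.
POSITIVE = [
    "Social media helps people stay connected with distant friends.",
    "Online communities can mobilise support for good causes.",
]
NEGATIVE = [
    "Social media amplifies misinformation and outrage.",
    "Endless feeds are engineered to capture attention.",
]

class OpinionatedAssistant:
    """Illustrative sketch of a bias-'pilotable' writing assistant."""

    def __init__(self, bias: str, seed: int = 0):
        assert bias in ("positive", "negative")
        self.pool = POSITIVE if bias == "positive" else NEGATIVE
        self.rng = random.Random(seed)
        self.log = []  # (suggestion, accepted) pairs, as logged in the study

    def suggest(self) -> str:
        # Every suggestion carries the configured slant.
        return self.rng.choice(self.pool)

    def record(self, suggestion: str, accepted: bool) -> None:
        self.log.append((suggestion, accepted))

    def acceptance_rate(self) -> float:
        if not self.log:
            return 0.0
        return sum(a for _, a in self.log) / len(self.log)

# Simulated session: a participant accepts two of three biased suggestions.
assistant = OpinionatedAssistant("negative")
for accepted in (True, True, False):
    assistant.record(assistant.suggest(), accepted)
print(assistant.acceptance_rate())  # 2 of 3 accepted
```

Even this trivial model makes the study's point visible: whatever the participant accepts, it can only come from the slanted pool the experimenter configured.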

Earlier studies have already shown that large language models (LLMs) such as ChatGPT can produce persuasive advertisements and political messages, but this is the first time it has been demonstrated that the very process of co-writing with a tool powered by biased artificial intelligence can influence a person's opinions.

Risks

In practice, the study suggests that the biases inherent in artificial intelligence writing tools could have significant repercussions on the worldview of those who use them and, consequently, on culture, politics or even the economy. It is also worth noting that such biases could be either involuntary, i.e. the result of training the AI on bias-polluted data, or intentional, i.e. introduced deliberately by those who manage the AI in order to manipulate user opinion, for example for illicit purposes. Add to this that, according to the study, most of the participants did not even realize that the artificial intelligence was biased, nor that they were being influenced by it, and the picture becomes disturbing.

"We are rushing to implement these AI models in all areas of life, but we need to better understand their implications," warns co-author Mor Naaman, a professor at Cornell Tech's Jacobs Technion-Cornell Institute and of information science at the Cornell Ann S. Bowers College of Computing and Information Science. "The process of co-writing doesn't give you the feeling of being persuaded," he notes. "It feels as if you are doing something very natural and organic: expressing your own thoughts with some help."

Technologies to govern. Together.

Jakesch and Naaman's research once again highlights the need to learn how to govern powerful technologies such as AI before they take hold; otherwise we risk being subjected to them rather than making use of them. In short, we need a broader and more inclusive public discussion on how they should be used, monitored and regulated to avoid misuse. "The more powerful these technologies become, and the deeper they fit into the social fabric of our societies, the more careful we should be about how we govern the values, priorities and opinions embedded in them."

The work was funded by the National Science Foundation, the German National Academic Foundation and the Bavarian Ministry of Science and the Arts. Contributors also included Advait Bhat of Microsoft Research, Daniel Buschek of the University of Bayreuth and Lior Zalmanson of Tel Aviv University.

*Journalist, innovation expert and curator of the Artificial Intelligence Observatory ANSA.it

Source: ansa
