
ChatGPT lies better than humans

2023-06-28T18:28:19.842Z



New study finds it's harder to spot misinformation in tweets if they're written by artificial intelligence


A group of 697 people read 220 tweets written both by humans and by GPT-3, the artificial intelligence model that was the precursor of today's globally successful ChatGPT. They had to judge two things: which tweets were true and which were false, and whether each had been written by a person or by the machine. GPT-3 won on both counts: it lied more convincingly than humans, and it also passed its writing off as human more convincingly. "GPT-3 is able to better inform and misinform us," conclude the authors of a new study just published in the journal Science Advances.

"It was very surprising," says Giovanni Spitale, a researcher at the University of Zurich and co-author of the paper together with his colleague Federico Germani and Nikola Biller-Andorno, director of the Institute of Biomedical Ethics at the Swiss university. "Our hypothesis was: if you read a single tweet, it could pass as organic [written by a person]. But if you see many, you'll start to notice linguistic features that could be used to infer that it might be synthetic [written by the machine]," Spitale adds. That is not what happened: readers were unable to detect any pattern in the machine's texts. What's more, the steady arrival of newer models and other approaches may further improve artificial intelligence's ability to pass for human.

Clearer writing

The writing of GPT-4, the improved version of GPT-3, is practically flawless. This new study offers further proof that humans cannot tell it apart, even after seeing many examples in a row: "True tweets required more time to be evaluated than false ones," the article notes. The machine, it seems, writes more clearly. "It's very clear, well-organized, easy to follow," Spitale says.

The logical consequence of this process will be the growing use of this tool to write content of any kind, including disinformation campaigns. It would be the umpteenth death of the internet: "AI is killing the old web, and the new web struggles to be born," The Verge, a technology news outlet, headlined this week. The authors of the newly published study point to one reason for this defeat of humanity on the internet: the theory of resignation. "I'm completely sure it will," says Spitale.

"Our theory of resignation applies to people's self-confidence in identifying synthetic text. The theory goes that critical exposure to synthetic text reduces people's ability to distinguish synthetic from organic," Spitale explains. The more synthetic text we read, the harder it becomes to distinguish it from text written by people. It is the opposite of inoculation theory, Spitale adds, which holds that "critical exposure to misinformation increases people's ability to recognize misinformation."

If the theory of resignation holds, users will soon be unable to tell whether something on the internet was written by a human or by a machine. The researchers also tested whether GPT-3 was good at identifying its own texts. It was not.

The machine disobeys

The only hope that disinformation campaigns won't become fully automatic is that GPT-3 sometimes disobeyed orders to create lies; how often depends on how each model has been trained. The topics of the 220 tweets used in the article's test leaned toward controversy: climate change, vaccines, the theory of evolution, covid. The researchers found that in some cases GPT-3 did not comply with their requests for disinformation, especially on claims with the strongest evidence against them: vaccines and autism, homeopathy and cancer, flat-earthism.

When it came to detecting falsehoods, the difference between tweets written by GPT-3 and by humans was small. But the researchers consider it significant for two reasons. First, even a few stray messages can have an impact in large samples. Second, improvements in new versions of these models may widen the difference. "We are already testing GPT-4 through the ChatGPT interface and we see that the model is improving a lot. But because there's no access to the API [which automates the process], we don't have numbers yet to back up this claim," Spitale says.

The study has other limitations that could somewhat alter its findings. Most of the participants were over 42 years old, the study was conducted in English only, and it did not consider contextual information about the tweets, such as the author's profile or previous tweets. "We recruited participants on Facebook because we wanted a sample of real social media users. It would be interesting to replicate the study by recruiting participants through TikTok and compare results," Spitale says.

But beyond these limitations, disinformation campaigns that until now were enormously expensive have suddenly become affordable: "Imagine that you are a powerful president with an interest in paralyzing the public health of another state. Or that you want to sow discord before an election. Instead of hiring a human troll farm, you could use generative AI. Your firepower is multiplied by at least 1,000. And that's an immediate risk, not something for a dystopian future," Spitale says.

To prevent this, the researchers propose in their article that the databases used to train these models "be regulated by the principles of accuracy and transparency, that their information be verified and their origin be open to independent scrutiny." Whether or not that regulation happens will have consequences: "Whether the explosion of synthetic text is also an explosion of disinformation depends profoundly on how democratic societies manage to regulate this technology and its use," warns Spitale.

You can follow EL PAÍS Tecnología on Facebook and Twitter or sign up here to receive our weekly newsletter.


Source: EL PAÍS

