The artificial intelligence behind ChatGPT, the software in which Microsoft is currently investing, could help create disinformation.
That is the conclusion of researchers at NewsGuard Technologies, a company whose main purpose is to fight online disinformation.
Three researchers put the AI-powered chatbot to the test with 100 false narratives drawn from their Misinformation Fingerprints catalog.
In 80% of cases, the chatbot generated false and misleading claims on current topics, including Covid-19 and the war in Ukraine.
"The results," the researchers explain, "confirm the fears and concerns expressed by OpenAI itself (the company that created ChatGPT, ed.) about the ways the tool could be used if it fell into the wrong hands. To someone unfamiliar with the issues or topics covered in this report, the findings could easily appear legitimate and even authoritative."
However, NewsGuard found that ChatGPT "has safeguards in place to prevent some examples of misinformation from spreading. For some hoaxes, it took as many as five attempts to get the chatbot to misinform." The debate over applying artificial intelligence to news and information comes at a time when various newspapers are experimenting with the technology.