The OpenAI logo on a smartphone (Photo: Richard Drew/AP)
AI research firm OpenAI has developed the AI Text Classifier, a solution to a problem it created itself.
According to a company blog post, the web-based tool is meant to “distinguish between texts written by humans and texts written by AIs from different providers”. First and foremost, however, it is aimed at recognizing texts generated by ChatGPT.
OpenAI first caused a public stir with its image generator DALL-E, which can produce sometimes astonishingly photorealistic images from text prompts.
The ChatGPT language model, also developed by OpenAI, has been causing even greater astonishment for several weeks; among other things, it can generate longer texts on almost any topic.
This worries universities and schools, among others, which fear that students could have entire assignments, or at least parts of them, done by the AI.
Although texts generated by ChatGPT are not always factually accurate, they are often hard to distinguish linguistically from those written by people.
Tools such as GPTZero, developed by an American computer science student, are intended to help detect such AI output, but they must constantly be adapted to the text generators' evolving capabilities.
Moderately reliable
The AI Text Classifier should have an easier time of this, at least with respect to ChatGPT, since it is itself based on a variant of the same language model.
However, the company warns that the software is "not entirely reliable."
On a challenge set of English-language texts compiled specifically for this purpose, the software correctly identified 26 percent of AI-generated texts as such, but also wrongly attributed nine percent of human-written texts to an AI.
It is "impossible to reliably detect all AI-written text," the company concedes.
Still, the software is said to work reliably enough “to invalidate false claims that AI-generated text was written by a human”.
It could, for example, help expose disinformation as well as cheating in academic work.
Even then, the system gives no definitive answers: it merely rates a text as "very unlikely", "unlikely", "possibly", or "likely" AI-generated, or reports that it remains unclear after the check whether an AI was involved.
A final review of the results by humans therefore remains essential.
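The graded verdicts and the minimum-length requirement described in this article could be sketched roughly as follows. This is a purely illustrative Python sketch; the threshold values and function names are assumptions for demonstration, not OpenAI's published internals.

```python
# Hypothetical sketch of a detector that maps a model's probability score
# to the coarse verdict labels described in the article. The numeric
# thresholds below are illustrative assumptions, not OpenAI's actual values.

def verdict(p_ai: float) -> str:
    """Map a probability that a text is AI-generated to a coarse label."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.90:
        return "unclear if it is AI-generated"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"

MIN_CHARS = 1000  # the tool rejects samples shorter than 1,000 characters

def classify(text: str, p_ai: float) -> str:
    """Refuse short samples, otherwise return the graded verdict."""
    if len(text) < MIN_CHARS:
        return "text too short to classify"
    return verdict(p_ai)
```

Note how the design never returns a binary yes/no: even a very high score only yields "likely AI-generated", which is why the article stresses that human review of the results remains essential.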
The machine only recognizes what it knows
In any case, the tool is currently of limited use.
To begin with, access requires a (free) OpenAI account; moreover, it has so far been trained mainly on English texts and can do little with German ones, for example.
It also requires samples of at least 1,000 characters.
The classifier also struggles when AI-generated text has been edited by a person, or when a text can only have one expected form.
A list of prime numbers is given as an example: its content is the same regardless of whether a machine or a human compiled it.
Most importantly, the developers admit that their AI text detector only works well on text similar to what it was trained on; which kinds of texts those are is not spelled out.
Confronted with an unfamiliar genre or subject, however, the system is likely to misclassify even human-written text as machine-generated.
mak