Why it matters: After all the chatter and growing distrust around text written by generative AIs like ChatGPT, OpenAI is promoting a new ML-based classifier designed to identify such output. The results, however, are still pretty bad.
ChatGPT and other algorithms capable of producing seemingly correct textual content have quickly become a growing concern for educators, schools and universities, so much so that there is now a market for anti-AI tools like GPTZero. Another such tool has now been released by OpenAI, the very same company that created ChatGPT and started the recent AI revolution.
Like any other machine learning algorithm, OpenAI's new AI classifier has been trained on a data set of textual snippets to fulfill its task. Unlike ChatGPT, the AI classifier is designed to distinguish between text written by a human and text written by AIs "from a variety of providers." Needless to say, those providers include ChatGPT.
OpenAI says that the classifier is "not fully reliable" yet: the tool correctly identified just 26% of AI-written text within a "challenge set" of English samples, while it mislabeled human-written text as AI-written 9% of the time. Both false positives and false negatives remain very likely, the research lab warns.
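To get a feel for what those two reported rates mean in practice, here is a short Bayes'-rule sketch. The 26% detection rate and 9% false-positive rate come from the article; the 50/50 mix of AI and human text is an assumption chosen purely for illustration, not a figure from OpenAI.

```python
# Illustrative only: what OpenAI's reported rates imply for a flagged text.
tpr = 0.26   # reported: classifier flags 26% of AI-written text
fpr = 0.09   # reported: classifier wrongly flags 9% of human-written text
p_ai = 0.50  # ASSUMPTION: share of AI-written text in a hypothetical corpus

# Bayes' rule: P(AI-written | classifier flagged the text)
p_flagged = tpr * p_ai + fpr * (1 - p_ai)
p_ai_given_flag = (tpr * p_ai) / p_flagged
print(f"P(AI-written | flagged) = {p_ai_given_flag:.2f}")  # prints 0.74
```

Even under this generous assumption, roughly a quarter of flagged texts would be human-written, which is why treating a single verdict as proof is risky.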
The AI classifier's reliability improves with longer textual snippets (above 1,000 characters), while it performs "significantly worse" in languages other than English. Other limitations include an inability to identify very predictable text such as a list of the first 1,000 prime numbers, a vulnerability to text specifically edited to evade classifiers, and mediocre "calibration" on text outside the training data.
For inputs that differ greatly from the training set, OpenAI warns that the AI classifier can sometimes be "extremely confident" in a wrong prediction. Even so, the company says this new classifier performs better than its previous detection system.
All things considered, OpenAI believes a service designed to detect text written by generative AI can be an important tool for assessing the impact of ML algorithms on classrooms and other educational activities. The company is currently "engaging with educators in the US" to discuss ChatGPT's capabilities and limitations, aiming to build and safely deploy large language models in consultation with the communities they affect. The classifier, OpenAI warns, should not be the only factor considered when trying to identify AI-written text.