New tool claims it can identify AI-written science papers 99% of the time

Alfonso Maruccia

Why it matters: AI-based plagiarism is becoming an increasingly annoying and dangerous phenomenon, especially for genuine scientific research publications. Many researchers are trying to develop a practical defense against this kind of misconduct, and a new approach seems to work particularly well for one specific kind of scientific paper.

ChatGPT is extremely good at faking human-written creative content, even though actual professionals find the chatbot to be a pretty "sh*tty" and redundant writer. When it comes to scientific writing, however, chatbots can turn from simple nuisances or school cheating tools into actual threats to science and proper research practices.

Newly published research by scientists from the University of Kansas proposes a potential solution to the AI-plagiarism problem, boasting a remarkable ability to distinguish human-written science prose from ChatGPT output "with over 99% accuracy." The result was achieved, naturally, through AI algorithms and a specifically trained classification model.

Chemistry professor Heather Desaire and colleagues are fighting AI with AI, and they are seemingly getting very good results: the researchers focused their efforts on "perspective" articles, a particular style of article published in scientific journals to provide an overview of a specific research topic.

The scientists chose 64 perspective articles on topics ranging from biology to physics, then asked ChatGPT to generate new paragraphs on the same research, assembling 128 "fake" articles. The AI spat out 1,276 paragraphs in total, which were then used to train the classifier the researchers had chosen to spot AI-generated text.

Two more datasets, one containing 30 real perspective articles and the other 60 papers generated by ChatGPT, were compiled to test the newly trained algorithm. The algorithm seemingly passed the researchers' tests with flying colors: the classifier detected ChatGPT-written articles 100 percent of the time, while accuracy for flagging individual fake paragraphs dropped to 92 percent.

The scientists say chatbots write in a recognizable style, so their "hand" can be identified quite effectively. Human scientists tend to use a richer vocabulary and write longer paragraphs containing more diverse words and punctuation marks. Furthermore, ChatGPT isn't exactly renowned for precision, and it tends to avoid providing specific figures or quoting other scientists by name.
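For readers curious what such a detector might look like under the hood, here is a minimal, purely illustrative Python sketch. It encodes a few of the stylometric cues described above (paragraph length, vocabulary diversity, punctuation variety, presence of specific figures and capitalized names) and fits a simple logistic-regression classifier. The feature definitions, the train_classifier helper, and the use of logistic regression are assumptions made for illustration; they are not the Kansas team's actual pipeline.

# Illustrative sketch only -- not the Kansas team's actual model.
# Feature choices and the classifier are assumptions based on the cues
# described in the article (paragraph length, vocabulary and punctuation
# diversity, presence of specific figures and named scientists).
import re
from sklearn.linear_model import LogisticRegression

PUNCT = set(";:()[]'\"?!,-")

def stylometric_features(paragraph: str) -> list[float]:
    words = paragraph.split()
    n = max(len(words), 1)
    return [
        len(words),                                   # paragraph length in words
        len({w.lower() for w in words}) / n,          # vocabulary diversity (type/token ratio)
        len({c for c in paragraph if c in PUNCT}),    # variety of punctuation marks used
        1.0 if re.search(r"\d", paragraph) else 0.0,  # mentions specific figures/numbers
        sum(w[:1].isupper() for w in words) / n,      # share of capitalized tokens (names)
    ]

def train_classifier(human_paragraphs: list[str], ai_paragraphs: list[str]) -> LogisticRegression:
    # Label 0 = human-written, 1 = AI-generated (hypothetical training corpora).
    X = [stylometric_features(p) for p in human_paragraphs + ai_paragraphs]
    y = [0] * len(human_paragraphs) + [1] * len(ai_paragraphs)
    return LogisticRegression(max_iter=1000).fit(X, y)

# Usage: clf = train_classifier(real_paragraphs, chatgpt_paragraphs)
#        clf.predict([stylometric_features(new_paragraph)])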

The Kansas researchers describe their approach to AI plagiarism as a "proof-of-concept" study, even though it has proven very effective at identifying fake perspective articles. Further (human-made) research is needed to establish whether the same approach can be applied to other types of scientific papers or to AI-generated text in general.


 
The answer to AI is more AI. 🙃
Look, is that a cockfight? No, it's just two AIs at it, being used by humans. Like I said before, it's survival of the wittiest. One can just use the AI-generated information and reword everything, as well as cross-reference everything for accuracy. It will definitely save time and embarrassment. Don't be like those leftist lawyers from NY!
 
Well, that's really great ..... guess they didn't think that with everyone going to AI there will be no reason for education, because there will no longer be jobs for the vast majority of workers. Of course, with no jobs there is no money, so who's going to buy all these goods produced by AI???
 
They should run that program against articles on the Wccftech site. I swear most of them are written by an AI. They use lots of buzz phrases like 'at the end of the day', 'going forward', and 'deep dive'. You often see non sequiturs that should be caught by editorial review. And the authors have very strange, fabricated-sounding names.

I think Wccftech either uses an AI or does a lot of translation from other languages like Arabic or Chinese.
 
The AI results of the program will probably be like yours. That place is a troll magnet where users barely contribute to the conversation and go off topic 99% of the time. The user base couldn't care less about the content. Even before generative AI, the content was mostly clickbait.
 
Well, use your AI spotting to rid the world of spam and bot accounts.

Anyway, take these articles with a grain of salt - sure, 99% accuracy on papers that are 100% AI-written, with few false positives. How about well-crafted AI ones with a human touch-up?

The other thing: are these spotters black boxes, or can they say why they classified a given paper as AI?
If it's a black box (no one knows how it makes its decisions), that has problems - court cases, false positives, only flagging the style of people educated at A, B, C universities, etc.
If not, then that has problems too - e.g. the AI spotter looks at X, Y, and Z, so the spam AI adjusts to that.
 