OpenAI has developed a classifier to distinguish between human-written text and text written by AI systems from a range of providers. The researchers believe that good classifiers can inform mitigations for false claims that AI-generated text was written by a human, such as running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.

Their report acknowledges that the classifier is not fully reliable. In evaluations on a "challenging set" of English texts, the classifier correctly labels 26% of AI-written text as "possibly AI-written" (true positives) while mislabelling 9% of human-written text as AI-written (false positives). The classifier's reliability typically improves as the length of the input text increases. Compared with the previously released classifier, this new one is significantly more reliable on text from more recent AI systems.
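Taken at face value, these rates show why such a classifier cannot stand alone. The short Python sketch below is illustrative only: the 26% true-positive and 9% false-positive rates are the figures quoted above, while the 10% share of AI-written documents is a hypothetical assumption, not something reported by OpenAI.

    # Illustrative only: how often a "possibly AI-written" flag is correct,
    # given the quoted rates and an assumed share of AI-written documents.
    def flagged_precision(base_rate, tpr=0.26, fpr=0.09):
        flagged_ai = base_rate * tpr           # AI-written texts correctly flagged
        flagged_human = (1 - base_rate) * fpr  # human-written texts wrongly flagged
        return flagged_ai / (flagged_ai + flagged_human)

    # If 10% of submitted texts were AI-written (assumed, not reported),
    # only about 24% of flagged texts would actually be AI-written.
    print(f"{flagged_precision(0.10):.0%}")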

Furthermore, the researchers are making the classifier publicly available to gather feedback on its usefulness. They will continue working on detecting AI-generated text and expect to share improved techniques in the future.

Limitations

  • The classifier has some significant drawbacks. It should not be used as the sole criterion for making decisions, but rather as a supplement to other methods of determining a text's provenance.
  • The classifier is unreliable on short texts (below 1,000 characters). Even longer texts are sometimes misclassified.
  • The classifier will sometimes confidently but incorrectly label human-written text as AI-written.
  • The researchers advise using the classifier exclusively for English text. It performs substantially worse in other languages and cannot be relied upon with code.
  • Highly predictable text cannot be reliably identified. For instance, a list of the first 1,000 prime numbers is the same whether an AI or a human wrote it, so it is impossible to tell which one generated it.
  • AI-generated text can be edited to evade the classifier. The classifier can be updated and retrained in response to successful attacks, but it remains unclear whether detection holds an advantage in the long run.
  • Neural network-based classifiers are known to be poorly calibrated outside their training data. For inputs very different from the text in its training set, the classifier is sometimes highly confident in a wrong prediction.

Conclusion

The researchers acknowledge that identifying AI-written text has been an important topic of discussion among educators, and that understanding the limitations and effects of AI-generated text classifiers in the classroom is equally important. They have created a preliminary resource on the use of ChatGPT for educators that outlines some of its uses, limitations, and open questions. Beyond educators, the researchers expect their classifier and the associated tools to be relevant to journalists, mis/disinformation researchers, and other organisations.

The researchers are engaging with educators in the United States to learn what they are seeing in classrooms and to discuss ChatGPT's capabilities and limitations. As they learn more, they will continue to broaden this outreach. These conversations are an essential part of their goal of deploying large language models safely and in direct contact with affected communities.

You can try their work-in-progress classifier yourself here.
