ChatGPT has caught the attention of the public and press to an unprecedented degree. Within five days of its release, more than a million users had run experiments with it. But is ChatGPT worth the hype? 

ChatGPT is known for its writing skills. It can even write a research paper, to an extent. The academic world is now worried about the unethical use of these models by students. 

In this context, Edward Tian, a 22-year-old senior at Princeton University, took matters into his own hands and built an app to detect whether a given text was written by ChatGPT. His motivation was to fight what he saw as a rise in AI plagiarism. He named the app GPTZero.

GPTZero flags a text as human-written if it has high complexity, measured as perplexity: how unpredictable the text is to a language model. If the text is more familiar to the bot because it resembles the data the model was trained on, it is more likely to be AI-generated.
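
The underlying idea is simple enough to sketch in code. The snippet below is a minimal, hypothetical illustration of perplexity-based detection, the statistical signal GPTZero builds on; it uses the openly available GPT-2 model from the Hugging Face transformers library, and the cutoff value is an invented assumption for demonstration, not GPTZero's actual threshold.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Load a small open language model to score how predictable a text is.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # With labels equal to the inputs, the model returns the mean
        # cross-entropy loss over the sequence; exp(loss) is perplexity.
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        return torch.exp(loss).item()

    THRESHOLD = 60.0  # invented cutoff; a real detector tunes this empirically
    ppl = perplexity("The quick brown fox jumps over the lazy dog.")
    print("likely human" if ppl > THRESHOLD else "possibly AI-generated")

Low perplexity means the model finds the text predictable, the telltale pattern of machine-generated prose; human writing tends to score higher and vary more from sentence to sentence.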

Tian’s work shows that generative AI models like ChatGPT are not foolproof.

Finding the shortcomings 

According to Gary Marcus, a prominent AI researcher, best-selling author, and entrepreneur, ChatGPT has made all the same kinds of mistakes that its predecessors did. “This was inevitable,” points out Marcus. Marcus, along with Ernest Davis, Professor of Computer Science at New York University, has been working with ChatGPT to study its capabilities and limitations.

According to their findings, ChatGPT could not reliably count to four, do one-digit arithmetic in simple word problems, determine the order of events in a story, or reason about the physical world. Some of the results also exhibited sexist and racist biases.

The results produced could be correct, but they were not reliable: rerunning the same experiment might return the same answer, a correct answer, or a different wrong one.
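
This variability is easy to reproduce with any language model that samples its output. The sketch below is an illustration under one assumption: that the nondeterminism comes from temperature sampling, the default decoding strategy for models of this kind. It uses the open GPT-2 model as a stand-in for ChatGPT, whose weights are not public.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = tokenizer("Two plus two equals", return_tensors="pt")
    for run in range(3):
        # do_sample=True draws each token from the model's probability
        # distribution, so every run can produce a different continuation.
        out = model.generate(**prompt, do_sample=True, temperature=1.0,
                             max_new_tokens=8,
                             pad_token_id=tokenizer.eos_token_id)
        print(f"run {run}:", tokenizer.decode(out[0], skip_special_tokens=True))

Running this three times can yield three different endings, some right and some wrong, which is exactly the reliability problem Marcus and Davis describe.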

“We were quite surprised and gratified to find that people from all over the internet were running similar kinds of experiments, often more creative than our own, and reporting their results on social media or by emailing them to us”, stated Gary Marcus and Ernest Davis in their report.

Pseudo imagination

Generative AI models are known to be challenging to control. Because their outputs emerge from a trained ML model rather than explicit rules, they can be hard to interpret, and the behaviour that produces them is hard to change.

The algorithms embedded in generative AI models perform tasks based on their training data and cannot create anything genuinely new. The technology only allows them to combine things they already know and present them in new ways.

Furthermore, fraudsters may use generative AI for scams and other fraudulent activities, which may be difficult to track.

Not as generative

Generative AI models are not as generative as the name suggests. The experiments with ChatGPT, one of the best generative AI models, show that these models are neither sentient nor infallible.

As Gary Marcus suggests, even from the initial data collected in the first few days, a scientist, technologist, or lay reader can readily see many different types of errors and test for themselves how replicable those errors are.

Indeed, models like this mark an advancement in AI. However, relying on them blindly would be unwise, as doing so may undermine the responsible and ethical use of AI.
