In the age of ChatGPT, what’s it like to be accused of cheating?
While the public release of ChatGPT, the artificial intelligence-driven large language model chatbot, has created a great deal of excitement around the promise of the technology and the expanded use of AI, it has also seeded a good bit of anxiety about what a program that can churn out a passable college-level essay in seconds means for the future of teaching and learning. Naturally, this consternation has driven a proliferation of detection programs, of varying effectiveness, and a commensurate increase in accusations of cheating. But how are students feeling about all of this? Recently published research by Drexel University’s Tim Gorichanaz, Ph.D., provides a first look at the reactions of college students who have been accused of using ChatGPT to cheat.
The study, published in the journal Learning: Research and Practice as part of a series on generative AI, analyzed 49 Reddit posts and their related discussions from college students who had been accused of using ChatGPT on an assignment. Gorichanaz, who is an assistant teaching professor in Drexel’s College of Computing & Informatics, identified a number of themes in these conversations: most notably, frustration among wrongly accused students; anxiety about the possibility of being wrongly accused and how to avoid it; and creeping doubt and cynicism about the need for higher education in the age of generative artificial intelligence.
“As the world of higher ed collectively scrambles to understand and develop best practices and policies around the use of tools like ChatGPT, it’s vital for us to understand how the fascination, anxiety and fear that comes with adopting any new educational technology also affects the students who are going through their own process of figuring out how to use it,” Gorichanaz said.
Of the 49 students who posted, 38 said they did not use ChatGPT, but detection programs like Turnitin or GPTZero had nonetheless flagged their assignments as AI-generated. As a result, many of the discussions took on the tenor of a legal argument: students asked how they could present evidence to prove that they hadn’t cheated, and some commenters advised them to keep denying that they had used the program, since the detectors are unreliable.
“Many of the students expressed concern over the possibility of being wrongly accused by an AI detector,” Gorichanaz said. “Some discussions went into great detail about how students could collect evidence to prove that they had written an essay without AI, including tracking draft versions and using screen recording software. Others suggested running a detector on their own writing until it came back without being incorrectly flagged.”
Another theme that emerged in the discussions was the perceived role of colleges and universities as “gatekeepers” to success and, as a result, the high stakes associated with being wrongly accused of cheating. This led to questions about the institutions’ preparedness for the new technology and concerns that professors would be too dependent on AI detectors — whose accuracy remains in doubt.
“The conversations happening online evolved from specific doubts about the accuracy of AI detection and universities’ policies around the use of generative AI, to broadly questioning the role of higher education in society and suggesting that the technology will render institutions of higher education irrelevant in the near future,” Gorichanaz said.