AI writing generator could pose problems for academic integrity 

Who is really doing the writing? Created with the assistance of DALL-E2

From art to writing

Concerns about machines taking away people’s jobs have circulated since the Luddites smashed textile machinery over two hundred years ago, but could this concern finally be coming to academia? A new chatbot, ChatGPT, has been making headlines with writing strikingly similar to that of humans, including academic writing.  

ChatGPT was released by the company OpenAI at the end of November 2022. The same company previously made headlines for releasing DALL-E2, a program that generates realistic-looking art. The robot in the article’s image is an example of DALL-E2-generated art.  

ChatGPT functions by making use of a large language model. Computer Science graduate student David Akinmade studies large language models at the University of Regina. “A large language model at its core is a large neural network which is fed with a lot of training data, usually text data, and then the model applies the process known as attention to predict what text best completes a given prompt,” Akinmade said.  
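A real large language model is a neural network with billions of parameters, but the core task Akinmade describes, predicting what text best completes a prompt, can be illustrated with a deliberately tiny sketch. The corpus and the bigram-counting approach below are simplifications for illustration only, not how ChatGPT actually works:

```python
from collections import Counter, defaultdict

# Toy training text -- a real model trains on vast amounts of text data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows each token in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often in training."""
    counts = bigrams[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

A neural network with attention replaces the raw counts with learned weights that consider the whole prompt, not just the previous word, but the prediction objective is the same.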

Akinmade added that the improvement over previous GPT models was noticeable. “Basically what they did is added reinforcement learning to a large language model. They built a large language model first, and then they do some reinforcement learning with human feedback. You have a set of humans who rank outputs in the order of preference. They could also rank the outputs in the order of harmfulness or the order of helpfulness. Now obviously there’s a lot of subjectivity because it’s a human being ranking these things. This additional step allows the model to better zone in on responses that human beings would prefer.” 
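The ranking step Akinmade describes can be sketched in miniature. In this hypothetical illustration, a human orders candidate responses by preference, the ordering becomes a simple numeric reward, and the system then favours the highest-scoring response; the candidate strings and scores are invented for the example, and real RLHF trains a separate reward model rather than using a lookup table:

```python
# Invented candidate responses for illustration.
candidates = [
    "Here is a clear, sourced answer.",
    "idk, google it",
    "An answer that is confident but wrong.",
]

# A human rates the candidates from best (first) to worst (last).
human_ranking = [candidates[0], candidates[2], candidates[1]]

# Turn the ranking into simple preference scores (higher = preferred).
reward = {resp: len(human_ranking) - i for i, resp in enumerate(human_ranking)}

def pick_best(responses):
    """Choose the response the preference scores rate highest."""
    return max(responses, key=lambda r: reward.get(r, 0))

print(pick_best(candidates))  # the clear, sourced answer wins
```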

Using ChatGPT to complete coursework is a violation of academic integrity. CTV Kitchener reporter Hannah Schmidt reported that universities in the Waterloo region are on the lookout for work produced by the AI generator.   

From my experience with ChatGPT, it certainly produces interesting responses. While this article was being written, the ChatGPT server had so many users that it reached maximum capacity. Instead of a response, it left users with a Shakespearean sonnet explaining the situation, which in part states: 

“But alas, the server cannot cope, 

And the error message rings loud and clear, 

‘Please check back soon,’ it gently hopes, 

As it begs for a moment’s reprieve, to reappear.” 

With the surge in popularity, several academics have expressed concern over the possibility of ChatGPT being used to mimic academic writing. Many of the concerns revolve around people passing off the work of the chatbot as their own. A quick search online finds quotes from professors at institutions in the United States, United Kingdom, and Canada all saying that ChatGPT could create at least passing, if not excellent, responses to assignment questions that they give to university students. The Atlantic even ran a headline declaring “The College Essay Is Dead.” 

Others have disagreed. English professor Christopher Grobe of Amherst College stated in the Chronicle of Higher Education that “the things ChatGPT cannot do (cite and analyze evidence, limit claims, create logical links between claims, arrange those claims into a hierarchy of significance) are the basic stuff of college-level writing.” 

In a test of these claims, I had ChatGPT write me a shortened essay based on a question assigned as a final paper in Philosophy 100 last semester. One thing I learned from the experience was that the chatbot needed feedback from me over several iterations. For example, after the first iteration, I had to give it a word count and ask it to write at a university level. To focus the chatbot’s arguments within the scope of the class, I had to prompt it to centre its argument on the work of a particular philosopher who was assigned as a reading. As a senior undergraduate student, I found the final product well-written and sensible, though lacking in original ideas.  

Before any students run to a chatbot to do their homework, they should know that developers have already released apps that claim to accurately detect when something was written by ChatGPT. The Chrome Web Store already carries a detection extension, and the popular plagiarism detection service Turnitin has promised to soon update its service to include chatbot detection. However, none of these tools has yet been tested systematically by independent third parties, nor do their makers publish false-positive and false-negative rates. 

Last week, Nature reported in its non-peer-reviewed news section on an informal test of ChatGPT’s ability to fool scientists. The experiment had ChatGPT create 50 scientific abstracts based on papers published in academic journals, then had scientists, a plagiarism checker, and an AI-output detector try to determine which abstracts were generated by ChatGPT. The plagiarism checker failed dramatically, giving the AI-generated abstracts a median originality score of 100 per cent. The AI-output detector and the scientists fared only somewhat better, catching 66 per cent and 68 per cent of the AI-generated abstracts, respectively.  

While the writing of ChatGPT is quite hard to tell apart from human writing, other safeguards against plagiarism exist. For example, the U of R’s academic integrity website already recommends measures such as requiring rough drafts and having students submit the sources for their information. Requiring students to hand in more than just a final product makes cheating with a ready-made essay harder. Also, ChatGPT is not updated on recent events, so original assignments that ask for responses to current events could be harder to plagiarize.  

In an interview with reporter Connie Loizos, OpenAI CEO Sam Altman said this about the potential disruption to education: “We adapted to calculators and changed what we tested for in math classes, I imagine. This is a more extreme version of this, no doubt, but also the benefits of it are more extreme as well. We hear from teachers who are understandably very nervous about the impact of this on homework. We also hear a lot from teachers who are like ‘wow this is like an unbelievable personal tutor for each kid.’” 

David Akinmade reflected on his educational experience. “If you had calculations to do, you were given a book called a four-figure table. People were really confident in the fact that students that used four-figure tables were developing a mathematical ability that students who just use calculators would simply not have. As time went on, people just realized it’s antiquated.” 

Akinmade encouraged people to seek more knowledge before using ChatGPT. “There really needs to be a space to educate people about these models.” 


