The Carillon

Is AI a helping hand or a hazard?

ChatGPT is one of many AI tools that have taken over the web

Artificial intelligence remains a hot topic on campus

Artificial intelligence use at the university continues to stir confusion and debate among students and staff alike.

Generative AI – where a user types in a prompt to generate images, text or other content – went viral in 2023, in the months after the launch of ChatGPT.

It has since been followed by Microsoft Copilot, Grok and many others.

Closer to home, there has been a surge of use among students and professors at the University of Regina, with cases of students using it for everything from math equations to assignments and essays.

Although some would say the genie is out of the bottle, the U of R continues to be worried about rampant misuse and is taking steps to combat it.

Along with the explosion of AI programs has come a wave of AI-detection apps. Some of them produce spotty results, raising the spectre that some students might find themselves wrongly accused of academic misconduct.

On the other hand, some people use AI for good, generally accepted purposes, like organizing notes. 

As we head into the fall 2025 semester, faculty and students are trying to strike the right balance.

AI is not entirely banned at the university, and there’s a lot of room for discretion.

The university adopted its own AI guidelines for students and faculty soon after the launch of ChatGPT. These guidelines can be found on the Centre for Teaching and Learning’s website.

The Carillon caught up with associate vice-president of academic affairs Nilgün Önder for an overview.

Nilgün Önder, associate vice-president of academic affairs (Photo credit: U of R)

The current state of play 

“Individual instructors can decide whether they want to allow their students or even encourage their students to use generative AI in some way,” Önder said.

She added that professors are increasingly doing both in an “ethical, productive” way.

For another take on the technology, we talked to Timothy Oleskiw, an assistant professor of computer science at the university. His research focuses on how the brain processes visual information.

“A lot of the AI that I use in my research group involves training models to see and perceive objects like people do in the real world,” said Oleskiw.

He added that AI does have its uses. 

“I think the most helpful ways a student could use it would be to consolidate or organize the notes and material that they already have,” Oleskiw said.

“One of the mechanisms or problems AI is really good at is condensing some material, sort of summarizing it into a brief synopsis,” Oleskiw added.

Another legitimate use is generating example test questions to help prepare for exams.

Oleskiw says he tells students to think of AI “as a really smart friend. And asking your friend to sort of explain the problem to you in another way or give an example of another problem that’s similar would be a good use of that friend.”

He clarified that this does not mean using it to write assignments or other forms of academic work.

“Thinking of AI as a study helper and not as the sort of oracle that can provide all the solutions to you is, I think, a good rule of thumb,” he added.

The rise of AI in academia 

Institutions across the country have been trying to fight cheating and plagiarism. After the launch of ChatGPT and Microsoft Copilot, academic misconduct cases skyrocketed.

“Like many other institutions, we witnessed an increase in student academic misconduct cases. That, of course, prompted us to develop guidelines not only for faculty members, but also for students,” Önder said.

She noted there are various workshops that both students and faculty can participate in to learn more about AI and its uses.

Issues for students

“Academic integrity, I would say, is still a concern. But that’s not the only issue, as most academics would agree,” she said.

Aside from concerns about cheating, the university also believes over-reliance on AI can replace valuable skills that students have or need to develop. These include analytical skills, problem solving and critical thinking.

“We would like generative AI to be used in a way to enhance our skills, competencies, rather than replace these skills. So it’s an ongoing learning process,” said Önder.

Oleskiw shares these sentiments.

“If you misuse it or, I think, use it too frequently, you might tend to rely on it as a crutch instead of finding solutions or building creative solutions to a problem on your own,” he said.

The flaws of AI and how to combat them

The experts point out that both AI tools and AI-detection tools have their flaws. One of the biggest, according to Önder, is the phenomenon known as hallucination.

“Generative AI tools still continue to make up so-called facts when actually they are not facts. And there are often wrong pieces of information provided. And the problem of just making up references continues in many cases,” Önder said.

“So gen-AI tools continue to have various shortcomings,” she added.

Oleskiw has also seen these hallucinations and calls on people not to rely solely on ChatGPT-type programs to solve problems.

AI companies are working to correct these flaws, but even the best tools are not 100 per cent accurate.

The same goes for AI-detection software, which in some cases fails to identify generative AI-created text, Önder said.

“Faculty members, yes, are strongly encouraged not to just look at the AI detection tool score and then come to a conclusion that this is a gen-AI probability detection tool,” she said.

Another problem occurs when students find sources online, especially commonly used ones, and rephrase them; the material can come back flagged as plagiarized.

“I think it’s really tricky. The worst thing you could do is blame someone for using AI when, in fact, they didn’t,” said Oleskiw.

Moving forward with AI

Oleskiw said the situation is dynamic, making it hard to establish clear-cut guidelines for AI use. 

“The use of AI in the classroom isn’t something we really considered three, four years ago. And now it’s sort of at the forefront of how we’re trying to deliver course material. I think removing AI completely is probably not the right solution,” he said.

Oleskiw noted that some AI software has been designed to insert typos or grammatical errors to make its output appear human.

“An alternative to, I guess, prevent the misuse of AI would be to change the way course material is delivered,” said Oleskiw.

He said that might mean that instead of take-home assignments, which would be at risk of AI-fuelled cheating, students could use AI to help them study for more in-class quizzes and exams.

“It should be easy to complete this in-class quiz where AI would not be helpful. So I think … now that AI is here to stay, we can change the way course material is delivered so that it’s more harmonious,” he added.

Experts say AI is here to stay, and crafting specific, detailed guidelines for its use remains difficult. University faculty believe there are legitimate uses, but want to maintain the distinction between using it as a helper and using it as a producer of content.
