Canadian university instructors say they are struggling to prevent cheating with artificial intelligence apps like ChatGPT. Unable to win the technological arms race, they are instead changing the way they conduct evaluations.
Professors are de-emphasizing essays and at-home exams and returning to the face-to-face exams that fell out of use during the COVID-19 pandemic. Instructors are also encouraged to talk openly with students about the new technology, set rules for its use, and even make its "writing" the starting point for assignments, treating it like a first draft in need of revision and refinement.
According to academic integrity experts, technology is here to stay, and students must be encouraged to buy into a learning-first mindset.
How ChatGPT and Other AI Tools Will Change the Way Students Learn
Sarah Elaine Eaton, a professor and academic integrity expert at the University of Calgary, said she has been getting calls every day from colleagues asking for advice since ChatGPT went live late last year. The application, created by OpenAI, was made freely available on the internet; prompted with college-level questions, it produces surprisingly fluent, human-like responses.
To date, most professors have only heard of students using AI to generate answers and pass them off as their own, says Eaton. Many instructors are concerned about possible cheating, especially since ChatGPT output is not easy to detect. But the level of threat posed by the new technology is still unclear, she said.
“There is complete moral panic, technical panic. I think we need to take a step back and look at other types of technology that have been introduced,” said Professor Eaton.
“I’ve heard people say things like, ‘This will make students stupid, they won’t learn how to write or learn the basics of a language.’ It’s similar to the arguments I’ve heard about the introduction of …”
In a study conducted by Rahul Kumar and colleagues at Brock University, participants were given short essays that were either written by humans, copied from the internet, or generated by AI. When shown AI-generated texts, about 60 per cent of participants either believed they were human-written or were unsure. Participants with higher levels of education were also slightly more likely to misjudge them, says Professor Kumar.
Participants gave the AI-generated material an average B-minus rating, he said. The study is relatively small, with 135 participants, and will soon be published in a peer-reviewed journal; it is being expanded this year.
Allison Miller, an academic integrity specialist at Toronto Metropolitan University, said she has never seen so much interest across the university in the work of the Academic Integrity Office. She helped create a guide for instructors on how to keep up with the new technology.
Out of necessity, this guide is what Miller calls a “living document” that can be updated constantly.
Miller said changes are already happening when it comes to student assessment, especially essays and take-home exams. Some professors are trying to minimize the writing done outside the classroom, or are adding oral assessments.
She said the way instructors mark would have to change: flawed but well-written work could no longer be given an easy B.
Randy Boyagoda, Vice Chancellor, Faculty and Academics Officer at the University of Toronto, said the university, by its very nature, can play the long game and see how things develop. Essays aren’t likely to disappear anytime soon from the disciplines that have relied on them, he said, but the university is encouraging faculty to make sure their assessment methods reflect what they want students to learn.
“We have experts in writing centers across the university who are thinking about this. Our colleagues in research are thinking about the implications of this. There are multiple, intersecting conversations going on across the university about this,” said Boyagoda.
Academic integrity is something U of T and all institutions monitor and take seriously, he added. Generative AI is a new concern on that front, but it’s too early to know how it will play out.
Companies such as Turnitin, maker of a well-known plagiarism detection tool, have created applications aimed at detecting AI-generated writing. But many experts warn against getting caught up in a technological arms race.
“This technology isn’t going anywhere. Trying to ban it is futile,” Professor Eaton said. “There is absolutely no way to stop this.”
She encourages instructors to make their AI expectations clear to their students. Its use should not automatically be considered cheating, she said, and teachers should discuss how students can cite AI-generated results. Teachers may even want to let students try out the app, see how it responds to test questions, and assign them to improve its output.
Cory Scurr, Academic Integrity Manager at Conestoga College, said: “Our students will be working in 2030, 2040, 2050, and this technology will be there. How can we teach our students to use it ethically and properly?”
“The amazing thing it does is make you think about assessment and how you might or might not want to change it.”