When ChatGPT made its explosive debut on the internet just over a year ago in November 2022, Justin Klein’s friends were quick to put this artificial intelligence (AI) technology to the test, literally.
“During my data structures class, we were studying for the final,” the junior computer science major at Miami University said. “My friends were talking about how they were putting some of the practice questions into ChatGPT, and it was generating very accurate responses.”
In the year since then, AI technology has inspired countless think pieces about the future of education and forced professors to adapt.
Tim Lockridge, an associate professor of English at Miami, said he addresses AI in each of his syllabi. Rather than imposing a blanket ban on the technology, though, he focuses on defining acceptable uses for his students.
“To me, it’s a tool, and writing is something we do with tools,” Lockridge said. “We have to have tools to write; without tools there is no writing. Those tools range from pencils to pens to word processors to now tools like these large language models … The advice I give to students is that the tool has to align with the job and your goals for the job.”
If a student is struggling with a dense theoretical reading, Lockridge said, an acceptable use of AI in his class might be running the text through a tool that generates an abstract. Used that way, AI can act as an entry point to a difficult text, but Lockridge said it shouldn’t be a final step. Instead, students should assess the summary and use their own analytical skills to complete the reading themselves.
One of Lockridge’s classes this semester, ENG 171: Humanities and Technology, focuses on how various types of computer technology, including AI, relate to the humanities. He said students may use AI to help write summaries, outline arguments and make sense of readings, but it won’t be the endpoint of their learning.
“I’m personally interested in tactical uses of that tool to help us learn, but the concern that I have and I think many people do is that writing is thinking,” Lockridge said. “For me, if an activity is to help us do writing as thinking, then we need to make sure that the tools we’re using serve that goal of helping us to think through complex arguments and complex texts.”
In Klein’s computer science classes last semester, his professors similarly allowed students to engage with AI, but with specific restrictions.
“Using AI to generate code in order to solve a problem was not allowed,” Klein said, “but [using it] as a tutor, getting an answer to a question, was fine. But if you put in AI-generated code and submitted it, they wouldn’t allow that.”
Some professors have outright banned the use of AI in any form in their courses. Brenda Quaye, assistant director for academic integrity, said Miami is intentionally leaving the decision to individual faculty members rather than creating an overarching AI policy for every class.
“For me coming from the academic integrity perspective, it is similar to, you know, some faculty members will do open book or open note tests, some will not,” Quaye said. “Some will allow calculator use. Some will not … Like any of those parameters that instructors set within their courses to meet the goals and the purpose of the assignments, they can make decisions about how and when they use AI or don’t.”
Quaye said that last semester, 35% of her office’s 225 academic integrity cases (roughly 79) involved potential unauthorized use of AI. Typically, students receive a zero on an assignment completed with unauthorized AI use, and depending on the weight of the assignment, they may face an additional 5-10% reduction in their overall course grade. In more severe cases, students could fail the class.
Beyond imposing sanctions, Quaye makes an effort to talk with students about why they felt the need to cheat and to offer alternative strategies. She said most students who come through her office cheat because of procrastination or a lack of confidence in their writing, and that in those situations, turning in partial or late work usually leads to a better outcome than cheating.
“I have a lot of conversations about trusting oneself,” Quaye said. “Professors don’t really expect PhD-level writing in an undergraduate course. They expect undergraduate student writing, and there are all different levels of that. It’s really apparent really quickly when it is not a student’s voice, and faculty members pick up on that pretty fast.”
In many cases, Quaye said, AI can actually make writing worse because it comes off as unnatural. For students who use AI to help with coding, she said, most cases are caught because the code includes elements that haven’t been taught or don’t follow the assignment instructions.
Klein has experimented with AI for coding himself, and he said it isn’t infallible yet. He may use it as a starting point for a problem, but he said today’s programs struggle to generate fully correct code that passes test cases. Still, he expects to use AI in his career after college.
“If you needed to write code for a website, you can have [AI] generate the basic backbone of the front end … that it’s really good at because it’s not a complicated task,” Klein said. “It helps reduce tedious tasks like that.”
For Lockridge, the biggest deterrent against unauthorized AI use is simply the damage to students’ learning.
“If you want to cheat the system, you can cheat the system and get away with it for a certain amount of time,” Lockridge said. “We have to maintain standards and we have to encourage people to do the right thing, and there’s teaching involved there, but how do you train people to find value in the work? That’s got to be the center of it.”