
Does higher ed know what to do about AI?

Professors are split on using AI. Some love it, some hate it. What should Miami's policy be?

Six months ago, the release of GPT-4 changed the way students engage with artificial intelligence. For the first time, a student could feed an essay prompt to a computer and receive a 500-word essay in seconds, without paying a dime.

The widespread accessibility of AI writing tools, combined with a lack of research as to what ethical use of these tools looks like, has created panic among some members of the Miami University community and has intrigued others.

I know professors who believe tools like GPT-4 will virtually doom academia and destroy the art of writing. I also know professors who believe students can learn from AI writing tools and are cautiously open to navigating AI use in their classrooms.

Both sides make valid arguments, but the lack of consensus among faculty about the value and ethics of AI has led to vague messaging from Miami on how AI can and should be used in the classroom.

As we navigate our first semester with generative AI tools widely available, Miami has made a brief revision to its academic integrity policy to include “possessing, referring to, or any unauthorized use of” AI tools during an assignment or paper as a possible instance of cheating or academic dishonesty.

This tiny revision prompts important questions. 

Do spell-check platforms and grammar-aid browser extensions like Grammarly count as AI tools? What does authorized use of AI look like in a classroom? Would merely “possessing” GPT-4 on your computer be considered cheating, even if a student wasn’t using it on an assignment?

These questions could be answered by a clearer AI academic integrity policy. 

However, students nationwide are divided on what AI academic integrity policies should look like.

According to a recent Forbes survey, 51% of surveyed students think using AI on an assignment is cheating, while 29% are neutral and 20% think it isn’t.

I’m in the awkward neutral category. If you have AI write your book report: Yeah, that’s cheating. But what if you just used GPT-4 to ask for a definition of a word, and you fact-checked it with other sources? What if you had AI proofread your usage of commas? 

There are upsides to Miami’s integrity policy. It leaves an opportunity for professors to incorporate authorized AI usage into their classrooms, as half of the students Forbes surveyed believe AI can be used ethically on tests and exams.


The only question is: What does ethical usage actually mean?

Miami’s faculty resources offer vague answers, with tips like “get to know the tools” and “AI tools may be used in a productive, creative, and educational manner,” without any specification of what this “manner” looks like.

As a writing consultant at the Howe Writing Center, I understand the apprehension toward embracing generative AI. Many of my colleagues in the writing center world are quite unsure of how AI will change our industry.

I do not want to see the art of writing transformed into a transactional, thoughtless act.

At the same time, the graphing calculator didn’t kill mathematics. It’s not like every piece of writing we do in our modern age is designed to be art — sometimes writing really is cursory (see: the citation page of every paper I’ve written).

Rather than leaving professors with limited resources on how to incorporate ethical AI usage into classrooms, universities should embrace this unique opportunity to lead the way in teaching students to skillfully and ethically navigate our AI age. 

AI writing tools can help creative writers brainstorm place names or plot twists. They can help language learners have more conversations in their target language and programmers locate bugs in their code.

Of course, because this technology is so new, AI writing tools need to be approached with critical thinking. They reflect the biases of their training data and are notorious for generating false information or responses that simply don’t answer the prompt.

The possibilities of AI writing tools, good and bad, are endless — universities just need to steer students in the right direction.

By leading the conversation and education around AI, universities can direct students toward ethical use: authorizing AI tools explicitly and requiring transparency, teaching students to fact-check AI responses against additional sources, educating them about bias and intellectual property concerns, and offering actual examples of how AI can work as a brainstorming tool.

By expanding AI education and producing students who can have informed, nuanced conversations about how to use AI effectively, Miami can be a leader in creating a positive future with AI.

AI is only scary if we let it be scary. If we create a generation of students who know how to use AI in productive, ethical ways, we can create a future where AI makes life easier — and not the dystopian robots-versus-humans hellscape we often fear.


Meredith Perkins is a junior from Independence, Kentucky, studying diplomacy and global politics and English: creative writing with minors in French and history. She is also a writing consultant at the Howe Writing Center. She has presented research on writing consulting in virtual spaces at the East Central Writing Centers Association and is currently beginning a research partnership on perceptions of AI among student writers. Her opinion columns have been featured in the Wall Street Journal, Cleveland Plain Dealer, Cincinnati Enquirer and more.

perkin16@miamioh.edu