What about AI and Academic Integrity?
How to talk with students who have overused AI in your course
When working with writing faculty on the challenges of AI, I think often of the poem by Joseph Fasano, “For a Student Who Used AI to Write a Paper,” which pairs gentle chiding with genuine bewilderment. In it, the narrator sympathizes with the demands on the student’s time but then asks: “what are you trying / to be free of?” Fasano captures the frustration and disappointment many of us feel when we receive an AI paper: we’re concerned not only about academic integrity, but we may also be asking existential questions about why we’re here at all.
It’s important for us to acknowledge our own feelings when students overuse AI, as Fasano does. We’ve put hard work into a course, maybe even overhauled it to be AI-aware, and yet here it comes, the AI paper: turned in at 11:58 p.m. with its “delving into” an issue using tripartite examples and smooth, even-handed arguments that seem disconnected from anything discussed in class. We can feel disappointed and even tell our students that.
And yet, our response to students can’t end there. As instructors, we have an obligation to support student learning, including how they make decisions about AI. So how do we talk with students when we think they’ve overused AI?
In this post, I offer a few strategies for opening up that difficult conversation, along with a perspective on academic integrity in the age of AI.
Why academic integrity matters
When we confer a degree, we’re certifying that a student completed a program designed to build knowledge and skills. Academic integrity policies protect the reputation and authority of the institution.
But academic integrity policies also help students. They help to guide student choices and communicate the expectations of an academic community. When a student joins our academic community, they have both an opportunity and an obligation to learn its practices. In turn, they should be able to depend on others in the community who are learning the same practices—they can expect the same from peers in their collaborative work, class discussions, and assessments.
Students get frustrated when their peers don’t meet community expectations. It feels unfair when a peer coasts on groupwork or skips out on assignments. Students in a focus group study we ran at Pitt this spring tended to think that using AI resulted in better grades. So, if a student is following the rules about AI, it feels unfair when their peers aren’t. Even students who understand the intrinsic value of what they’re doing can be annoyed when a peer gets a better grade by cutting corners.
And yet students aren’t in a position to call out a friend for overusing AI. It’s our job to pay attention. Not to police—but to notice, acknowledge, and start a conversation. Academic integrity policies are only a starting point.
Here’s where I should note that the work of paying attention can’t be outsourced to AI detectors. Detectors can flag phrases more likely to be produced by AI, but they only work on writing outputs: they can’t catch students using AI to outsource the thinking, outlining, or reading for their writing. They produce false positives, especially for non-native English speakers. But worse is what Jenae Cohn notes from her perspective as executive director of the Center for Teaching and Learning at U.C. Berkeley: “While we recognize that A.I. detection may give some instructors peace of mind, we’ve noticed that overreliance on technology can damage a student-and-instructor relationship more than it can help it” (as quoted in the New York Times). AI detectors cannot save us.
My colleague Jeff Aziz, who is extremely sharp on AI and who serves as our Academic Integrity Dean at Pitt, says: “A first academic integrity violation is an opportunity for a conversation.” He encounters students who felt desperate and made bad choices. He says he hasn’t yet caught a student trying to skip through the entire course of study using AI; instead, he sees students “experimenting” with the boundaries of academic integrity. Students enrolled at Pitt are in a community of people responsible to each other—which is why academic integrity is important—and sometimes they need to learn that, he says. After his conversations, Aziz sees very few second offenses.
In the classroom: an opportunity for conversation
We’re all familiar with the “teaching moment,” when something unexpected happens and we turn it around so students can learn from the situation. So, when you get that AI-suspicious paper, let your conversation be a moment for teaching rather than punishment. Rebecca Moore Howard, who argued against the “academic death penalty” for plagiarism, advised: “often a pedagogical rather than judicial response is appropriate.” We must separate our affective reactions (disappointment, anger, betrayal) from our pedagogical responses.
When you receive the AI paper, first ask yourself why you think the student used AI on this assignment. Did the writing not match what you expected? If so, why do you think so? What if you’re not sure whether they overused AI or not? It’s never a bad thing to talk to students about their writing, so even if you're not sure, it's okay to reach out to them. By framing your outreach as conversation rather than accusation, you can use it as an opportunity to teach them about writing.
In your conversation:
Ask the student to expand on the ideas they wrote about: “tell me more about…” or “what were you thinking when you wrote this?” Whatever they turn in to you, they are responsible for their words. If they are unable to account for their own work, then they aren’t fulfilling what you’ve asked.
Ask them to tell you about their writing process. Are they taking notes by hand? Are they keeping track of sources in a separate document? Are they seeking feedback from others, including friends, writing tutors, and AI? You can ask them directly: “what technologies did you use in composing this? Did you use AI at any stage of your writing?” Try to figure out if they see themselves as collaborating with AI, or if they’re looking for direct outputs (see Anthropic’s study of student uses of Claude for more on the difference).
Probe the context of the student’s choice. In my research, I’ve seen that students overuse AI when they’re pressed for time, when they don’t find work meaningful, and when they’re in over their head. Rather than asking a student to explain their choice, you can begin by asking about the context of their work in your class: how are they scheduling time to complete it? What are their interests and goals, and how do they see work for your class related to those goals? Are they finding your course too challenging? (See The Opposite of Cheating: Teaching for Integrity in the Age of AI for more contexts and responses.)
Remind them of your policies and those of the university. Make sure they understand what’s allowed in your course. (Ideally, students are involved in making these policies, as Sarah Elaine Eaton argues.)
Remember that it’s impossible to prove a student has used AI in their writing unless they tell you they have done so. AI detectors aren’t reliable, ChatGPT won’t tell you truthfully whether it wrote a paragraph, and you can’t point to an original source text the way you can when a student copies from a website.
If you think a student has used AI in a way that’s inconsistent with your AI policy, consider letting them retry the assignment or revise what they’ve turned in, or otherwise find a way for them to learn from the assignment and still get some credit for their work. Reach out to colleagues, department chairs, your Center for Teaching and Learning, or Academic Integrity officials for support.
And then sometimes, it just doesn’t work. You spent two weeks describing the AI policy, showed examples of possible collaborations, and asked for disclosure statements, yet the student still turned in a stiff paper and refuses to admit it’s AI or to talk openly when you ask about it. The student just isn’t ready for the conversation or the learning.
One teacher online lamented that after he failed a student for an AI-written research paper, she emailed him back with an AI-written apology. He received a lot of suggestions for his next move, but my favorite was a prompt for ChatGPT: “Write an email, from YOURSELF apologising to a student that she failed her research project because she used chatGPT to produce it.” Follow the link to try the full prompt yourself! The results don't disappoint.