In the dozens of faculty workshops I’ve given about teaching and AI, the question of policy comes up all the time. Faculty know they need a policy, but they’re not always sure how to go about it. How do you balance university and program guidance with the learning goals of specific classes? What’s the right AI policy?
I was wrestling with this question as a writing program administrator in early 2023, when I received a gift from an unlikely source: the preeminent science journal, Nature. Just a couple of months after the launch of ChatGPT, Nature came out with a policy disallowing authorship credit for AI:
[N]o LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.
I couldn’t believe it: embedded in this AI policy is a theory of authorship and accountability! I seized on this theory when crafting our policy at Pitt.
I wanted a policy for our Composition program that would accommodate instructors who embraced AI use in their courses as well as those who resisted it. Nature’s idea of responsibility and writing fit with both the constraints of our academic integrity policy and the diversity of approaches among our instructors. So, I included this in our AI policy: “You are the author of your work for the course and authorship means you take responsibility for your words and claims, regardless of which tools you use.” (See the full policy at the end of this post.)
The challenges of AI for writing classes aren’t resolved by policy alone, so—I hate to say it—there is really no right AI policy. Instead, there are many ways to write an AI policy for a course or program—as demonstrated by Lance Eaton’s exhaustive online collection, which includes 186 policies! (Helpfully, Eaton provides an overview of the policies on his Substack.) Among all these choices and difficult balances, this post will help you to create thoughtful guidance for students—the first step toward helping them make good choices about AI.
Why write an AI policy for your course?
Students are looking to instructors for clarity. Inside Higher Ed reported data from May 2024 indicating that only about a third of students (31%) knew when or how they could use AI for coursework, and those who did credited faculty, not university-wide policies, for that knowledge.
In my survey of Pitt composition students, which I’ve administered every semester since December 2022, students overwhelmingly (more than 90%) ask for specifics on how they’re allowed to use AI in their courses. In response to my open-ended question, “What would you like to tell your instructors at Pitt about any of these [AI] writing technologies or your use of them?,” many students asked for clearer guidelines, saying things like:
“Make clear guidelines about usage.”
“Create an outline of how you want them to be used.”
“I think it is okay to allow students to use these programs in class, but a class policy should be outlined in the syllabus on how they should be used.”
“I want to know if it counts as cheating to use them.”
Many students want to follow the rules of a course. But generative AI is so new, university policies are so broad, and students receive so many mixed messages (think: AI marketing, their friends, a bunch of different instructors) that it’s hard for them to know what’s right for each course without explicit guidance.
In a recent opinion piece for Inside Higher Ed, Dan Cryer, an Associate Professor at Johnson County Community College in Kansas, describes the uneven terrain a student must now navigate as they consider using (or not using) generative AI. Even when we think we’re being clear, we’re often not specific enough. Many policies require an “AI acknowledgment statement,” for instance, but just how much should a student disclose? If an AI platform offers more help than the student asked for, should they try their best to ignore it? Cryer calls this “responsibilization”: universities are shifting the responsibility of maintaining academic integrity, once shared between teachers and students, down to students alone. Students are bewildered, he argues, and it’s not fair to ask them to navigate the ethical challenges of generative AI without good guidance.
While I think we should emphasize students’ responsibility for their work and their role in maintaining academic integrity, I agree with Cryer that it’s our job to give them guidance. Cryer suggests two ways instructors can alleviate the responsibility burden that AI puts on students: (1) talk to them empathetically about this challenge, and (2) provide spaces to write that are free from AI, which gives them a break from relentless decision-making about AI. (My previous post on AI-aware courses offers more strategies for helping students make good choices about AI.) Being clear about what we expect in our courses is the least we can do to support students newly “responsibilized” by the challenges generative AI presents. Beyond writing a policy, we also need to discuss those expectations with them.
Connect your AI policy to course learning goals
We like to think of our writing courses as important: they teach crucial skills and even inspire students! And they are important! But it’s also true that students are overburdened and struggling to balance their time and commitments. A writing gen-ed might not always be at the top of their list. As they make decisions about AI use, how do they balance their time, their attention, and their need to build skills?
A policy on a syllabus is a good starting tool—but it isn’t enough. Syllabi have become pseudo-contracts: a lot of fine print that describes terms, but few students actually read them. To help students understand what a policy means, we need to talk about it with them—offer examples of acceptable and unacceptable uses of AI and explain why. Students responding to my survey often express gratitude for these conversations. One wrote, “I appreciate that many instructors are willing to have productive conversations with their students about AI in the classroom.”
Students are always making choices about whether and how to use AI. Cryer reminds us that, ultimately, it’s up to them. And yet we can and should support their agency. For example, Tim Laquintano and I wrote a letter to students, explaining some of the workings, benefits, and drawbacks of generative AI. Jennifer Sano-Franchini, an Associate Professor at West Virginia University and an advocate for refusing AI, offers her students a detailed background on generative AI’s ethical and functional shortcomings. Ultimately, students are more likely to make good choices about AI if they see their work connected to course outcomes and their own learning goals.
In the classroom
To develop or refine your own policy, consider what’s most important to you and your course, and how you might use a policy to emphasize those values. Since my courses generally allow for some uses of AI, I value flexibility in process and emphasize students’ ultimate responsibility for the work. What are your key values?
Here are some other tips as you hone your generative AI policy and implementation:
- Think about how your assignment design works alongside your policy to help students make good choices about their uses of AI in your course.
- Discuss your policy with students at the beginning of the term. When you introduce a new assignment, discuss how the policy applies to it.
- Consider including students in your policy design, or having them come up with examples of acceptable use for specific assignments.
- Offer avenues for students to ask questions about your AI policy.
- Work with colleagues or your teaching center to harmonize policies as much as possible.
Below are some examples of AI policies I find helpful. Feel free to share your policies or policy considerations in the comments!
Some example AI policies
My AI policy (used in composition classes at the University of Pittsburgh):
The use of generative AI writing tools (such as ChatGPT, Claude, Grammarly, Gemini, Copilot, or others) is allowed in this class within specific contexts and only if such use is properly acknowledged. Assignments for the course have been designed to help you develop as a writer, and some of them may call on you to practice writing with the help of such tools. As your instructor, I will assume that any use of these tools will be only within the contexts the assignment allows (for instance, you can use ChatGPT for brainstorming if the assignment asks you to do so). You must acknowledge the use of AI in your assignment in an "Acknowledgement of AI Use" statement that:
- Specifies which technology was used and on what date (ChatGPT, Claude, etc.)
- Identifies the prompts used or includes links to your chats
- Explains how the output was used in your work
The use of AI outside of contexts where the instructor specifies its use, or failure to acknowledge any use of AI technologies in your work, will be considered an academic integrity violation and addressed according to Pitt’s Academic Integrity policies. You are the author of your work for the course and authorship means you take responsibility for your words and claims, regardless of which tools you use. Please see me if you have any questions about this policy.
Bryan Hanks, in the Anthropology Department at Pitt, offers specific guidance on AI for weekly assignments listed on his syllabus for “Peoples of the North.” To augment these specific policies, he goes over acceptable use in class. A couple of example weeks are below:
Week 3 AI GUIDELINES: You may use AI to help explain complex climate science concepts from the readings. Practice writing prompts that extract specific information rather than general summaries. Document all AI prompts that you use and your own analysis in your weekly notes.
Week 5 AI GUIDELINES: You may use AI to explore these concepts (or those you choose), the types of datasets that may be available for analysis, and academic resources that would be useful in developing your story map. Document all AI prompts that you use and your own analysis in your weekly notes.
The Refusing Generative AI in Writing Studies Guide offers detailed guidance for developing policies that help students choose not to use AI. One example is the policy from the University of Arkansas’s Program in Composition and Rhetoric:
Using ChatGPT for Your Work in This Class: This is a course that asks you to work through the writing process and develop your own strategies for how to tackle each step of the process, from idea generation to drafting to revision to reflection. For this reason, I ask that you do not use artificial chatbots like ChatGPT to complete any portion of the assignments for this class.
In the AI policy for her Science Fiction class, my Pitt English colleague Sarah Hammock limits the use of AI and emphasizes how much she values students’ voices. She notes that her students have responded well to this policy:
AI is rapidly becoming nearly ubiquitous in reading and writing. In this class, alongside our other work, we will consider when AI tools will be helpful, how to use AI tools effectively, and when using AI might negatively impact your learning or performance.
We will also discuss some of the critical issues that AI raises: How do you want to use AI in your work and daily life? How might AI affect the society we are building for ourselves and our descendants? How can science fiction help us imagine and guide the ethical and human impacts of AI technology?
I will permit or encourage you to make use of AI tools when I believe it could help you practice useful technology skills or save you time and energy.
In turn, believe me when I say that I prefer to read and interact with text and ideas written in your own imperfect human voice, without polishing, cleanup, clarification, or restatement from Grammarly, ChatGPT, or any other AI assistant – and seeking aid from these sources will not improve your grade in this class.
The overall position of this course is that you are a reader, writer, and thinker, and AI is a tool you may sometimes use to supplement – but never replace – your reading, writing, and thinking.