First-year composition (FYC) classes tend to get treated like the junk drawer on campus: a place to store often-useful, sometimes-crucial, miscellaneous skills—skills that all students should have but that don’t fit into any disciplinary drawer. In addition to core reading and writing skills, FYC sometimes includes library visits, community engagement, multimodal experimentation, academic integrity training, and more.
The ubiquity and pliability of the course make it a catchall for a variety of skills and new, so-called literacies. Those of us who teach FYC know how important the course is to students’ writing development, but it’s a struggle to address everything. And now, here comes “critical AI literacy.” Can we fit that into the already overstuffed drawer that is FYC? Should we? What is critical AI literacy, anyway?
I think we should be teaching critical AI literacy in FYC or other writing-heavy gen-eds. Here, I’ll explain why, along with some ideas for how to do it.
Why teach about AI in writing classes?
On the one hand, some uses of AI undercut the goals of our writing classes. AI promises to summarize, write, research, rewrite, and respond—much more efficiently than human writers and readers can. But efficiency and outsourcing are counter to the work we ask students to do as writers. On the other hand, students are already using AI, sometimes even when we ask them not to.
Plus: AI is rapidly being integrated into professional writing contexts. And large-scale public and private investments, coupled with a permissive regulatory environment, suggest that AI will not be extricated any time soon from the professional contexts we’re helping prepare students to enter.
Of course, there are good reasons not to simply kowtow to the tech industry, power, money, the concept of college as vocational training, and so on. The Refusing GenAI in Writing Studies guide offers more justifications and strategies on that point (which I’ll say more on in a future post!).
Yet it’s clear that students will need to know something about AI. Teaching critical AI literacy in at least some of our writing classes—a place where students are already using AI—can better equip them to responsibly encounter, use, and resist it. The first step to teaching critical AI literacy, of course, is figuring out what it is.
What is critical AI literacy?
“Literacy” tends to be a catchall term for any skill deemed important. We should all have textual, digital, financial, emotional literacy, and so on . . . ! I admit I’m skeptical of claims that everything is a literacy. But “critical AI literacy” has gotten traction, and it’s a good concept for considering how we go about teaching AI in our writing courses.
For me, critical AI literacy is the ability to understand, apply, and assess AI operations, uses, and outputs. This is how I think of it when breaking the concept down for faculty and students:
Functional AI literacy: Can you use AI to accomplish tasks? Can you support your own learning with AI? Do you know how it works?
Rhetorical AI literacy: Can you evaluate AI outputs? Can you tailor its uses and operations to your personal and community values and needs? Do you know when it might be appropriate and inappropriate to use AI?
Ethical AI literacy: Do you understand the risks of AI, including environmental, societal—and even existential risks? Do you understand your own personal risks?
I’m building here on Stanford University’s AI literacy framework; I’ve added my own questions to Stanford’s organizing terms to help refine what I like to emphasize in classes.
Students should know how to use AI well—have some literacy in it, if you will. They should also be able to discern when to use it, when to avoid it, and how to evaluate it—that’s the critical part of “critical AI literacy.” Maha Bali has a great breakdown of the term’s three parts (critical, AI, and literacy), noting that “critical” means both questioning and critiquing.
The MLA/CCCC Joint Task Force on Writing and AI also has a working paper on critical AI literacy that calls on educators to shape the culture of AI use by teaching it responsibly. Every actor in higher ed, from students and faculty to departments and administrators, should aim for greater AI literacy. The task force provides a detailed list of what this literacy entails, including understanding bias, data security, citation, the limitations of AI, how the technology changes over time, and its potential impact on learning, as well as determining appropriate uses of AI. It also offers specific advice for FYC and ESL programs, career readiness programs, academic integrity offices, and more. The document has been critiqued for accepting the inevitability of AI (see the comments on the document itself), but I find it a thorough, thoughtful, and helpful guide for working toward greater AI literacy.
In the classroom
Ok, fine, some of our writing classes (not all!) should include attention to critical AI literacy. But what does that look like? Here are three activities I’ve used. They can be adapted for writing classes at any level, and you can run them even if you’re totally new to AI. These alone won’t fulfill a critical AI literacy student learning outcome (SLO), but they’re ways to begin.
Can you evaluate AI outputs? A comparative exercise can help students see for themselves the differences between large language models (LLMs) and the rhetorical choices they make when summarizing a text. My favorite way to do this: in class, enter the first paragraph of the Declaration of Independence into the site Chatbot Arena and ask for a one-sentence summary. Chatbot Arena pits two random LLMs against each other in responding to a single prompt, allows users to rate the results, and tracks those ratings on a leaderboard. Students can then compare what’s in the original paragraph with what’s in the summary sentences. Most LLMs change “men” to “people” and erase the reference to “the Creator,” and each one emphasizes different parts of the original text. Chatbot Arena requires no login and is easy to use, and its leaderboard offers another perspective on how LLMs are evaluated. This summary exercise was designed by Tim Laquintano and is available as a free resource here.
When is AI appropriate to use? In my experience, students have an intuitive sense of when it’s inappropriate to use LLMs and often welcome open discussions about their choices. One way to start this conversation is to have students view the Gemini 2024 Olympics ad, “Dear Sydney,” alongside the Google Pixel 2025 Super Bowl ad, “Dream Job.” Both ads feature dads using the Gemini LLM with their daughters, but in “Dear Sydney” the dad has Gemini ghostwrite a fan letter, while in “Dream Job” the dad preps for a job interview by talking about fatherhood. (Some teachers have assigned my blog post on the “Dear Sydney” ad and found it useful for context.) The ads are easy to grasp, and discussing them helps get students thinking about how contexts for writing and AI use differ, and how using AI might shape our personal relationships.
What are some risks of relying on AI outputs? Students have heard that AI can be biased, but they might not have recognized examples of that bias. I’ve started discussions with students by screening Joy Buolamwini’s “AI, Ain’t I a Woman?,” a powerful three-minute performance about errors in AI facial recognition. First, I introduce Sojourner Truth’s speech “Ain’t I a Woman?,” which many students are unfamiliar with. I then mention that AI facial recognition is a controversial technology, one Dr. Buolamwini helped draw attention to through her personal experience and research. We watch the video, and students immediately get the problem with the way AI classifies some faces better than others. To apply these concepts of recognition and representation, I’ve also asked various LLMs to “give me a picture of a university professor” or “give me a picture of a [gender studies / engineering / writing] university professor” and shown students the results to demonstrate how LLMs have different defaults. You can ask students to prompt different LLMs to generate images of a “carpenter” or a “nurse” and talk about the results. Ask: Who do you see represented in these images, and why? What datasets and concepts do you think the LLMs are drawing from, and how do you think their outputs are shaped by the parent company’s decisions? What are the implications of these representations—both in the United States and globally? Here are some slides I’ve used for discussing bias in AI.
Reflection is key in any FYC activity, so I always try to help students process what they’ve learned in discussion or writing. AI gives us a great opportunity to try new things—so don’t be afraid to learn alongside your students.
Thanks for reading AI & How We Teach Writing! Subscribe and come back to hear more ideas for bringing critical AI literacy into the classroom.