I love this - so glad to find someone getting into the details of how AI impacts actual assignments and ways to flexibly approach each. The AI resist / embrace framework is also extremely useful when thinking about how to redesign more traditional assessments. The annotated bibliographies example was especially relevant for me as I embark on our traditional research paper with my HS class. Thanks for sharing. These are great resources!
I'm so glad this is helpful to you! As a former high school English teacher, I'm often thinking about how these assignments might play out in HS and early college.
One question I have, since you've been doing so much work in this area: do you think AI may be one of those technologies where schools and institutions will eventually have to take a stand on whether teachers should be incorporating it (or allowing it) in their classes? I totally respect the position - and I don't think there is any other way at the moment - of letting teachers opt out and not permit any AI usage in their classes in 2025. But I wonder how long that will be sustainable.

I remember, a long time ago, a meeting at my school where we discussed how everyone was going to have an email account and would be expected to respond and communicate with parents (yes, I've been around awhile). One of the older faculty members said, can't we just not do it? He wanted to opt out because he didn't want to deal with daily parent communication. That was an example where teachers really didn't have a choice.

It already seems to me a heavy burden on kids when they have to navigate multiple teacher positions on AI, from extremely permissive to outright bans. Pedagogically, I don't know how they make sense of it. Students have to adapt to all sorts of differences in how teachers do things, which is absolutely fair, important, and a good life lesson. But teachers today really can't tell kids, well, you can't use the internet for anything in my class. It would sound ridiculous. My sense is that AI will be more like that.
These are all great observations, Steve. And a very good question about whether schools need to take a consistent stand on AI use. Like you, I do wonder how students navigate the rugged terrain of AI policies across their courses! I think harmonizing policy across departments in a university will prove very challenging. It might be more feasible in a high school setting, where teachers already need to respond to schoolwide policies, state curriculum standards, etc.
Email adoption was a kind of Kuhnian revolution--that is, a number of teachers just waited it out until retirement, and when they were replaced with younger teachers, email became fully standard. AI is moving so much more quickly that the analogy may be limited. In fully proctored settings, it's possible to resist it. But the utility of doing so may be more limited over time.
This is definitely top of mind for me, as the level of AI literacy is so appallingly low among the admin and faculty I know. I was recently playing around with Manus, one of the new agentic models, and the ability to combine text, image, and code generation to complete an almost infinite number of projects is just ... amazing. It's not perfect yet, but I just don't think the majority of educators have any idea what's coming down the pike. Right now, most of the students (at least in HS) don't either. But I can't imagine that will remain true for long. My real fear is that if students leapfrog the teachers on AI advancement, we will be playing catch-up and whack-a-mole in the short-term future, which is no fun for anyone. Glad to see others are trying to stay on top of all this (which is practically a full-time job in itself!). Will be checking in here for your great work. Thanks!
Not only did I ban it, but I had my sixth graders reading articles about problems with AI like it being biased and unreliable. I wasn’t just being a naysayer; I truly think I was offering them some balance.
Annette, this is another fantastic set of tips. I have had students use SCITE, which answers a research question and puts sources into a narrative. I then have them deconstruct the results from SCITE by looking up the original sources to see (1) whether they are, in fact, accurately depicted in SCITE and (2) whether they are adequately put in dialogue with other sources. SCITE can't create a lit review, but it does a great job amassing sources that are "near" each other. Your post is thoughtful and gives us great terms to start thinking with.
Thanks so much for sharing this tool and process, Julie! That sounds like a great way for students to explore sources and augment the kinds of research skills we want them to learn.
I've been struggling with the term "AI Resistant" for a while now because I think it can be perceived as meaning "I'm anti-AI," which of course it doesn't. I think the phrasing "AI Aware" is a wonderful solution to that issue! The thing we actually do need to resist is the binary framing of resistance versus advocacy, and I think that term should help with that. I made this video https://voicethread.wistia.com/medias/2p9dgteqkp showing an assignment structure that works very nicely in our current AI-filled landscape, but I think I'm going to change the name from AI Resistant to AI Aware assessments. Thank you for the idea! :-)
Cool assignment! Student-produced videos are a great way to resist AI, and it's even better to have students comment on each other's work.
*I appreciate the effort behind developing what some are calling “AI-resistant” assignments. However, in my experience, “resisting” AI falls somewhere between humorous and irresponsible—humorous because it suggests we can somehow shield students from tools they’ll certainly encounter outside our classrooms, and irresponsible because it misses the opportunity to guide them in learning to use emerging tools thoughtfully and ethically.
I say this as someone with 50 years in education. The first wave of “resistance” I remember being asked to enforce was against pagers (ask ChatGPT what those were if you don’t remember). Since then, it’s been cell phones, online translators, and now AI. The challenge isn’t the technology itself—it’s how we prepare students to engage with it. Our role, in my view, is not to build walls around our assignments but to help students become discerning and responsible users of any new tool. It would be foolish to think if we successfully coerce them to "resist AI" for our courses, then the battle is won. Tomorrow will bring new and more complex challenges and I want them to be ready to analyze and make responsible decisions because of the experience they have in courses today.
Rather than designing assignments that try to shut out AI, I encourage us to take a step back and ask: What is the deeper learning we want students to take away? Assignments like research papers or annotated bibliographies are not ends in themselves—they’re vehicles for cultivating skills like critical thinking, responsible source evaluation, synthesis, and argumentation. Most students won’t ever compile a bibliography after college, but nearly all of them will need to assess information and make sound judgments in a world saturated with digital tools.
So let’s help students learn how to do that—with the tools they’ll actually have.
* As with just about anything I write these days, I used an AI assistant (ChatGPT in this case) to revise, tighten, and organize my original draft. After completing the initial revision (which I have since edited), ChatGPT asked, "Would you like this formatted as an email, memo, or shared discussion post?" I had it reformat it as a discussion post. However, after consideration I rejected that second version, which I consider my own small exercise in being a responsible user of emerging technology.
I do agree about the term 'resistant' being problematic, because it feeds the notion of a bipolar argument framework, which is almost always far too simplistic. So I am going to start using the phrasing 'AI aware,' which is more neutral and, I think, a better posture toward problem solving. To be 'aware' simply means you recognize a thing and are free to have plenty of complex and evolving thoughts about it.
I really love the way you put this: "To be 'aware' simply means you recognize a thing and are free to have plenty of complex and evolving thoughts about it." Thanks for that framing!
Stress-testing assessments is naive at best and negligent at worst.
See myth no. 8.
https://open.substack.com/pub/needednowlt/p/ten-persistent-academic-integrity
Indeed, one attempt at stress-testing an assignment doesn't guarantee that it's AI-resistant! But working through an assignment by trying to use AI yourself can help you understand how you might want to redesign it.