Great piece! Having an AI policy provides an opportunity to improve channels of communication, empathy, and accountability (i.e., trust) between learner and instructor. Your post has inspired me to take another look at my own AI policy as part of an overall educational philosophy (https://benjaminlstewart.notion.site/AI-Policy-37e7acb760c74de48e05114551518b25?pvs=4).
Valuable article, and I appreciate your insights and contributions. I'm still surprised that so much of the "AI policy" work centers on its use for assigned classwork. It is a bit unbelievable that teachers and professors think their classes define the totality of a student's world. Is it productive to perpetuate the notion that students won't use AI outside of class just because a class once told them it wasn't allowed? Instead, as educators we have to come to grips with helping students learn to be responsible users of emerging technologies -- and to take responsibility for their work whether they used AI or not. While teachers may still be able to pull up the castle drawbridge and keep students inside the walls during class, we all have to realize that no one will stay in the castle for long.
Great advice! Faculty absolutely need a policy; without one, students are flying blind. That's not only unfair, it's likely to create a mess at some point. In case you're interested, here's how we created our college's policy. We tried to maximize flexibility while still giving students proper guidance.
That's a great guide for writing a policy! I like your emphasis on developing a flexible framework, and I see you are also highlighting *responsibility* as part of a policy.
Thanks! When we used the framework to develop our college policy (business school), there was virtually no push-back from faculty, which is kind of amazing.
Love the examples at the end. They give us plenty of choices as we figure out how to approach it ourselves...
https://aigoestocollege.substack.com/p/creating-an-ai-policy