Is “everyone” really “cheating their way through college” as the recent New York Magazine article claims? The stories James D. Walsh tells of students skipping out on reading, writing, coding, and pretty much all of the intellectual work of college make for a good viral article. But what does the data say about student uses of AI?
Student data tells us a more complicated story. For example, from my own study team's work talking to students across four of the University of Pittsburgh campuses, I learned that: (1) students often think they know about AI because they’re users of AI; (2) students need more examples of what productive AI use could be because most of them are finding uses on their own—not all of which seem great; (3) most faculty have no idea how prevalent, varied, and complex AI use is among students, and students sense that gap.
So, how prevalent, varied, and complex is AI use among students? Recent reports from AI companies, polling organizations, and educational companies can help us understand. In this post, I’ll give an overview drawing on 10 reports that together provide a good snapshot of this moment. (You’ll find a list of links and brief descriptions of the reports at the end of this post.) Read this post once to get the gist, and then keep it bookmarked for arguments you might need to make to support faculty training or new course development, or to respond to administrative pressures. Data matters!
A note about the sources: with the exception of one study (HEPI), the reports I’m drawing from are focused on the United States—the biggest educational market. It’s also worth noting that two reports are from AI companies (OpenAI and Anthropic) who have recently announced new products and partnerships with American educational institutions. These AI companies are actively tipping the scales to encourage student use through partnerships and promotions (ChatGPT Plus is free for students through May 2025, and Gemini Advanced is free for college students through “finals 2026”). The reports from OpenAI and Anthropic both end by emphasizing the importance of teaching students to use AI responsibly, and OpenAI’s report is particularly focused on promoting government and educational sponsorship of AI literacy programs for students so that they can prepare for the workforce. So, you know, grains of salt.
How prevalent is AI use among students?
Studies vary widely in how they measure student AI use, but it’s highly prevalent and increasing dramatically. And we’re probably not seeing the full extent of it: only 30% of students thought their instructors were fully aware of all of their uses of AI (Educause), and just 54% of higher ed leaders felt their faculty were good at detecting students’ AI use (AAC&U). These stats help paint the picture:
79% of Gen Z use GenAI, and 47% do so on a weekly basis (Walton-GSV-Gallup). Teens 13–17 doubled their use of ChatGPT for schoolwork from 2023 to 2024, from 13% to 26% (Pew Research). The UK group HEPI also reported a surge in student use over a yearlong period: from 66% in 2024 to 92% in 2025.
Students are increasing their use of AI for assessments: from 53% in 2024 to 88% in 2025. Use of AI to generate text has doubled, from 30% in 2024 to 64% in 2025, and almost one-fifth of students report using AI-generated text directly in their work (HEPI).
Surprisingly, 43% of students reported not using AI (Educause). This number is high, and Educause suggests students might have underreported their AI use. My most recent survey data (December 2024) indicates that 32% of students never use AI.
Very few students totally resist AI—most students in my focus group study were using it in at least some capacity. If I had to estimate use based on the focus groups, I’d guess >90% of students are using AI.
AI use among students is influenced by instructor policy, although policies aren’t fail-safe. Whereas 72% of students reported not using AI if it was prohibited, only 6% of students didn’t use it if it was allowed (Educause). My survey data from Pitt students indicates that while 59% of students used AI when instructors didn’t specify a policy, 12% of students still used it when it was disallowed.
The conclusion I draw from all of this is that there is widespread use of generative AI among undergraduate students and considerable variability in reporting of that use. Almost all students have tried it. Some students are holdouts, or they may use it so infrequently they say they don’t use it on a survey.
What are students using it for, and why?
Students turn to AI to save time, boost their grades, and get help quickly (HEPI, GenAI Conversations) as well as to avoid work they find less meaningful (GenAI Conversations). Some of the students we talked to in GenAI Conversations said they were more likely to use it when a deadline loomed. Others said they used it for feedback because it wouldn’t judge them. HEPI also cites personalized support, improving AI skills, and improving learning as factors in students choosing to use AI. Half of students felt that using AI improved the quality of their work (HEPI). Our study team also heard from students who felt AI improved their grades, so they saw it as necessary for their success and for competing with peers. Their responses made us think about how we might reframe office hours and Writing Center consultations, and how we might rethink the role of grades and deadlines in our classes.
Students use AI for every aspect of the writing process: generating ideas, aiding research, summarizing readings, drafting text, revising, and editing. Educause notes that common uses include brainstorming (33%), refining ideas (24%), and organizing (24%). Top uses of ChatGPT include starting papers, brainstorming, summarizing texts, and editing writing. Claude sees similar uses among students: editing essays, generating practice questions, and summarizing academic texts. Anthropic notes that students also use Claude for technical explanations or solutions to academic assignments, including debugging code, solving math problems, and translating or proofreading content between languages.
Anthropic’s report uses Bloom’s taxonomy to categorize student use and indicates that students are turning to AI for higher-order cognitive tasks, namely creating and analyzing, though much less so for evaluating, another higher-order task. Students used it about half of the time for what Anthropic calls “Direct answers” and half of the time for “Collaborative conversation.” Direct answers may raise academic integrity concerns, and Anthropic gives a few examples they were concerned about: providing answers to multiple-choice tests, rewriting texts to avoid plagiarism, and providing answers to English language test questions. Collaborative conversations might include asking for feedback or for explanations of concepts.
Because Anthropic’s analysis points to heavy use for higher-order cognitive tasks, we may worry that students are offloading those tasks to AI rather than developing the associated skills.
As a teacher, I feel more comfortable with students using AI in collaborative ways than for direct answers, although I worry about them using AI as a substitute for feedback from peers, tutors, and instructors.
Are there differences across fields and demographics?
AI use in the humanities is lower than in the sciences and business (HEPI, Anthropic), perhaps because of the humanities’ attention to authorship and originality of expression and a greater stigma associated with AI use. And while some students perceive AI as a boost for their grades, they see the effect as smaller in their humanities work (HEPI). Computer science students appear to be the heaviest users (Anthropic: 38.6% of conversations, though the field accounts for only 5.4% of US bachelor’s degrees), while humanities students are among the lightest (Anthropic: 6.4% of conversations versus 12.5% of US bachelor’s degrees).
Women are adopting AI at a lower rate than men. There appears to be a greater fear of cheating among women, and men tend to adopt cutting-edge tech at a faster rate than women (Walton-GSV-Gallup). A 2023 BestColleges survey found 64% of men reported using AI tools for coursework versus 48% of women.
There are some statistics on adoption across race and income. According to Pew, Black and Hispanic teens 13–17 were more likely than White teens to have used ChatGPT for their schoolwork in 2024 (31% versus 22%). In 2023, use was even across these groups and lower overall (11% each). At the same time, White teens (83%) were more likely than Black (73%) and Hispanic teens (74%) to say they’ve heard about ChatGPT. The Gen Z survey reported slightly different results: Asian and Black Gen Zers were more likely than White or Hispanic Gen Zers to use GenAI frequently. Income also shapes awareness: 84% of teens in households making $75,000 or more have heard of ChatGPT, versus 67% of teens in households making under $30,000 (Pew).
The AAC&U report on higher ed administrators (January 2025) indicates that AI use among faculty trails behind that of students—and that of administrators. Tyton Partners reported the same phenomenon in Spring 2024: most students were familiar with AI or used it regularly, whereas 50% of advisors and other frontline staff had never used AI. Administrators and instructors who had never used AI were more likely to think it would have a negative impact on student learning than those who used AI regularly. That administrators tend to use AI more than faculty might explain the July 2024 findings from The Chronicle of Higher Education: administrators are more excited than faculty about AI integration in education.
What concerns students about their AI use?
Students have mixed feelings about GenAI. Many are anxious about potentially losing critical analytical skills (49%, Walton-GSV-Gallup). More Gen Zers feel only anxiety or anger about AI (30%) than feel only excitement or hope (26%), and 17% say they feel both positive and negative emotions about it. The Gen Z survey also indicated strong preferences for human doctors, drivers, and tutors, countering some claims that humans won’t be needed for those jobs in the future. So, Gen Z may be using AI frequently, but they are not all-in.
Students see a mismatch in their perceived future with GenAI and the training their institutions are providing. The majority of students in the Educause report (55%) expected to use generative AI in their future career, but only 20% of them felt they were getting appropriate training from their institution. Students who felt their institution had a cutting-edge approach to technology, including AI, felt more prepared for their career than students who felt their institution was behind the times (Educause).
In the classroom
So what does all this mean for our work as teachers? What actions can or should we take as one term winds down and the next looms ahead? Here are four areas I’ve been reflecting on as I plan for the term ahead.
It’s important to acknowledge that AI is everywhere for our students, and the majority of them are already using it in their coursework. Even if they don’t use it, their friends do. Students in our focus groups indicated that even if they were opposed to AI, or if there were course policies against it, they used it “just a little bit.” They are constantly making choices about AI use. Sometimes, though, they can’t choose: their apps and educational software foist AI on them, a move students called “unnecessary.”
Students benefit from open conversations about AI in our classes. Their uses are so varied—sometimes productive and sometimes creative—but not always what we’d call beneficial to their learning. Students need concrete examples of what constructive AI use looks like for specific assignments (see my previous post on AI-Aware Teaching Examples).
We need to offer transparent assignments and explain why the work should matter to students. Work that students see as “busywork” is a prime candidate for offloading to AI. If an assignment isn’t clearly connected to student learning outcomes, it should go. If it is connected, students need to understand exactly how. Channeling Marie Kondo, I’ve been looking closely at my assignments and asking myself which are likely to spark meaning—and even joy—for my students.
We should make students feel more welcome. Students sometimes choose AI in contexts where human feedback or their own work might benefit them more. They turn to AI for affective reasons: to avoid being judged, to improve their perceived self-efficacy. They tend to avoid office hours. How can we truly invite them in? Or, for students who turn to AI to meet deadlines or make better grades, would flexible deadlines and alternative grading schemes help refocus them on achieving learning outcomes?
Sorting through all this data, I think about the public service announcement “The More You Know.” The more we know about how students are using AI, the better we can do for the students learning now.
Thanks for reading AI & How We Teach Writing! Subscribe and come back to hear more.
Studies cited
AAC&U. (2025, January). Leading through Disruption. https://www.aacu.org/research/leading-through-disruption. NOTE: This report focused on higher education leaders, 334 of whom responded to the survey. Institutions were diverse in terms of student population size.
Anthropic. (2024, October 26). Anthropic Education Report: How University Students Use Claude. https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude. NOTE: Based on one million anonymized student conversations on Claude.ai. Anthropic relied on .edu email addresses, so the sample could also include staff and faculty conversations, and it misses student conversations under non-.edu addresses.
BestColleges. (2023, November 22). 56% of College Students Have Used AI on Assignments or Exams. https://www.bestcolleges.com/research/most-college-students-have-used-ai-survey/. NOTE: A 2023 survey, which included 1,000 undergraduate and graduate students.
The Chronicle of Higher Education. (2024, July). How Generative AI Is Changing the Classroom. https://www.chronicle.com/featured/digital-higher-ed/how-generative-ai-is-changing-the-classroom. NOTE: This report is based on faculty interviews (including me!) and surveys of higher ed administrators and faculty.
Educause. (2025, April 14). 2025 Students and Technology Report. https://www.educause.edu/content/2025/students-and-technology-report. NOTE: The report has a section on generative AI. They surveyed 6,468 respondents from 37 higher education institutions.
Gallup & Walton Family Foundation. (2025, April 8). Voices of Gen Z: How American Youth View and Use Artificial Intelligence. https://www.gallup.com/analytics/651674/gen-z-research.aspx & https://www.waltonfamilyfoundation.org/about-us/newsroom/gen-z-is-using-ai-but-reports-gaps-in-school-and-workplace-support. NOTE: The report samples 3,465 13- to 28-year-olds living in the United States. Its emphasis is on Gen Z, not students per se.
HEPI. (2025, February). HEPI / Kortext Student Generative AI Survey 2025. https://www.hepi.ac.uk/wp-content/uploads/2025/02/HEPI-Kortext-Student-Generative-AI-Survey-2025.pdf. NOTE: The report is based on responses from 1,041 undergraduate students in the United Kingdom.
OpenAI. (2025, February 20). College Students and ChatGPT Adoption in the US. https://openai.com/global-affairs/college-students-and-chatgpt/. NOTE: The report includes data from a survey (n=1,200) and user data of US-based 18–24-year-olds. It’s unclear to me how they know they’re capturing student data.
Pew Research Center. (2025, January 15). About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork—Double the Share in 2023. https://www.pewresearch.org/short-reads/2025/01/15/about-a-quarter-of-us-teens-have-used-chatgpt-for-schoolwork-double-the-share-in-2023/. NOTE: A report on teens 13–17 from Fall 2024. Frustratingly, the survey only asks about ChatGPT, which is an imperfect proxy for AI use generally.
Tyton Partners. (n.d.). Listening to Learners 2024. https://tytonpartners.com/listening-to-learners-2024/. NOTE: They surveyed 1,600 students across 850 institutions, with 26% being first generation students.
University of Pittsburgh. (2025, March, unpublished). GenAI Conversations. NOTE: This is my own study, conducted with colleagues across four Pitt campuses and based on focus groups with 95 students total.
University of Pittsburgh. (2024, December, unpublished). Student Uses of Writing Technologies Survey. NOTE: This is my own survey study of Pitt Composition students (n=167).