
AI is a problem ...

but it isn't the only problem.

Author and journalist Will Leitch wrote to his subscribers recently about the glut of articles, think pieces, and notes that have emerged about the way ChatGPT and its competitors circulate among college students. His piece offers a welcome set of insights, connecting the problems rampant AI use poses to the work of thinking.

A comment on Leitch's piece pointed to some of the emotional and psychological background behind students' use of AI tools like ChatGPT. The commenter observed that the college students they work with use ChatGPT to compose formal emails to instructors because they are terrified of getting things wrong. Tools like ChatGPT help students protect against the inevitable critique they expect from The Authorities they've engaged - about their grammar, the nature of their request, their "professionalism," all of that stuff, perceived or real.

The operating assumption? That students believe that they are always, already wrong.

Earlier this year, my teaching partner Trudi Wright shared an article from Inside Higher Ed titled "A Crisis of Trust in the Classroom." Seth Bruggeman, a history professor at Temple, describes wrestling with the faculty narrative and the reality of teaching students something they are required to learn to complete their college education. After describing the difficult collision of his expectations with the realities of the contemporary college classroom, Bruggeman concludes by explaining the outcome of a mini-conference in which students were invited to teach him about their orientations toward college and its demands. Students who participated reported that they feel under-prepared, and that they feel the weight of that judgment most acutely from faculty in their classrooms. Bruggeman suggests that "there is a lot of embarrassment, shame and fear associated with this issue." Contending with fear and shame, Bruggeman says, "must be a top priority for all of us."

When I brought this article and its insights into a department meeting, colleagues of mine who are parenting young people remarked that the bullying young people experience - IRL and in online cultures of gaming, social media, and other forms of streaming - amplifies whatever academic baggage they bring into the classroom. Couple this with the effects of the pandemic and the incessant attention fracking they're subject to, and it becomes clear that there's no meaningful break for them between the places they are belittled online and those where they perceive they will be belittled in real life. As such, it's reasonable to believe that our students expect to experience a mode of bullying at the hands of their college professors. Against this background, the appearance of thoughtfulness that an AI-generated paper or email message projects is likely a relief, a bulwark against the bully.


The things that make AI attractive are structural. For my students, the structures that need attention are the conditions in which they complete work for my course, and the ways errors are identified and correction is invited. The responsibility for attending to and amending these structures (where appropriate) is the instructor's. As the poet says, the professor ought to check oneself before one wrecks oneself and one's students.

In general, my classrooms are phone- and laptop- and tablet-free spaces. Students submit their homework as pictures of their notes in their notebook; we have different kinds of conversations when folks are turning physical pages as opposed to scrolling screens.

My intro class is organized through assignments that affirm students as experts in the things that matter for philosophical thinking. In other words, students already have a working sense of what it means to think, what it means to be courageous, what it means to love, or what it means to tell the truth. My class invites students to see these ideas in conversation with what philosophers think, and asks them to consider places where this conversation deepens or challenges their initial thinking. This approach has the added benefit of taking my standards of what is "correct" off the table; students don't need to engage in mind-reading to do well in the class. Instead, we practice careful reading strategies, write with philosophical texts as evidence, and collaborate to hone the habits of scholarly attention that will help them in other classes and out in the workforce.

I don't ban the use of genAI on assignments, but I've also devised a major project in which using genAI doesn't offer any meaningful advantage. Using the anchor of a required AI disclosure on all assignments, first introduced by my colleague Loretta Notareschi, I instill an expectation in students that using AI requires a kind of accountability in the form of transparency - scholars practice transparency in their use of source material. I don't deduct points if a student tells me they use AI, but I do deduct points if the disclosure is missing from a writing assignment.

These strategies help, but they aren't always enough to prevent the AI-generated essay from arriving in the dropbox. I've been on this turnip truck long enough, and read enough student essays, that I know one when I see it. These submissions are always conversational in a way that is out of joint with the student's other work, and they cite the author we're working with (good!) but the wrong text (aha! /facepalm). The student will usually get a chance to redo the assignment after a conversation in which we talk about how to approach it - and the assignments that follow - in light of the demands on their life and time.

Making the rules plain (as the AI disclosure does), dispelling students' belief that they're behind the eight ball in my classroom (as the course questions and assignments are designed to do), and offering students a break from the relentless demands of screens are each invitations to the work of study that I cherish so much. Study is possible as soon as students walk in the door, but it's my responsibility to make it so.


If you'd like another thinkpiece on the emptiness of AI's promises for our everyday lives, consider this by the always-reliable Mandy Brown. For a more sustained and historical critique, check out Brian Merchant's excellent Blood in the Machine: The Origins of the Rebellion Against Big Tech, and learn why "luddite" is not an insult.

Regis University has a kick-ass music faculty, including Trudi Wright and Loretta Notareschi. I'm grateful for their commitments to teaching and bringing curiosity and care to the classroom.

Friend of this letter, Steve Taylor, produced a new movie called Sketch. He describes it as "Inside Out meets Jurassic Park (except nobody gets eaten)." Here's the trailer - now out in theaters!