Keeping AI out of the Classroom

In the fifth century BCE, in ancient Athens, an author commonly referred to today as the “Old Oligarch” wrote a treatise criticizing the democratic form of the Athenian government. One of the things he said in that treatise has really stuck with me: “I can forgive the common people their democracy; for anyone can be forgiven for looking after their own interests. But anyone who is not one of the common people, and yet chooses to live in a city governed by a democracy rather than one governed by an oligarchy, must be preparing to do wrong and have decided that a bad man can escape detection far more easily in a democratic than in an oligarchic city.” I’ll explain why this quote has stuck with me at the end of my talk, so make sure not to forget it.

Here’s something a lot harder to forget: I am not a large language model. I am a human being. Specifically, my name is Adam Godel, and I am a third-year math and computer science undergrad here at BU. I am also a Core Curriculum and French minor, and this year, I am exploring how the works we read in Core and related texts can help us figure out how to address a modern-day crisis: the role of AI in the classroom. To this end, I have been interviewing professors about how they have addressed this crisis, as well as closely following a specific math course at BU throughout last fall semester.

To start, let’s state the obvious. People use large language models because they are easy and fast. You enter a query, and the model responds immediately. Not using large language models, whether we’re talking about the classroom or not, has become a conscious choice. I chose not to use AI tools when writing this speech. I certainly could have, and it probably would have been a lot easier, but I felt that something would have been missing from this speech if I had.

What would have been missing? Well, large language models often begin their responses to many questions, especially broad, philosophical ones, with “as a large language model….” As you now know, I am not a large language model; I am a human being. And to remind you of this, I want to frame my answers to the questions you might have about this challenge we face in the classroom in terms of how I think and feel about it as a human being.

So, what should I try to answer when giving a speech like this? I think a good starting point is something broad: “How do we solve the crisis of AI in the classroom?” Now, let me give you my response. As a human being, I can think, reason, and form opinions. Let’s refine the question, workshop it a little. AI is an external source; it compensates for something that students should be able to do themselves, in their own minds, right? Are there things AI has that their own minds lack?

My mind drifts to the ideas I’ve learned so far in my undergraduate studies, such as in Core. I’m thinking about René Descartes’ Meditations (CC 201, Core Humanities 3: Renaissance, Rediscovery, and Reformation), where he wrote about his beliefs on consciousness, the existence of God, and a whole lot of other things. Descartes treated himself as a blank slate; his goal was to assume nothing except his own perceptions, starting at the moment he began his meditations. Should students be more like Descartes, figuring out each and every principle and idea in their studies on their own? I don’t think so. Most courses encourage collaboration on assignments. In research, in industry, and in life, collaboration is invaluable. Considering the viewpoints of others and incorporating them into your own is what the university should stand for.

I think part of the reason AI is so effective is the interface; it’s not an accident that basically every large language model platform, from ChatGPT to Claude to Grok to DeepSeek, presents its model as a chatbot, contained within a screen eerily reminiscent of text messaging. Subconsciously, asking an AI model a question is like texting a friend: a friend who responds instantly and will confidently answer any question you ask it.

The other problem I have with applying Descartes’ method here is that I’m not a blank slate, and I can’t presume to speak on behalf of others in the context of this crisis. As you know, I’m a human being. As a human being, I have my own feelings and experiences unique to me. When thinking critically about something, I think about experiences I’ve had and the way I felt having them. At this moment, my mind drifts to a conversation I had at a quantum computing event at Yale University in April. It was oddly frigid and rainy for April, but I was in a small, stuffy classroom at a workshop. I chatted a bit with an employee at a quantum software company, and I must have mentioned AI, or he did; I don’t really remember. I remember what he said after one of us brought it up, though: “You guys in academia still think of AI as a problem to be avoided. You should know that it’s been super useful for us in industry. We use it for all sorts of things.” Or something like that; I’m paraphrasing. As a human being, my memory isn’t perfect; my views of my experiences shift and warp as my feelings in the present moment shift and warp. Maybe he didn’t speak in as condescending a tone as I remember; maybe he did but didn’t mean to.

My mind drifts to another memory, this time more recent. Throughout last fall semester, I helped grade for Professor Debra Borkovitz’s discrete math class. Professor Borkovitz’s class assignments revolve around a few very challenging math problems that require a lot of thought to figure out. For each of these problems, which Professor Borkovitz calls “excursions,” students submit a paper documenting their attempt to come up with a solution and to prove that their solution works. These problems are meant to be thought through closely, not just plainly “solved”; every student’s paper should be somewhat unique, as there are many different ways to tackle these sorts of problems. The writing process is meant to be collaborative between the students and the graders. Students submit a first attempt, which is almost always marked “needs work” with comments; they then revise using the comments and resubmit, and the process continues for as long as students need to get as close to a complete solution as possible.

When Professor Borkovitz thinks a student is using AI tools, or any other external resource, she gives them an academic integrity reflection to fill out. This assignment asks students to admit honestly whether they used any resources they weren’t supposed to, and it also asks for a reflection on their emotions: why did they make that choice, if they did? What about the class environment, or the assignment, or the student’s own circumstances motivated them to make that choice? I don’t think this is something students are usually asked about, especially not STEM students. It gives students a chance either to explain themselves if they did something wrong, or to push back and say they weren’t using external resources, so the professor can look over their work again and see whether there was a misunderstanding.

I can’t complete a discussion of that math course without talking about what happened towards the end of the semester, though. After the second excursion, Professor Borkovitz noticed that students were using the same variable names in their responses: m and capital L. This is where having years of experience teaching the course becomes quite relevant. While this might not seem strange to most people, papers in prior years had always used a wide variety of variable names. Furthermore, these are the two variable names that a large language model will almost always use when asked about the problem in the excursion. Almost the entire class was using these variable names, so Professor Borkovitz asked most of the class to fill out the academic integrity reflection. While the system she had put in place ostensibly worked, you can imagine how disheartening it was for something like that to happen. The next time she teaches the class, Professor Borkovitz plans to make changes to her fundamental assignment model, though she is still working out exactly what those changes will look like.

As a human being, I recognize patterns and draw loose connections based on what makes sense to me and how different experiences make me feel. I’m thinking about a moment in Part Two of Miguel de Cervantes’ Don Quixote, another text we read in Core, where the self-proclaimed “Bachelor of Arts” Sansón Carrasco says that one of the interpolated stories in the first part of the epic novel is “out of place and has nothing to do with the history of the great Don Quixote.” The gag Cervantes is emphasizing here is that the first part of his novel became popular with university students who weren’t intelligent enough to read it for its satirical elements or its commentary on the Spanish society and culture of his day, but instead saw it as a story about a great knight with a bunch of out-of-place diversions. Is AI making students more like Sansón Carrasco? Probably. Sometimes I worry to myself that I’m being like Carrasco when I’m working on an assignment, just seeing or thinking about things on a surface level. AI tools certainly don’t help us avoid that; they’re a way out, an ability to “skip” the need to let ideas simmer in our minds. This is what I love about Descartes, whose approach in Meditations I mentioned earlier. Maybe it’s good, in general, to take time to think through things yourself before turning to others. Maybe that’s a good piece of advice for the classroom environment, regardless of AI.

But wait, this is a talk about AI. Sorry. As a human being, I find it hard to keep focus on things for a long time. We have shorter attention spans than goldfish, or so I’ve heard. So let me flatly state my working thesis, which you have probably deduced by now: embrace the human more in your classrooms. There are many different ways to do this. Something that some Core professors do, for example, is ask students at the start of each discussion section how the reading made them feel. I like this approach because it gives students a “way into” the discussion. As we boot our brains up at the start of class, we can give our impressions or visceral reactions, which doesn’t require all that much thought. I think student conferences, another approach many professors have taken in response to AI, are very much related to this. They bring the human element of the student closer to the professor than it used to be, and I don’t see that as a bad thing, although they are a lot of work on the part of the professor.

As a human being, I’m not certain about things, and I have doubts about even my own ideas. I call this a “working thesis” because it’s something I think will be helpful, corroborated by discussions I’ve had with professors as well as my own experiences. I am certainly not a purist against AI. In the research projects I work on, I use AI tools to scour the field for related papers and to write trivial code for me. In this context, I have found large language models to be exceptionally useful in reducing the busy work I need to do and letting me focus on the “thinking” part of my research projects. But that’s the key point: I try to use AI tools to allow me to use my thinking more, not less.

Even if AI shouldn’t be used in academia, maybe the person I spoke to at that Yale event was right about its utility in the workplace. I think AI can be a democratizing tool in a work environment. Which brings me back to the Old Oligarch. I resonate a lot with him; sometimes I really do feel like an oligarch grumbling about the future. As a human being, when I don’t understand something, I sometimes fall into the habit of ridiculing it. Is AI like ancient Athenian democracy? Maybe. But are those who support using AI in the classroom, or those against it, “preparing to do wrong”? I doubt it.

As a human being, I always want to feel like what I’m doing is right. And with the stresses of class and work and life, it can be easy to forget that our professors and our other figures of authority are human beings too, especially when it can sometimes feel like the ideal student wouldn’t be a human being at all. So take a few more moments in class to embrace the human: to remind students that you’re human beings and that they are too, and that the ideal student in your classes wouldn’t be a large language model that understands everything, but a human who makes imperfect connections and has imperfect ideas based on how they think and feel. Maybe you feel you’ve been doing that already, and I’m sure you all have to some extent. But in a world where critical thinking can feel more and more like a diversion from students’ career goals rather than an integral part of them, maybe it’s not a bad idea to emphasize it more. As a student, as a researcher, and most importantly, as a human being, that feels right to me. And I think it’s alright to feel. Thank you.
