What does it feel like to face a tidal wave?
That isn't the feeling I had when I first heard of generative AI. I didn't think much of it at first, other than that it was a cool new technology; I didn't fully think through its ramifications, positive or negative. But gradually, over time, it became more prominent. I saw more news about it; I tried out the newer models and saw how much stronger they were getting; I saw my peers start to use them in more and more places.
Eventually—I don't know exactly when—it felt like it was everywhere. And one of those places was the classroom. Basically every class I was taking acknowledged the presence of AI tools—some embraced them, some rejected them—but everyone said something. I was a TA and a grader, and I saw the assignments I graded shift. The language sounded a little more stilted; the gap between the quality of homework assignments and exams widened more and more. I started getting concerned, or at the very least, feeling like the way classes were conducted would considerably change. I started to see the tidal wave.
I was inspired to start this project as a result of my experience as a TA for CS112, which I discuss a bit more closely in the AI in Computer Science essay. But it was really my experience grading for Prof. Debra Borkovitz's discrete math class that made me realize the extent of the crisis of AI.
I think there is a vision among both professors and students that the class environment is crucial in shaping the way students view and participate in the classes they're in. One idea is that if students feel like they both understand and have somewhat of a say in the way the class is conducted, they'll be more engaged. Essentially, there is a sort of social contract established between the professor and the students, whether that contract is explicit (like something students sign at the start of the semester) or implicit.
I thought this was a reasonable idea whenever I was presented with it in my own classes. But does it really make a difference?
We can try turning to the way authors we read in Core have discussed societies and governments for an answer. Émile Durkheim, in The Division of Labor in Society (CC221: Making the Modern World: Progress, Politics, and Economics), describes the "collective consciousness," a societal element separate from the beliefs of the individual. While Durkheim's ideas don't map on exactly, I think they're a good starting point for thinking about what's going on outside the classroom that is affecting students.
At first, AI started on the outskirts of the collective consciousness; most people didn't think about it, and those who did were mostly people in the know about the technology and its development. As it became more and more impressive, more and more people became familiar with it, and it began to take a more dominant role in people's minds.
I remember seeing OpenAI's launch of GPT-3, probably the first moment when I understood that generative AI would be a technology that would change the world. I didn't really think about how it might affect schools in particular. I think some of the larger effects of AI hit offices first. Before long, companies across many different fields were discussing how to integrate AI into their workplaces; employees were using AI agents as an integral part of their work.
Academia was slower—it took time for universities to form committees, hear different interests, and make decisions. But students weren't slow to adopt AI tools in many different ways, and professors couldn't be slow in setting standards for their classrooms.
There were no rules yet. It was, to use the terminology of Locke (CC221), a state of nature, eerily close to developing into a state of war. On top of that, there was the question of how much AI tools should be used in the classroom. In a world where AI is being rapidly adopted across so many different industries and disciplines, it doesn't make sense to just put things back the way they used to be—universities should embrace the technology in a forward-thinking way, as they tried to for prior major technological advances.
This was the situation I found myself in when I realized that this really was a tidal wave: sitting in my empty office hours block, looking at piles of student work that were clearly AI-generated and watching my peers use AI for their classwork.
I talked to a handful of Core professors about their experiences with AI, and I found a lot of common themes in how they approached it. Improve the class environment. Create more opportunities for student engagement, use more interactive or multimedia projects when possible, check in with students in conferences. AI stood opposed to humanity, an isolating tool; if classrooms were more human, students would feel less of a need to use it.
This is a very natural conception; I mean, it matches a lot of our popular perception of "AI." While it's fading, I think there is still an idea in the back of people's minds of AI as a superintelligence—whether benevolent or malevolent—that doesn't really care about the interests of humanity. At best, it ignores humanity to do what it wants; at worst, it becomes something like the Terminator.
So I turned to the class I had taken with, in my opinion, the best class environment—Prof. Debra Borkovitz's discrete math (MA293) class. Prof. Borkovitz's class emphasizes both student-professor and student-student interactions, something quite rare for undergraduate math courses, especially at large universities like BU. The vast majority of each class is spent on group work, and instead of weekly homework assignments, there are larger-scale "excursion" papers where students write up their solutions to and reflections on a particularly interesting or challenging mathematical problem. Students meet regularly with Prof. Borkovitz in conferences and are given frequent feedback as part of the process of creating and revising their excursions, as well as portfolio websites where they collect all of their work for the class.
On the surface, Prof. Borkovitz's class model looks inherently resistant to AI usage. Students interact with both the professor and their peers frequently, and there are essentially no "high-stakes" graded situations, such as exams. Students can always revise and resubmit their excursions until they are considered "satisfactory," and this is an inherent part of the process; the vast majority of student excursions are not considered satisfactory on the first submission.
To combat the inherent uncertainty around AI usage, Prof. Borkovitz instituted the following plan: when she thought a student was using AI on an assignment, she sent them an academic integrity reflection, a short assignment describing if and how they used external resources. This gives students a place to explain whether there was a misunderstanding, whether they used AI or any other external tools they weren't supposed to, and if so, how they can redo the assignment.
And when the fall semester started, it seemed like keeping this model mostly intact was a good choice. The first excursions came in and they looked pretty good—students were engaging with the material, and they seemed to be producing high-quality work that looked authentic. But by the time the second excursion came in, things had started to change. I mean, the excursions I graded looked okay to me; maybe I could tell that some of the wording seemed a bit more stilted than usual, but it didn't seem too different from the first one.
However, Prof. Borkovitz noticed something interesting: apart from the way the excursions were worded, the vast majority of students chose the same mathematical variables, n and L, in their formulas. These are standard variable choices in discrete math, and that's exactly what made it striking to Prof. Borkovitz: in prior years, students would choose a wide variety of variable names, but this year, the choices were almost entirely restricted to a few names, all widely used in the field. This was about as clear evidence of widespread AI usage as you could get, and it totally slipped under my radar.
But of course, why would students do this—especially in a class that was structured in such a way that they had little incentive to do so?
To start, I don't want to discount the class environment entirely. I do think that being conscious and active about providing a "human" class environment, whatever that looks like, is a really important part of combatting AI usage in classes. When I gave a speech at BU AIDA's AI Free Classroom symposium, I centered it around the importance of "embracing the human." In short, my point was that improving the class environment in more "human" ways, such as asking students more closely how they feel about what they're working on, can be really helpful in combatting AI.
I still believe this, but I have shifted my focus towards some other ideas that, I think, are important to think about in tandem with the class environment. One of them is a clear, well-defined set of rules in the context of AI. That doesn't necessarily mean rejecting AI usage entirely. But it means making as clear as possible what is or isn't allowed.
When I talked to Prof. Borkovitz this semester, I was surprised at how her attitude had changed. She had essentially developed the perspective that students should "cut the crap" when it comes to AI usage. If a class is structured in such a way that there's no punishment for getting things wrong, students using AI can seem not just wrong but disrespectful, especially to professors who have taken the time to create a class environment and structure where AI shouldn't even seem helpful to students in the first place.
So at that point, maybe the most we can do is set boundaries. At the start of CC221, a course I took last fall, Prof. Catherine Klancer, who was coordinating the course, described the course policy on AI. In short, she gave a speech (which she highlighted was not written using AI) about how AI usage stifles human creativity and labor, and about how it had absolutely no place in CC221.
It is perhaps fitting, then, that the first author we read in CC221 is Niccolò Machiavelli (CC201: Core Humanities 3: Renaissance, Rediscovery, and Reformation; CC221). Machiavelli famously describes the balance between being feared and being loved as a ruler: ideally, you will be both, but if you have to pick one, it's more important to be feared than to be loved. Why? Because subjects who are afraid are more likely to obey their ruler. One strategy for addressing AI in education, whether it's a speech saying AI can't be used or a reflection assignment students have to complete to explain themselves, is to instill a little bit of fear, and that's not necessarily a bad thing.
But the primary purpose of Prof. Klancer's speech was not to instill fear, but to provide clarity, both to the students and to the university administration. If a student is adjudicated for academic misconduct as a result of AI usage, the AI policy of the class is apparent, and it's really hard for students to claim that they didn't know it when it's so visible. So wherever a class lands on how AI should be used, clarity about it may be what's actually important.
This may seem like a bit of a cop-out answer. I don't deny that; I definitely don't have definitive answers on what each class should do. But I want to end with one story that illustrates my view as a whole on AI and the issue of AI in education. At the start of this year, I basically never used AI tools for anything. But as I started to work on more serious research, I decided to try them out and incorporate them into my workflow more and more, since my peers were using them and said the tools made them much more productive.
And as time has passed, I have been more and more impressed by what AI has allowed me to do, especially when it comes to writing code. I used Claude Code to write a large portion of the code for this website; for a lot of programming tasks, the tool has essentially changed my work from reading through tedious documentation and writing repetitive, potentially buggy code to describing what I want precisely in natural language and having an AI model translate it much faster than I ever could.
There are a few other ways I've incorporated AI tools into my workflows. Other than writing code, I use them to generate plots of data I've gathered, as well as to do certain tedious mathematical calculations, like large matrix multiplication or diagonalization. That being said, I have tried to keep my AI usage strictly out of my classes and only in my external work, such as my research projects. But is that really the right long-term choice? Although I haven't used it as such, I can certainly imagine AI being a useful tool for actual idea generation, and that might be useful even in the context of classes.
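For a concrete sense of the kind of calculation I mean, here is a minimal Python sketch of the sort of diagonalization I would otherwise grind through by hand (an illustrative example with made-up data, not code from my actual research):

```python
import numpy as np

# Build a random symmetric matrix as a stand-in for real data.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
A = (A + A.T) / 2  # symmetrize so the eigendecomposition is real

# Diagonalize: A = Q @ diag(w) @ Q.T, with eigenvalues w and
# orthonormal eigenvectors in the columns of Q.
w, Q = np.linalg.eigh(A)

# Sanity check: the decomposition reconstructs A.
assert np.allclose(Q @ np.diag(w) @ Q.T, A)
```

None of this is hard, and that's exactly the point: it's routine enough that an AI assistant can produce and sanity-check it in seconds, which is what makes reaching for one so tempting.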
When I started this project, I felt a lot like Dante at the start of Inferno (CC102: Core Humanities 2: The Way: Antiquity and the Medieval World)—I was lost, unsure exactly how students and professors should address this crisis. I thought that maybe if I went on a journey, speaking to students and professors and looking closely at my own role as a student, as Descartes (CC201) thought about life and consciousness as part of his meditations, I would be able to gain more clarity.
Instead, I feel a lot like the titular character at the end of the first part of Goethe's Faust (CC202: Core Humanities 4: Enlightenment, Romanticism, and Modernity). Faust's lover Gretchen is captured and convicted of the murder of her illegitimate child, and Faust unsuccessfully attempts to rescue her. In the end, Faust gives up after his devil counterpart Mephisto calls him away; the journey ends without the resolution he set out to find.
What I was able to figure out is that there isn't one answer to this crisis. AI isn't going away, it's not entirely a bad thing, and people will never stop using it. When I talked to Prof. Sophie Klein about AI, she said something that has stuck with me: the reason we shouldn't expect students to stop using AI is that it's so easy to use, and it's human nature to take the easiest way through a task.
My first reaction, when I realized in my CS112 office hours block that AI in education was truly a tidal wave, was to think something along the lines of: well, we are offering what we are offering, and whether students take it or not is their choice. And despite that perspective's cynical nature, as I have gotten into higher- and higher-level courses, that is the choice I have seen more and more students make. Being an undergraduate is a process of becoming, and I do believe that students, as they progress to higher-level courses, use AI when they have an incentive to use it and find benefit in it, and drop it when they don't. I know because that's what I found myself doing in my research work throughout the course of this year.
So if the incentives are there—use AI when it seems useful, and clearly restrict it when it's not—and if the clarity is there—"cutting the crap" about when students use it, so to speak—it won't stop the tidal wave, but it will allow it to pass more cleanly. Eventually, as time passes, people will adjust, and it will become clearer what exactly to use AI for. But for now, all we can do is press onwards in the ways we know how, watch how the technology evolves, and go from there.
