In a recent piece for the News, George Beevers ’29 argues that Yale needs to adopt AI into its academic curriculum if it wants to remain at the forefront of higher education.
I find the push to bring AI into higher education profoundly unsettling, and I argue that Yale should not seek to broadly integrate AI tools into the liberal arts curriculum.
To start, the claims about AI in education made by Beevers and the ocean of voices clamoring for broader AI use are often overblown, or else betray a misunderstanding of what higher education is for.
For instance, in his piece, Beevers echoes the ubiquitous claim that AI will “enhance productivity.” That might sound great in the abstract, but the practical relevance of enhancing productivity in the educational context is unclear.
AI enhances productivity by helping workers do repetitive, simple tasks faster. A Yale seminar, however, is not centered on productivity, at least not in this corporate sense. Discussions, readings or problem sets are not menial tasks to export to ChatGPT, but important ways to engage with problems and practice applying theoretical knowledge. As a result, the arguments that justify AI as a time-saver in the professional world don’t easily translate to the world of liberal arts education.
Another claim Beevers makes is that AI can help “explore complex problems deeply.” This is just plainly false. Large language models summarize, digest and dilute, and they do so pretty arbitrarily. They are a complete black box: put in a query about Don Quixote, and you get back a summary with no explanation of why each piece of information was included or, more importantly, why other details were left out.
And once you’ve prompt-engineered your way to a response with detailed explanation and academic rigor, what you have accomplished is little more than a thorough Google search, albeit one that contributes to people’s taps running dry in Georgia. Generative AI can help you grasp a gist, a trend or an overview, but nothing close to a deep understanding of a complex problem.
Later, discussing AI for writing, Beevers differentiates between “passive” use, which hurts the “cognitive struggle” essential to learning, and what he sees as proper AI implementation, namely “using it to deconstruct your own logic, identify weaknesses and refine your arguments.”
But are deconstructing your own logic and identifying weaknesses in your own arguments not essential parts of the cognitive struggle?
Learning to write is inseparable from learning to edit, and learning to argue is similarly dependent on learning to reconsider. As with the discussion of productivity, the argument for AI here treats the steps of the writing process not as stepping stones but as inconvenient obstacles. But these tasks are not useless grunt work — they’re prerequisites for actually learning to write.
We have to remember that ChatGPT is not a digital human; it constructs an output based on what is statistically the most likely response given the trillions of words of text it was fed. LLMs are thus fundamentally, by design, uncreative. Aside from hampering your own learning, having Perplexity or Claude edit your paper cheapens what could be innovative and original work into whatever uninspired monotony aligns with the most common response in the model’s training data.
Data which, by the way, now includes more and more text itself produced by AI, leading to worse outputs, more hallucinations and “model collapse,” in which successive iterations of a model trained on AI-generated data grow progressively worse.
This also means artificial intelligence should not be entrusted with “simulating the Socratic exchange.” The key feature of Socratic exchange is dialogue with people capable of forming original, dynamic responses, not mere reformulations of the most popular existing ideas. And AI’s sycophantic nature means its Socratic exchanges tend to convince people they’re the next Einstein, a troubling tendency that falls far from the confrontational truth-seeking that makes dialogue pedagogically useful.
This is not to downplay the potential advantages of generative AI in specific subjects and situations, say for lab work or data analysis. And Yale has already invested significantly in generative AI tools and AI-related research.
But this is wholly different from the sort of adoption and integration into the curriculum that Beevers calls for; what I take issue with is not the use of AI in and of itself, but rather the nebulous, wide-ranging call for “integrating AI into the educational framework.” It’s one thing to invest in AI and another thing entirely to deconstruct the academic curriculum in order to fit ChatGPT into a seminar on Spinoza.
Separately from these substantive issues: a growing body of work indicates that the enormous investments in AI are feeding a bubble whose eventual burst will devastate entire sectors of the economy that have tied themselves to OpenAI, Nvidia, Meta and Google. So why should Yale or any other institution tether its academic excellence and curriculum to the teetering AI bandwagon?
The push to integrate AI everywhere is motivated by profit, not benevolence — by a billion-dollar industry that wants to become a trillion-dollar industry before it drives off a cliff. Therefore, until the benefits of AI in education become more than this self-serving marketing fluff, Yale’s curriculum should steer clear.
MANU BOSTEELS is a sophomore in Pauli Murray College studying Philosophy and Math. His column “A Yale Life” runs biweekly and explores takes from the student perspective on campus and academic experiences and broader political developments. He can be reached at manu.bosteels@yale.edu.