Years ago, Computer Science Lecturer Jay Lim allowed an undergraduate learning assistant, or ULA, to introduce AI to students in his CS50 class, a decision that later backfired.
One of the head ULAs had been interning at Harvard, where he learned about artificial intelligence. Excited, he introduced to the class a software tool intended to guide students through problem sets and serve as a remedy for overwhelmed teaching assistants. Students, however, quickly learned how to exploit the tool.
“I thought this was a great idea because back then we needed as much help as possible to help the students,” Lim shared. “And to some extent, it worked out really well in the first semester because students were not used to it. Then news got around that you can start tricking the AI into giving out the entire solution.”
Upon the course’s conclusion, Lim realized that if one student starts using artificial intelligence irresponsibly, many others will follow. He attributes this to students’ fear of “losing out” by spending more time on problem sets than their peers.
Prior to OpenAI’s ChatGPT, Lim had an open-note and open-internet policy during exams, hoping that students wouldn’t feel pressured to memorize “nitty-gritty details.” Now, he told the News, this is no longer an option.
Lim believes that AI has allowed students to complete their work without learning. With ChatGPT, most students have become more comfortable accepting partial credit in a fraction of the time than earning full points after hours of effort, according to Lim.
Computer science professor Holly Rushmeier said that ChatGPT is not a sudden cultural phenomenon, though. She recalled that Yale used the computer program ELIZA, which simulated natural language conversation, and other learning systems as early as 1982, and that the first classes on the topic were offered around 2006.
Still, she acknowledged that modern generative AI presents new challenges for Yale faculty.
“We’ve seen AI summers and winters before,” Rushmeier said. “Asking systems for homework solutions, though, is another avenue for cheating. [Professors in the CS department] need to be mindful as we design and update our courses.”
Senior lecturer Stephen Slade first studied AI at Yale nearly 50 years ago. He believes that the language models that power ChatGPT today are “not proper cognitive models” and cannot currently replicate the human mind’s potential.
Lim, though, believes the tool is a “limiting factor” that has made teaching students more difficult. While the students who abuse the resource are a minority in Lim’s eyes, this small portion of learners has nonetheless made it difficult for CS professors like him to trust entire classes.
He feels as though his peers in the department are taking different approaches to AI in classrooms. Those teaching senior-level courses are “more lenient” because their students are more likely to be interested in the topic and less likely to cheat. Classes for underclassmen face the highest risk from students desperate simply to get a good grade.
Jennifer Frederick, executive director of the Yale Poorvu Center and associate provost for academic initiatives, said that faculty inquiry and concern about AI began in 2022. Since then, the Poorvu Center has hosted workshops to help professors in the CS department become well-versed in AI education and learning.
“We realized that this technology could have a transformative effect on education and that we needed to pay close attention and educate ourselves in order to support faculty,” Frederick wrote.
She believes that it is “too early” to know what impact generative AI will have on student engagement and performance. Still, she offered the optimistic view that AI can help students navigate complex tasks and support those with diverse learning needs.
Next semester, the Poorvu Center will pilot a program to “support course transformation and development of AI-intensive courses” in all departments, according to Frederick. She hopes that “Yale faculty will rise to the challenge” of helping students learn in the midst of this unprecedented era in education.
Bill Qian ’26 first came to Yale in August 2022; ChatGPT was released just months later. Qian said the original version of ChatGPT “wasn’t that great” and that most of his peers preferred Google Search and Stack Overflow over using the program.
Now, he says, chat rooms and boards like Stack Overflow are no longer as active as they once were. Qian said he tends not to use ChatGPT to complete code at all but admits that he will occasionally ask the program small, miscellaneous questions.
“I basically never generate code for an assignment,” Qian said. “I set limits for myself because I want to learn how to code and how the content works for myself. However, if I feel that I have learned the material and I’m not missing out on learning by using ChatGPT, then I will use it.”
Qian believes it is largely the students’ responsibility to strike a balance between ethical and unethical use of the platform.
Most of his friends use ChatGPT in a “pretty healthy” way, not to generate answers for entire CS problem sets. Some students in his classes, he has observed, are not nearly as conservative.
“A student once came into office hours with just straight-up ChatGPT code, and the ULA spent 45 minutes debugging code that was GPT’d, essentially,” Qian said. “And it became quite obvious at one point that the student didn’t know anything about what was going on. And there’s this group of people like the student, I would say, that is growing.”
Qian said that he has observed ChatGPT critics try the program just once and become fully immersed in it within weeks. On the other hand, he believes that students who avoid the program entirely will be left behind in a job sector that demands efficiency and productivity.
Lim said that in his classes cheating is not tolerated; students caught using ChatGPT on an assignment will be reported to the Executive Committee for plagiarism.
Lim said that AI alone is “very helpful,” but the department lacks a straightforward way to distinguish between what is responsible usage and what is not.
“I think [AI] is going to move towards the better at some point,” Lim said. “At the same time, unfortunately, this is going to put more pressure on the students. We can’t make the students learn anymore. They have to be willing to learn.”
Over the next five years, Yale plans to invest $150 million in AI.