
On March 25, students enrolled in Computer Science 223, “Data Structures and Programming Techniques,” received a Canvas announcement stating that “clear evidence of AI usage” had been detected in one-third of submissions for the course’s second problem set. Over 150 students are currently enrolled in the class.

Students were given 10 days to choose: either admit to using AI on any problem set, with 50 points deducted from the corresponding grade, or risk disciplinary action if AI use were detected in any of their problem set submissions.

“Students who do not come forward voluntarily but are identified through our investigation will receive a score of 0 for each affected problem set,” the announcement read. “Most importantly, your case will be referred to the Executive Committee, which is currently overwhelmed by similar cases. Due to the delays, it is likely that you will not receive a final grade for this course this semester.”

The News interviewed three students in the class, who requested anonymity out of fear of academic consequences. All three students said that they did not use AI to generate code for their problem sets.

One student in the course said that their professor instructed students to admit to AI usage before April 4 and to explain the AI-generated code. The student recalled the professor saying that those who failed to self-report would be referred to the Executive Committee.

The Executive Committee, or ExComm, is the body responsible for enforcing the Yale College undergraduate regulations.

Group referrals to ExComm are rare. The last time a large portion of a single class was referred to the committee was in 2022, when 81 students in a biological anthropology class allegedly collaborated during an online final exam.

ExComm releases summaries each term of the disciplinary cases it has decided. The most recent report summarizes cases from Spring 2024 and contains five instances in which students were “reprimanded via agreement of responsibility” for use of AI in problem sets, projects or papers. 

The Spring 2023 report was the first to cite the use of ChatGPT as a violation, with four related cases. The Fall 2023 report records seven such instances.

According to CPSC 223 instructor James Glenn, the practice of running student homework submissions through digital detection tools predates the advent of AI chatbots like ChatGPT. For decades, Glenn said, professors have used such tools to detect similarity between submissions.

“These collaboration detection tools are probably better at [detecting similarity] than detecting use of AI,” Glenn said.

Students interviewed by the News reiterated concerns about the reliability of AI detection, citing the possibility of being falsely accused as a significant source of anxiety. One student in the course told the News that it is unclear how students could prove their innocence.

“The majority of people I’ve talked to are unsure because I think the biggest worry is that they are going to be told that they used AI, but they didn’t and they wouldn’t be able to explain themselves,” the student told the News.

Another student said that the way problem sets are submitted, with students uploading a self-written log of the steps taken to solve the problem alongside their completed code, would make it difficult to definitively prove AI usage either way, saying that the log could be “easily faked.”

“There are definitely more than one-third of people in the course who are using AI,” the student said, “and [disciplinary action] would be unfair to the one-third [of students].”

Whether to use plagiarism detectors is up to each individual instructor, in line with the Computer Science Department’s discretionary approach to AI policy in classrooms.

AI-related policies, however, should be made clear to students, according to Department Chair Holly Rushmeier.

“[Computer science] instructors are given wide pedagogical latitude to structure their courses in the ways they see fit,” Theodore Kim, the department’s director of undergraduate studies, wrote to the News. “This includes the level of AI usage allowed, and the detection methods employed. As in the past, we strive to educate students so that their skillsets are not tied to specific software products, AI or otherwise.”

For the Spring 2025 term, CPSC 223’s syllabus explicitly prohibits the use of AI-based code generators.

A student in the course told the News that the professors did note during the first lecture that AI was not allowed, but that most of the emphasis was on not collaborating with other students. Using AI to learn concepts was permitted; using it to generate code for problem sets was expressly not.

Glenn said that professors teaching lower-level computer science courses like CPSC 223 often impose stricter AI-use policies than those teaching more advanced courses.

“It’s easier to use AI to help in the intro course,” Glenn said. “Our goal is to teach students exactly the kinds of things that AIs are good at.”

Ozan Erat, who teaches CPSC 223 alongside Glenn and Alan Weide, cited student concerns about AI’s impact on job availability in the field of computer science. According to Erat, the prospect of employers adopting AI technologies to replace software developers makes it all the more important for students to avoid relying on those technologies.

“[The adoption of AI technologies in the workplace] makes it even more important for students to fully engage with their studies and master all concepts so that they are indispensable for their future jobs,” Erat wrote to the News. “I tell my students that if you let AI do the job for you, AI will take your job.”

Students in the course told the News that they generally understood the policy banning AI but felt that allowing AI in a limited form would reduce its abuse. 

One student said they felt ChatGPT use should be tightly limited in introductory courses, but that, given its round-the-clock availability compared to teaching assistants’ limited office hours, students should not be punished for using it to learn or to correct errors.

“In my opinion, it would be better if they would just make everyone explain their code in comments [to] their code, and just explain why it does and what it does. I guess you could use [AI] to generate those too, but it would be a way to [encourage] more integrity,” another student said.

In the announcement sent through Canvas, CPSC 223 students were also warned that further use of AI on problem sets may result in a restructuring of the course design, such as placing more weight on grades received on in-class exams.

OpenAI debuted ChatGPT in 2022.

JERRY GAO
Jerry Gao covers Student Policy and Affairs as an Associate Reporter under the University Desk. He is a first-year in Pauli Murray College.
OLIVIA WOO
Olivia Woo covers Faculty & Academics for the University desk. Originally from Brooklyn, New York, she is a first-year in Benjamin Franklin College majoring in Ethics, Politics & Economics.