
Last month, students enrolled in Computer Science 365, “Algorithms,” received a Canvas announcement from lecturer Dylan McKay notifying them of concerns regarding AI use in their homework submissions. 

“I noticed there were some pretty obvious cases of using some LLM to generate parts of [their] homework. I just told everyone that there was a 48-hour window when they could go ahead and tell me that they did that,” McKay told the News. “I had quite a few students come forward over that.”

Students who responded to McKay’s offer were given a zero on the AI-generated submission but were allowed to resubmit the assignment for full credit. According to McKay, class policy seeks to help students learn from their mistakes. A typical homework submission will receive both an initial grade and feedback, which students are expected to incorporate into their work before submitting a second version.

Class policies around artificial intelligence and homework submissions are created largely at the discretion of instructors within the Computer Science department. 

“There is a wide variation in what is appropriate in the variety of courses we offer,” wrote Holly Rushmeier, chair of the Computer Science department. “Our policy is to let instructors determine what resources can be used for assignments as appropriate for the material being covered in their course. The essential thing is that the allowed resources should be made clear to students.”

Professors within the department vary widely in the strictness of their policies and in how specifically those policies are spelled out in class syllabi. Many professors opt to cite the College’s Academic Integrity Regulations in place of writing their own policies. The regulations state that “inserting AI-generated text into an assignment without proper attribution is a violation of academic integrity,” and note that policies will vary from course to course as well as over time.

McKay has long encouraged students to treat AI tools as they would a teaching assistant, asking for clarifications or for examples of code for reference. However, recent instances of AI use in homework submissions have motivated McKay to update the class syllabus.

“I have found that students will see the lax attitude and then either treat that as liberty to blatantly copy from AI or have it generate their homework entirely,” said McKay. “Maybe they actually think they’re allowed to do that.”

Now, McKay prohibits students from inputting any problem set text into a large language model, or LLM, and from using an LLM to generate text that they will submit as part of their solutions.

According to McKay, detection of AI in homework submissions is mostly dependent on graders’ abilities, rather than on detection technologies that may not be reliable. In the case of the assignment that prompted the announcement to the class, McKay personally found evidence of LLM usage in certain submissions. 

“There’s a certain way that the AIs write things. There’ll be artifacts of the fact that the AI is not a student in the class,” said McKay. “They might cite something that the student shouldn’t be citing in the first place, or cited by a name that the students won’t be familiar with.”

Other instructors within the computer science department have adopted policies that differ from McKay’s. The syllabus for “Spectral Graph Theory,” a high-level computer science class, instructs students to “state both the platform and the prompt that provided useful results” when they opt to use AI as they complete their work. They are prohibited, however, from using AI tools on the first problem set, which is “designed to help [students] learn to do the sort of math” that will be integral to the rest of the course.

Certain computer science courses employ practices that result in swift consequences for the use of AI. The syllabus for “Physics Simulation for Movies” states that the submission of “anything laundered through Chat GPT or Copilot will be considered theft” and will result in the offending student being automatically referred to the Executive Committee. The student will also receive a failing grade in the course.

“Full Stack Web Programming,” which its syllabus calls a “collaborative course,” strongly discourages the use of generative AI. The syllabus warns that using AI will most likely result either in the student failing to learn the material or in the chatbot failing to solve the problem correctly.

“[AI] affects different disciplines in different ways, and also individual professors will have different approaches,” said Yale College Dean Pericles Lewis. “We’ve asked professors to be clear with students about what the policy is in their class. We’re not imposing any universal policy.” 

The Computer Science department joined the School of Engineering & Applied Science in 2015.

OLIVIA WOO
Olivia Woo covers Faculty & Academics for the University desk. Originally from Brooklyn, New York, she is a first-year in Benjamin Franklin College majoring in Ethics, Politics & Economics.