University leaders issue AI guidance in response to growing popularity of ChatGPT
Students, professors and administrators anticipate significant changes to teaching and learning at Yale as artificial intelligence technology continues to improve and develop.
The rise of ChatGPT has prompted new University guidance for faculty and staff regarding artificial intelligence and machine learning.
Just weeks after ChatGPT launched in late November 2022, the online chatbot exploded in popularity worldwide. By January, ChatGPT reached over 100 million active monthly users, making it the fastest-growing web platform ever. ChatGPT is a conversational AI: the bot provides advanced responses to requests and questions and can generate written compositions.
University Provost Scott Strobel and Associate Provost for Academic Initiatives Jennifer Frederick sent an email to faculty addressing the rise of AI and its implications for teaching and research at Yale on Jan. 24, just after the start of the spring semester.
“We write today to increase faculty awareness about the Artificial Intelligence (AI) and machine learning technologies that have been released recently,” the email read. “ChatGPT, for example, has made headlines in the past few months because of its ability to generate text and code with remarkable speed and coherence. We strongly encourage faculty to understand the implications of this emergent technology, including the opportunities and challenges it poses for teaching and learning in our community.”
The email followed requests from several faculty members who contacted the Poorvu Center during winter recess seeking guidance on the now-widely accessible AI software, according to University leaders.
“ChatGPT produces much better writing than the version of AI writing that was publicly available before,” Alfred Guy — Poorvu Center director of undergraduate writing and assistant dean of academic affairs — told the News. “Essentially, on Nov. 30, what AI could do in response to a written question jumped in quality. So the prospect of students being able to pass off AI work as their own went up.”
The email included a link to a new webpage aimed at providing AI guidance and resources for Yale instructors. The page, developed by the Poorvu Center in partnership with several faculty experts, contains perspectives on academic integrity, ideas for integrating AI into assignments and example syllabus statements addressing the use of AI technology by students.
Yale has not changed its undergraduate regulations regarding cheating and plagiarism, though professors can set course-specific AI policies when appropriate. Frederick, who is also the Poorvu Center’s executive director, told the News that the academic integrity concerns posed by ChatGPT are subsumed under existing regulations.
“You may have seen that some institutions have gone the direction of banning the use of ChatGPT; we’re not doing that,” Frederick said. “I think where the University leadership falls on this is that the considerations are going to be different for each school, each division, each discipline. So it needs to be a school-specific conversation.”
Frederick noted that the University’s response to the rise of accessible AI is still a work in progress, and policies are developing quickly.
To facilitate further discussion, the Poorvu Center will host an online panel, “Artificial Intelligence and Teaching: A Community Conversation” on Feb. 14.
“We’re not quite sure where this is all going,” Frederick told the News. “But we’re better as an educational institution to pay attention and to be really intentional about whatever happens.”
Students and faculty weigh in on ChatGPT
The News spoke to several students and faculty members about ChatGPT, which, as many anticipated, has already made its way into the classroom.
“At the beginning of the year, most if not all of my professors vocalized that ChatGPT would not help in the class,” Izzy Farrow ’26 told the News. “They claimed to have tried it themselves, and revealed that there were flaws in the AI responses that would cause a deduction of points on an assignment for lack of thoroughness or correctness.”
Faculty opinion on AI varies widely and, as Frederick intimated, differs significantly by discipline.
“I encourage my students to play with ChatGPT as a study tool,” applied physics professor Owen Miller told the News. “In a course like APHY 110, which explores the physics of modern technology, students often have a lot of questions as they try to sort and organize foundational physics ideas … ChatGPT can serve as a tutor, helping the students probe, be curious, and cultivate interest in a subject.”
Miller added, though, that there is a major caveat: students need to fact-check answers generated by these programs. But, according to Miller, it is “easier to check answers than to generate them.”
Miller also noted that AI technology will only become more widespread in the future, so becoming well-versed with the technology now can give students a leg up.
Psychology professor Hedy Kober, while optimistic, expressed concerns about potential challenges to academic integrity.
“As a first step, we will need to all think about more creative solutions for papers and take home assignments so that students need to rely on their own thinking, argument, synthesizing and writing skills rather than on ChatGPT’s skills,” Kober told the News. “I know others are working on tools that would be able to detect AI-generated text, so that might be the new ‘Turnitin’ tool we can use to avoid AI-plagiarism.”
Computer science professor Jay Lim said that ChatGPT is “definitely” a teaching concern of his.
He called ChatGPT a “double-edged sword” for its ability to either enhance or detract from students’ learning. He also pointed out that ChatGPT, while often correct, is sometimes confidently wrong, which limits its utility for students seeking quick answers.
Nevertheless, Lim said that “we need to embrace technology” because there is no way to prevent students from using ChatGPT. Rather, Lim thinks that faculty members should focus their efforts on integrating ChatGPT into their courses to help students rather than harm them.
English professor Kim Shirkhani, who is also the ENGL 120 course director, said that while she hopes students would want to do their own writing, there are also effective ways to mitigate the risks of academic dishonesty.
“The AI does a good job of summarizing ideas and even generating parts of argument, but doesn’t yet create the kind of nuanced, alive, implicative writing we teach in 120,” Shirkhani told the News. “We also have a few bulwarks — in the drafting and workshopping aspects of the course, which help establish early on a given student’s writing characteristics.”
ChatGPT is built on a language model with 175 billion parameters, making it one of the largest and most powerful AI language models ever released.
Correction 2/14: A previous version of this article misspelled Shirkhani’s surname.