
Research suggests readers struggle to tell the difference between human-written essays and those generated by artificial intelligence.

In a project organized by four researchers, including three from the School of Medicine, readers blindly reviewed 34 essays: 22 written by humans and 12 generated by artificial intelligence. On average, the readers rated the composition and structure of the AI-generated essays higher. However, if they believed an essay was AI-generated, they were less likely to rank it among the overall best essays.

Ultimately, the readers accurately distinguished AI-generated essays from human-written ones only 50 percent of the time, raising questions about the role of AI in academia and education.

“How would we even know, other than the word of the author, whether the paper was assisted by generative AI?” Dr. Lee Schwamm, associate dean of digital strategy and transformation at the School of Medicine, said. 

Schwamm was part of the team that conducted the project.

Given the increased prevalence of generative AI, Schwamm was curious about how similar it was to human work. While the project’s readers couldn’t effectively differentiate between AI and human writing, Schwamm and his colleagues noticed some unique characteristics.

“Human essays are very different from AI essays,” Schwamm said. “The AI essays are very predictable and consistent internally in terms of their sentence structure, the kinds of words they use, the tone that they adopt.”

According to Schwamm, AI writing is an inevitable part of today's world, and its role in society is up for debate.

Schwamm thinks that the combination of AI and human thought could be helpful, potentially leading to more desirable products in many aspects of life.

“We have to decide whether or not we think that there’s a new playing field,” Schwamm said. “Where do we incorporate AI?”

Generative AI is already being incorporated into Yale education.

Economics professor Tolga Koker incorporates generative AI into his introductory microeconomics class. He shares one essay question from his midterm and final exams ahead of time, and students are allowed to use AI to formulate their responses.

On this semester’s midterm, Koker asked students how they could use economic concepts, such as the representativeness heuristic and framing, to succeed at their dream jobs.

“It’s better to teach the students the new technology so that they will be prepared when they graduate,” Koker said.

Technology is advancing, Koker said, and it doesn’t make sense to ignore something that will continue to be an influential part of students’ lives. However, he added, AI can’t do everything for students.

Koker does not think AI inherently makes students lazy, either.

“There is [still] competition among the students,” Koker said. “Whoever uses the AI better will be a bit ahead.”

However, some educators do not think AI will play a part in their teaching. 

Biology lecturer Amaleah Hartman believes AI has a limited role in her classes. 

According to Hartman, students can’t use AI to fabricate lab results; they must do the work themselves to complete lab reports. 

“I simply acknowledge that AI tools exist and should be used responsibly,” Hartman wrote to the News. “It can be used to slim down their original writing to fit word limits or communicate more effectively, not to do their homework for them.”

Hartman hopes that students recognize the dangers of AI.

If students misuse AI, Hartman said, they might not learn as much, and their education will be less meaningful.

“I hope students fear more than just getting caught,” Hartman wrote. “[They need to be aware of] the temptation to let AI do their thinking for them and therefore not learn for themselves.”

According to Yale College, inserting AI-generated text into an assignment without proper attribution violates academic integrity.

ANYA GEIST