A Yale College Dean’s Office committee is working to enhance the current system of online course evaluations by putting together a set of recommendations that could be implemented as soon as this semester.
The Teaching and Learning Committee has been investigating the topic of online course evaluations since fall 2015 and presented its initial recommendations at a Yale College Faculty meeting in April. Faculty members offered feedback at a discussion in October, and the committee hopes to revise its suggestions by next month. Proposed changes include eliminating redundant questions, adding new questions and incorporating a numeric component for faculty ratings, in addition to the existing open-ended questions. Though the majority of feedback has been positive, this final suggestion has drawn ire.
“Our goal is to make sure that the partnership between students and faculty, on how students experience the courses they take at Yale, be preserved and maintained,” committee chair and professor of molecular biophysics and biochemistry Scott Strobel said. “The feedback that students provide on the courses are valuable both to the faculty member teaching the class and to students using that material for course selection in the future. We don’t want to do anything that would disrupt that partnership, and the committee is looking for ways to enhance that partnership.”
The committee, which met once every two weeks during the 2015–16 academic year, read literature on the topic of course evaluations and looked at studies conducted at other universities before assembling recommendations of its own, Strobel said. He emphasized the importance of preserving and improving the “social contract” between faculty and students who rely on evaluations data.
The committee identified several aspects of the course evaluation system that could be improved, such as redundant questions and questions that may reflect personal biases about students’ experience in the class, Strobel said. In response, Strobel said the committee suggested replacing redundant or unhelpful questions, such as those asking students to declare whether they enrolled in the class to satisfy major or distributional requirements, with more introspective questions.
Zac Krislov ’16, one of three undergraduate members of the committee, said research shows that asking questions at the beginning of the evaluation form about how much a student learned and how much effort they put into a class generally yields higher-quality evaluations.
“Course evaluations data is a big question of who is allowed to see what data, and what they are used for,” Krislov said. “The current framework for this was decided a number of years ago, and I think the student use of data has grown in ways that nobody could have really expected at the time with things like Yale Blue Book. Yale is remarkably open with its course evaluations data in terms of showing that to students.”
The proposed recommendations would offer students the chance to weigh in on issues such as the frequency with which they get feedback from professors and the degree to which they feel intellectually challenged, Strobel said, adding that the number of questions or the total time and energy required of students to fill out evaluations would not significantly increase.
These changes were piloted as a midterm evaluation for PSYC 200 last spring, according to committee member and professor of psychology Gregory Samanez-Larkin. Students reported they “did not feel it was excessively more burdensome,” Strobel said. Committee member Kelsi Caywood ’18 said the changes to the form are intended to help students and administrators get a more comprehensive sense of teaching quality across departments.
“While no set of evaluation questions can perfectly capture the quality of a given course, I do think the Teaching and Learning Committee’s recommendations are an improvement over our current questions and hope they can be adopted soon,” said committee member Marla Geha, a professor in the astronomy and physics departments.
Strobel said feedback from faculty has generally been positive and demonstrated growing support for the recommendations, though other faculty members expressed dissatisfaction with the suggestion to implement a numeric scale for rating professors and potentially make that data available to other faculty members.
Strobel said the reasoning behind this recommendation is that seeing how other faculty members are doing provides perspective on the quality of one’s own teaching. Krislov added that while open-ended questions provide more in-depth feedback, a quantifiable scale offers standardization and the easy comparability of numbers.
“[The existing system of] evaluations for professors are great,” history professor Glenda Gilmore said. “They’re learning tools. Publishing them widely turns them from learning tools into marketing tools for the course, and I’m not sure if Yale College should be in that business.”
Gilmore also cited recent studies indicating that numeric evaluations may raise concerns about racial and gender inequality because the studies suggest that female professors and professors of color are consistently rated lower than their colleagues.
Existing literature shows that in general, women professors tend to be evaluated more positively in smaller courses, and male professors are often rated higher in larger courses, which might play into gender stereotypes about nurturing versus commanding an audience, Krislov said.
The committee is considering removing or amending the suggested numeric question based on faculty response, Strobel said.
Committee member and professor of molecular biophysics and biochemistry Michael Koelle said the proposal states that there are “inappropriate” uses of student evaluations, and it remains undecided whether and how publicizing such data would affect tenure and promotion decisions. He added that student evaluations should not be the sole factor when considering promotion.
“Even in just considering the contribution to teaching by a faculty member, student evaluations should just be one component of the process,” Koelle said. “Everyone agrees that it would be inappropriate to reduce a faculty member’s contribution to teaching to a single number and that is not what the committee proposes.”
Numeric ratings would help separate the evaluation of a course from the evaluation of a professor, particularly in classes that are co-taught by several faculty members, Strobel said. They would also provide clearer evaluations of professors teaching fixed curricula, such as the Directed Studies program, over which individual professors have no control, Strobel added.
The committee hopes to preserve the “fantastic richness” of open-ended, narrative questions, Strobel said. He added that the committee is meeting with Faculty of Arts and Sciences Senate members again on Tuesday before revising its recommendations, which could be implemented at the end of this semester if agreement is reached.
“I am impressed by the Teaching and Learning Committee’s report, which represents a year of careful thought by a faculty committee on an issue of central importance to both faculty and students,” Dean of the Faculty of Arts and Sciences Tamar Gendler said.