Profs debate field experiments

For Ian Shapiro, a political science professor and the director of the MacMillan Center, controversy over research methods in political science can be best understood as a new spin on an old joke.

“How many political scientists does it take to screw in a light bulb?” Shapiro asked. “Twenty — one to screw in the light bulb, one to hold the ladder and 18 to debate which methodology they should use to do it.”

At a debate billed as “Experimental Methods in the Social Sciences,” more than 200 people converged on Luce Hall Tuesday to hear three Yale political science professors and a Princeton University economics professor discuss the use of field experiments, which has become a contentious issue in the discipline over the past decade. In such experiments, researchers use small-scale trials in cities and villages to make predictions about large-scale phenomena.

For years, social scientists have relied on observational research, such as psychological studies and laboratory simulations. But in 1998, Donald Green, the director of the Yale Institution for Social and Policy Studies, started using field experiments, and the method began to grow in popularity among social scientists in 2000, Green said. The ISPS alone has completed more than 100 field experiments over the last decade.

“There are randomized field experiments in every volume of every leading journal,” Green said in an interview before the debate. “I consider it to be the best work that is being done in political science these days.”

But Angus Deaton, a professor of economics and international affairs at Princeton, criticized the widespread use of field experiments in research.

He said researchers should think carefully about when it is appropriate to conduct experiments, rather than replacing observational methods altogether.

“The issue is not whether one should use randomized control trials, but rather what you should use them for,” he said.

Political science chair Susan Stokes expressed her support for the method as a whole but criticized the skeptical claim that field experiments are inherently flawed by their very nature.

Dawn Teele GRD ’15, a first-year graduate student in political science and the debate’s organizer, said she got the idea for the event while working at Yale’s Economic Growth Center for Development Economics, where field experiments were extremely popular among researchers. Teele said that despite what she sees as the potential negative impact of field experiments on research subjects, sponsors are pressing for experimental projects.

“The Millennium Challenge Corporation and the Gates Foundation want scientific evidence that their development programs are successful,” Teele said. “There is a trend, and the trend is towards experiments because there is a small contingent of very loud people who are saying this is the only way you can understand cause and effect.”

Teele said researchers do not always work to counteract the effects of their experiments on research subjects at the end of a study, citing as an example the distribution of small business loans across a cluster of villages. If researchers do not evenly distribute funds to all participants, she said, they effectively create economic inequality.

Though most undergraduate and graduate students who study political science are still using observational methods, Green said opportunities for field experiments by Yale students are continually growing.

“Of the undergraduate projects nationwide that involve experimentation, I would say that half of them come out of Yale,” Green said.

David Broockman ’11, for one, took “Experimental Methods in Political Science,” Yale’s only undergraduate political science offering in experimental research, in spring 2008. Instructors Green and Alan Gerber encouraged Broockman to continue work on his final project after the course ended, and the result was published in Political Analysis, a political science journal, last month.

Despite Green’s advocacy of experimental research methods, Broockman said, Green himself advised him to try other methods to best suit his research question.

In an interview, Shapiro, director of the MacMillan Center and a panelist at the debate, said the most effective research begins with a question and tailors the method to suit it.

“I’ve long been an advocate of the view that political science should start with problems in the world,” Shapiro said.

At its heart, Shapiro said, the debate offered great insight into the mind of the modern political scientist.

The three audience members interviewed said the discussion, which Green described as the beginning of an ongoing debate, was thought-provoking.

Josh Simon GRD ’12, who is working toward his doctorate in political science, said he was impressed by the panel.

“We don’t do enough debates in political science,” Simon said. “It’s really enlightening to see people actually engage with the question and then respond to others’ answers.”

Correction: October 21, 2009

The original version of this article incorrectly used the term “field research” to refer to field experiments, or randomized controlled trials.

Comments

  • Political Scientist

    I would like to point out that the authors of this piece misuse the term “field research” several times during the article. When they say “field research,” what they mean are “field experiments” of the sort that Professor Green advocates.

    However, true “field research” consists of many research methods beyond experimental research, such as in-depth interviews, archival research, surveys, and others. In short, “field research” means going out to “the field” (i.e., where the processes one is interested in are happening) to do research.

    The panel, as this article’s authors mistakenly suggest, was not a debate over whether field research is a valuable method in the social sciences. Rather, it was a debate over whether field experiments, one type of field research, are the best way to conduct social scientific inquiry.

  • anonymous

    This is a very poor summary of the debate for several reasons, of which I will mention a couple here.

    First, the debate is about experimental vs. observational research and not about “field research”. Field research can be either observational or experimental. And experiments can be field experiments, laboratory experiments, internet experiments, survey experiments, etc. Even the title of this article is misleading, let alone its content.

    Second, the article mentions a point in the debate where one of the speakers claimed that we should not be overly skeptical about the experimental method and say it is inherently flawed. I think this comment must have been about observational research, not experimental research. At no point in the debate that I remember did anyone argue that experiments were inherently flawed. Professor Stokes’s arguments about the “radical skeptic” seemed to be more geared towards experimental researchers who claim that attempts at observational inference are inherently flawed due to the potential of unobserved heterogeneity.

    While it is important to summarize debates in social and policy sciences for the broader university community, this article demonstrates a lack of understanding of the basic topics under debate, let alone any details regarding specific points made by the panelists.

    The editing process at YDN should check articles both for content and for style. Otherwise, the news will continue to publish similar articles that confuse the university community by misinterpreting academic work clearly beyond the understanding of the authors.