People give others credit for the outcomes of their decisions only if those decisions were rational, a new Yale study has found.

In a series of seven experiments, researchers examined how individuals attribute responsibility based on three factors — the options available to decision makers, individual differences in responsibility assignment and conflicting priorities in decision-making. The research paper appears in the March 2015 volume of Cognitive Psychology.

“What’s most interesting about the results is that, although people often don’t make optimal decisions on their own behalf, they assume that other people do,” said Lance Rips, senior author of the research paper and a psychology professor at Northwestern University.

According to Samuel Johnson GRD ’17, a PhD candidate in the Department of Psychology who carried out the research, the study’s most important finding is that people give others credit for the outcomes of their decisions only when the decisions are optimal. If a person makes a suboptimal decision, they are seen as less responsible — even if a positive outcome occurs.

Imagine a person who wants a shrub's flowers to turn red, Johnson said. He chooses between fertilizer A, which has a 50 percent chance of turning the flowers red, and fertilizer B, which has a 30 percent chance. By choosing A, he behaves optimally. Now take a different example, where A still has a 50 percent chance of turning the flowers red but B has a 70 percent chance. He chooses A anyway and thus behaves suboptimally. Even if the flowers turn red in both cases, he would be seen as less responsible for the positive outcome in the latter case.

Though not every study participant employed an optimality theory when evaluating others' decisions, two-thirds of participants did. The remaining one-third attributed responsibility based solely on the outcome, not on the quality of the decision itself.

The research shows just how complex a psychological phenomenon causal reasoning is, said Jonathan Phillips GRD ’15, now a Harvard postdoctoral fellow whose research focuses on the intersection of morality, causation and social reasoning, in a Sunday email.

The research was inspired by competing theories of human decision-making. Classical economists argue that people behave optimally, while behavioral theorists see people as having sharp limitations that prevent optimal behavior.

“We thought that, even though most researchers in behavioral economics see people as being far from optimal, people might nonetheless have an intuitive understanding of human decision-making that is more like the classical theories of Adam Smith,” Johnson said. The findings confirmed that hypothesis — responsibility attribution, the study shows, is based on an expectation that people will be rational.

Rips said the finding has implications for how people act in strategic contexts, and Johnson said people may expect others to behave more optimally than is warranted in business or government dealings. The study may also show that people expect an unrealistic degree of perfection from others in moral decision-making, Johnson added.

Exploring the assumptions that underlie high-level decision-making is important for understanding how our brains work in general, Phillips said. He added that the study has important implications for social cognition and economic theory.

One avenue for further work is exploring the reason for the link between optimality and responsibility attribution, Rips said. In other words, “Why should making a poor choice lead others to think that the decision maker wasn’t fully responsible?”

Johnson is conducting a follow-up study examining situations in which decision makers do not know the quality of their available options. So far, he has found that others still expect decision makers to behave optimally even when they lack that knowledge.

QI XU