Daylian Cain, assistant professor of organizational behavior at the Yale School of Management, is an expert in conflicts of interest, especially cases in which one person has interests at odds with another individual for whom he or she serves as an adviser. He spoke to the News about disclosure regulation in the financial industry.
Q. What drew you to research disclosure policies?
A. When I started grad school at Carnegie Mellon, Don Moore and George Loewenstein were researching the psychology of conflicts of interest. I recounted my thoughts on disclosure, and the three of us started to design experiments to look into these issues. Prior research on the “anchoring bias” (the tendency for people to mentally stick to, or be affected by, what was in their heads just a moment ago) had already shown that even suggestions known to be randomly generated have a powerful effect on subsequent judgment. I worried that if disclosing that some advice is totally random does not sufficiently warn an audience, then surely some vague disclosure that advice “may be biased by a financial conflict of interest” is also likely to fall short as a warning device. Research had long shown that biased advice is very difficult to ignore. Our research showed not only that disclosure can fail to help matters sufficiently, but also that it can have perverse effects; for instance, it can morally license advisers to give even more biased advice “because the audience has been warned” (caveat emptor). With Sunita Sah, Loewenstein and I have also shown that disclosure can place inappropriate pressure on the audience to heed the advice — for example, in order to avoid insinuating that the doctor’s advice has been corrupted.
Q. Are there any other topics in the financial world that are drawing your interest?
A. Well, we still think that transparency is a good thing and agree that disclosure will surely be part of the solution. So now we are more focused on how to improve disclosure because the word is out that it is no panacea. Also, I am interested in the notion that it is not only the intentionally corrupt that give bad advice, but also the unintentionally biased. Well-meaning professionals often think that they are being objective when in fact their advice partly serves their own interest. If the public better appreciated this fact, perhaps disclosure would serve as a better warning. As it stands, most audiences think that their advisers would never intentionally mislead them, conflict or no conflict. Even if this were true, bad advice can be given unintentionally: good intentions do not ensure good advice.
Q. How do you collaborate with peers from other schools? Are most of the experiments conducted here at Yale?
A. I collaborate across several labs, at Duke, Carnegie Mellon, [the University of Pennsylvania], in Germany, and here at Yale. I also often use online subject pools, some maintained by Yale SOM (E-lab). We use many Yale community members in our experiments, as well as people from around the world. The Internet (and Skype) has widened my ability to collaborate and collect data across brick-and-mortar borders.
Q. What sort of policy recommendations would you make to avoid the pitfalls your research has uncovered?
A. We found positive results in presenting unbiased second opinions alongside advice that was disclosed as conflicted. But these unbiased opinions basically need to be put right in front of the decision-maker. In a paper led by Sunita Sah, we also found that “cooling-off periods” helped, as did having advisees make their choices away from the prying eyes of the adviser. Other research has shown that savvy repeat players can learn to use disclosure better. On advice-taking more generally, research suggests that actively “considering the opposite” might help. In other words, when getting advice, first consider what might be wrong about the advice (especially when it is advice that you are happy to hear), rather than first considering all the evidence in favor of it. Although not always appropriate, using the “wisdom of the crowd” suggests averaging several pieces of advice rather than choosing one to run with. And, for example, when it comes to listening to advice from alumni interviewers on whom to accept at Yale, keeping a clear sense of what your “prior” inclination was (based on standardized scores, GPA, etc.) and deciding ahead of time how much weight to put on the interviewer’s advice can also help protect against being overly affected by expert advice. Finally, linear models can sometimes help substitute for expert opinion, but that is another story.
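The two advice-taking heuristics described above (averaging several pieces of advice, and deciding ahead of time how much weight the adviser's input gets relative to one's prior inclination) can be sketched in a few lines. This is purely an illustrative toy, not anything taken from the research itself; the function names and all numbers are invented for the example:

```python
def crowd_average(estimates):
    """Wisdom of the crowd: average several pieces of advice
    instead of choosing one to run with."""
    return sum(estimates) / len(estimates)

def weighted_update(prior, advice, advice_weight):
    """Precommit to how much weight the adviser's input gets,
    then blend it with your prior inclination."""
    assert 0.0 <= advice_weight <= 1.0
    return (1 - advice_weight) * prior + advice_weight * advice

# Hypothetical example: an applicant's prior score (from GPA,
# standardized tests, etc.) is 70; three interviewers rate the
# applicant 90, 60, and 75, and we precommitted 30% weight to them.
advice = crowd_average([90, 60, 75])      # averages to 75.0
final = weighted_update(70, advice, 0.3)  # stays close to the prior
```

The point of precommitting `advice_weight` before hearing the interviews is that the blend cannot be quietly renegotiated after a persuasive (and possibly biased) adviser has spoken.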
Q. Do you think financial models have started to take behavioral factors into account?
A. Ours remains a new and growing field, but yes, heads are turning. Regulators have been surprisingly aware of and open to the findings of my disclosure research. On the other hand, it is sometimes (neoclassically) thought that irrationality can be beaten out of the market. The problem with this view is that there are “limits to arbitrage”; for instance, while the housing bubble might have been correctly seen as an irrational mispricing and deviation from fundamental values, it remained difficult to predict exactly when to “short” (or predict downturns in) the market. One can go bankrupt trying to outsmart the masses. Perhaps we ought to better understand ourselves first, and psychological insights can help us do just that. Also, Yale has one of the best psych programs on the planet, so we are fortunate to have so many sources of insight on campus. That said, knowing the many ways that the human mind is limited does not imply that those limits disappear once noticed. Clearly an exciting direction for future research is how to overcome these limits — or (as I tell the MBAs) at least how to “manage” them.