During my not-so-distant days as an undergraduate, I remember our dean of judicial affairs saying that the panel of students and faculty charged with deciding the fate and future of individuals accused of serious offenses, like sexual assault, had to be about 75 percent sure to convict. While this standard is far more stringent than was necessary to free the Duke lacrosse players charged with gang-raping an exotic dancer in March, it still produces an alarming number of false convictions and their undeserved consequences.

In 2004-’05 the Yale College Executive Committee and its curiously named subsidiary, the Coordinating Group, handed down five suspensions, 16 probations, 84 reprimands, two withholdings of degree, 13 acquittals and one unidentified punishment for sexual assault. Yale, whose threshold for guilt is “by the clear preponderance of evidence,” is biased toward prosecution. After a necessary dive into the American Heritage Dictionary, I found that preponderance means “superiority in weight, force, importance or influence.” By the margin of mere superiority, one would expect a false conviction rate much higher than the already high 25 percent implied by the 75-percent standard at Duke, where I was an undergrad.

Current options for those who are falsely accused or convicted are not encouraging. Polygraph tests measure signs of sympathetic nervous system activity, such as increased breathing, sweating or heart rate, that are sometimes associated with the act of deception. While not bad at identifying the “guilty” in experimental settings, they are of limited utility because they are relatively bad at correctly classifying “innocent” people as innocent. What they do not measure, however, is our organ of deceit: the brain.
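To see why a test that catches most liars can still fail the innocent, consider a minimal sketch. The sensitivity and specificity figures below are hypothetical, chosen only to illustrate the asymmetry, not drawn from any polygraph study:

```python
# Illustrative sketch (hypothetical numbers, not real polygraph statistics):
# a test can be good at flagging liars yet still misclassify many truth-tellers.

def misclassified_innocents(n_innocent, specificity):
    """Expected number of innocent subjects the test wrongly flags as deceptive."""
    return n_innocent * (1 - specificity)

sensitivity = 0.85   # hypothetical: share of liars the test correctly flags
specificity = 0.60   # hypothetical: share of truth-tellers it correctly clears

# Out of 100 innocent examinees, the test would wrongly flag:
flagged = misclassified_innocents(100, specificity)
print(round(flagged))  # 40 innocent people labeled deceptive
```

Even with a respectable hit rate on the deceptive, a middling specificity condemns a large share of honest examinees, which is exactly the column's complaint about polygraphs.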

Six years ago, Daniel Langleben launched a program at the University of Pennsylvania designed to do exactly that. Using a brain imaging technique called functional magnetic resonance imaging, which measures oxygen consumption (a proxy for cellular activity) in different regions of the brain, he was able to distinguish, on average, people who were lying from people who were telling the truth about playing cards they held. Subsequent refinements culminated in a paper published last year in which his group reported telling lies and truth apart in individual subjects in a similar experiment with 99 percent accuracy.

Many neuroimaging experts argue that such an approach cannot work in the real world, where much more is at stake than a prize for successfully concealing cards. While their point that the real world and the laboratory differ has merit, it is equally plausible that detecting lies about committing rape will be much easier than ferreting out someone who denies holding the five of clubs.

Happily, these are all testable questions. Maybe the approach won’t work in practice, or on sociopaths or necrophiliacs, but in the absence of data suggesting otherwise, it seems like a good idea to see how the tests fare at No Lie MRI and Cephos, the two companies bringing the technology to market later this year.

Yale’s unique position as a non-state administrator of justice gives it more flexibility in deciding what kinds of evidence to consider during Executive Committee deliberations, so the technique could be pioneered more easily in this setting than elsewhere. Sexual assault cases in particular might lend themselves well to this kind of analysis, since the vast majority pit the testimony of one party against that of another. Provided that the algorithm reliably classifies the negative control questions (e.g., “Did you take a trip together to Burkina Faso last night?”) and the positive control questions (“Did you have dinner together last night?” assuming you did and have witnesses), I think it would be a mistake to refuse to consider such evidence, given the accuracy of our current system.

Matthew Gillum is a second-year graduate student in molecular and cellular physiology. His column appears on alternate Fridays.