The ability to publish competitive research often depends on the speed at which computers process data, but Yale’s computing systems did not make the cut for a ranking of the 500 fastest supercomputers in the world.

The list, known as TOP500, comes out twice a year, and universities such as Harvard, Stanford, the University of Texas and Indiana University have all been included over the past five years. Computers that process data faster increase a researcher’s chances of publishing a finding first, said Chuck Powell, associate chief information officer of Information Technology Services, who added that universities’ rankings in computing fluctuate frequently.

Depending on funding, one institution may purchase a large system and be a leader for a few years, but then new models will make that school slip out of the top spots, added David Frioni, manager of Yale’s High Performance Computing group. Still, Yale has never made the list.

“It’s a bit of the Old West and gunfighter thing,” Powell said. “There’s always the fastest kid in town.”

At Yale, high-performance computers are vital to a range of academic pursuits: 24 departments, ranging from geology and geophysics to linguistics, and about 1,000 professors and graduate students use the University’s eight supercomputer clusters located on Central and West Campus. The clusters are all named “Bulldog” and differentiated by a letter of the alphabet; the newest is “Bulldog N.”

Yale tries to buy a new cluster every year to improve its computing speed, Powell said, but typically does not purchase machines of top-500 caliber. He added that the Provost’s Office, in particular Deputy Provost for Science and Technology Steven Girvin, has been investing more in high-performance computing to make sure Yale stays competitive with peer institutions. Powell said he is optimistic that Yale is on the rise.

“We’re always looking to expand and grow and try to keep up with our peers,” Frioni said. “I’m sure we’re ahead of some and behind some.”

Girvin was traveling and unavailable for comment.

Researchers working with enormous data sets sometimes need a supercomputer to examine a large amount of data or to fit their findings to a proposed formula. For example, a biological physicist studying how a protein folds can use a supercomputer to watch protein dynamics play out atom by atom on the computer monitor.

Some professors have decided not to rely on the University’s computing systems for their research because Yale’s machines are not adequate for their projects.

Charles Baltay, a professor of physics and astronomy, leads a team of researchers studying astrophysics and cosmology with a telescope in Chile that gathers about 50 gigabytes of data a night. The team has its own computing system in Gibbs Laboratory on Science Hill and sends the data to it over the Internet. The computers crunch numbers that reveal the positions and brightness levels of supernovas, which Baltay said can give insight into the age and future development of the universe.

While Baltay said the computers in Gibbs are fast enough to accomplish what he wants, he added that if they were ten times faster, he could finish his work sooner and perform “more fancy analyses.” Baltay said he did not think the computing systems available through ITS would be as effective.

“We don’t work through ITS at all,” Baltay said. “We’re doing specialized computing, which is not their strength.”

Yale does not currently have any graphics processing units (GPUs), newer processors that can reach higher speeds than Yale’s current systems, Frioni said. GPUs can deliver more calculations per second than a comparable microprocessor of the same silicon area while expending less energy per calculation.

Professors at the School of Forestry & Environmental Studies have expressed interest in acquiring systems that would allow them to study centuries of climate data to assess global warming, Powell said.

The world’s fastest supercomputer, located at the National Supercomputing Center in Tianjin, China, performs 2.57 quadrillion calculations per second.