Yale lags in supercomputing

The ability to publish competitive research often depends on the speed at which computers process data, but Yale’s computing systems did not make the cut for the ranking of the 500 fastest supercomputers in the world.

The list, known as TOP500, comes out twice a year, and universities such as Harvard, Stanford, the University of Texas and Indiana University have all been included over the past five years. Computers that can process data faster increase a researcher’s chances of publishing a finding first, said Chuck Powell, associate chief information officer of Information Technology Services, adding that universities’ rankings in computing fluctuate frequently.

Depending on funding, one institution may purchase a large system and be a leader for a few years, but then new models will make that school slip out of the top spots, added David Frioni, manager of Yale’s High Performance Computing group. Still, Yale has never made the list.

“It’s a bit of the Old West and gunfighter thing,” Powell said. “There’s always the fastest kid in town.”

At Yale, high-performance computers are vital to a range of academic pursuits: 24 departments, ranging from geology and geophysics to linguistics, and about 1,000 professors and graduate students use the University’s eight supercomputer clusters located on Central and West campuses. The clusters are all named “Bulldog” and differentiated by a letter of the alphabet; the newest cluster is “Bulldog N.”

Yale tries to buy a new cluster every year to improve its computing speed, Powell said, but typically does not purchase machines of top-500 caliber. He added that the Provost’s Office, in particular Deputy Provost for Science and Technology Steven Girvin, has been investing more in high-performance computing to make sure Yale is competitive with peer institutions, and said he is optimistic that Yale is on the rise.

“We’re always looking to expand and grow and try to keep up with our peers,” Frioni said. “I’m sure we’re ahead of some and behind some.”

Girvin was traveling and unavailable for comment.

Researchers working with enormous datasets sometimes need a supercomputer to examine a large amount of data or to fit their findings to a proposed formula. For example, a biological physicist studying how a protein folds can use a supercomputer to watch the protein’s dynamics play out atom by atom on the computer monitor.
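
To give a sense of the “fit their findings to a proposed formula” case, here is a minimal sketch in Python, using a made-up exponential-decay model and simulated measurements rather than any Yale group’s actual data or code:

```python
# Toy illustration: fitting noisy measurements to a proposed formula.
# The model and data are hypothetical; real analyses run on far larger datasets.
import numpy as np
from scipy.optimize import curve_fit

def model(t, amplitude, decay_rate):
    """Proposed formula: a simple exponential decay."""
    return amplitude * np.exp(-decay_rate * t)

# Simulate a small set of noisy observations.
rng = np.random.default_rng(seed=0)
t = np.linspace(0.0, 10.0, 200)
observed = model(t, 3.0, 0.4) + rng.normal(scale=0.1, size=t.size)

# Least-squares fit of the formula's parameters to the observations.
params, covariance = curve_fit(model, t, observed, p0=[1.0, 1.0])
print("fitted amplitude = %.3f, fitted decay rate = %.3f" % tuple(params))
```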

Some professors have decided not to rely on the University’s computing systems for their research because Yale’s machines are not adequate for their projects.

Charles Baltay, a professor of physics and astronomy, leads a team of researchers studying astrophysics and cosmology with a telescope in Chile that gathers about 50 gigabytes of data a night. The team has its own computing system in Gibbs Laboratory on Science Hill and sends the data to it over the Internet. The computers crunch numbers that reveal the positions and brightness levels of supernovas, which Baltay said can give insight into the age and future development of the universe.
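
A toy illustration of that kind of brightness measurement, using a small simulated image and a simple aperture sum (not the Gibbs Laboratory pipeline, whose details are not described here):

```python
# Toy sketch: estimate a source's brightness at a known position in an image.
# The image, position and numbers are all made up for illustration.
import numpy as np

def aperture_brightness(image, x, y, radius=5):
    """Sum the pixel values within 'radius' of (x, y), after subtracting
    the image's median value as a rough estimate of the sky background."""
    yy, xx = np.indices(image.shape)
    in_aperture = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    background = np.median(image)
    return float(np.sum(image[in_aperture] - background))

# A fake 100x100 image: flat, noisy sky plus one bright, blurred point source.
rng = np.random.default_rng(seed=0)
image = rng.normal(loc=100.0, scale=5.0, size=(100, 100))
yy, xx = np.indices(image.shape)
image += 500.0 * np.exp(-((xx - 40) ** 2 + (yy - 60) ** 2) / (2 * 2.0 ** 2))

print("estimated brightness:", aperture_brightness(image, x=40, y=60))
```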

While he said the computers in Gibbs are fast enough to accomplish what he wants, machines ten times faster would let him finish his work sooner and compute “more fancy analyses.” Baltay said he did not think the computing systems available through ITS would be as effective.

“We don’t work through ITS at all,” Baltay said. “We’re doing specialized computing, which is not their strength.”

Yale does not currently have any graphics processing units (GPUs) — new processors that can reach higher speeds than Yale’s current systems, Frioni said. GPUs can deliver more calculations per second using the same silicon area as a comparable microprocessor while expending less energy per calculation.

Professors at the School of Forestry & Environmental Studies have expressed interest in acquiring systems that would allow them to study centuries of climate data to assess global warming, Powell said.

The fastest supercomputer attains 2.57 quadrillion calculations per second and is located at the National Supercomputing Center in Tianjin, China.

Comments

  • BaruchAtta

    I suppose that the “**Yale Elite**” wouldn’t even notice **Yale’s supercomputer ranking**, nor care. You know who you are: you who were at the naked frat tap party. *Party* Garth.

  • physics14

    A better article would be about Yale lagging in supercomputing personnel. So there are 24 departments and about a thousand professors and grad students who use the systems, but how many people offer scientific support for those users? Two, I think. One per five hundred people. They’re both very helpful for solving problems, but they just don’t have the time to help where it would really be useful – working in tandem with students to translate new ideas into actual, efficient codes.

    I have a friend at Oak Ridge who recently wanted to enhance her model, and even though she’s already much better with computers than most people, they still paired her up with an expert who met with her three times a week, for half a day at a time, for two months straight to ensure she was getting the most out of her time and the computing resources there.

    I would guess these systems cost millions of dollars, and Yale has a number of them. Spending the next million on a few people would be of far more use to many here than additional hardware.

  • dalet5770

    I can’t wait for the day when we see Yalies walking around with mirrors on their heads so they can project anywhere they want to be with one caveat – a little meditation :)

  • briand

    @physics14:

    As someone who used to work with ITS to provide that support, I agree completely – both the IT and the HPC side of things need additional support, and I’ve argued that this support would have a greater multiplier effect on productivity than simply acquiring new hardware. In ITS, the four main people doing the actual work on the systems are quite talented, but four people managing 6–10 active systems while also being tasked with basic user support, meetings, planning for future systems, investigating new technology, etc. … well, it gets to be a bit much. (I’m inclined to think the ‘thousand users’ figure is overstated, since many of them aren’t active, to my knowledge.)

    So, certainly, we need some additional technical people in ITS. And in addition to more of them, getting them some HPC-centric training (such as writing basic parallel codes) seems like a win/win – it helps Yale, since they can then better address basic user questions, and it helps them by teaching them new skills and furthering their careers.

    The other side of things is the HPC experts – and we definitely need more of them, especially since the FAS community is down to one, I think. I can cite projects that I worked on where even tremendously simple changes (switching a compiler, for example) resulted in a two-fold difference in speed, which means that if that were a general rule of thumb, Yale could get by spending half as much on hardware and yet achieve the same scientific throughput simply by educating people on compiler choices, optimization levels and basic programming tips and tricks. Other changes, such as algorithmic ones, offer even greater benefits – for example, switching an MD-type code from an O(N^2) algorithm to an O(N) one (a toy sketch of that kind of change appears after the comments below).

    Then there are the hardware challenges – at the moment, GPUs offer some incredible advantages for certain types of applications, but if you’re a professor in Economics, for example, then learning the underlying layout of the hardware of a GPU and then teaching yourself CUDA or OpenCL on your own is a daunting task. However, if there is an expert on campus who has already worked with three other people to translate their codes to CUDA, chances are he or she would be able to partner up with this professor and reduce the time needed to get this done to something manageable.

    [continued below]

  • briand

    The good news is that I think all of this is widely agreed upon by the faculty here… with the bad news being that the faculty seem to have almost no control over budgets and personnel, and with several different committees around, it’s unclear to me which one sits in a leadership role, if any. Meanwhile, the higher-level administration of the university clearly has its hands full with other issues facing the Yale community. Many places, even academic ones like TACC, have a director with a scientific background who has a good measure of control over all these things. Even if that isn’t the perfect solution for Yale, it certainly seems like a step in the right direction to me.

    Anyway, that’s my (long) two cents. I’d encourage you to speak with Meg Urry or Yoram Alhassid if you have ideas or concerns, since I think both are very interested in making sure that Physics is competitive when it comes to computational research here. I’ll get off my soapbox now with the standard disclaimer that the above thoughts are simply my own, as someone who would very much like to see Yale become a leader in HPC not because we have fast, large systems but because we have top-rate science being done on those systems.

    – Brian

  • dalet5770

    Linking computers is always an option for supercomputing over a broad spectrum, and it can be done on a relative budget. Sure, you won’t be able to do the earth science, but you can always look for alien life.
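
As a toy illustration of the algorithmic change briand mentions, the sketch below compares an all-pairs O(N^2) neighbor search with a cell-list search that scales roughly as O(N) for short-range interactions. The particle count, box size and cutoff are made up, and this is not code from any Yale system:

```python
# Toy sketch: the same neighbor-finding problem solved two ways.
# Hypothetical particles in a box; only pairs closer than 'cutoff' interact.
import numpy as np
from collections import defaultdict
from itertools import product

def pairs_all_pairs(positions, cutoff):
    """O(N^2): test every pair of particles against the cutoff."""
    n = len(positions)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                pairs.append((i, j))
    return pairs

def pairs_cell_list(positions, cutoff):
    """Roughly O(N): bin particles into cells of size 'cutoff' and only
    compare particles in the same or neighboring cells."""
    cells = defaultdict(list)
    for idx, p in enumerate(positions):
        cells[tuple((p // cutoff).astype(int))].append(idx)
    pairs = []
    for cell, members in cells.items():
        # Gather candidates from this cell and its 26 neighboring cells (3D).
        candidates = []
        for offset in product((-1, 0, 1), repeat=3):
            key = tuple(c + o for c, o in zip(cell, offset))
            candidates.extend(cells.get(key, []))
        for i in members:
            for j in candidates:
                if j > i and np.linalg.norm(positions[i] - positions[j]) < cutoff:
                    pairs.append((i, j))
    return pairs

# 500 random particles in a 10 x 10 x 10 box; both methods find the same pairs.
rng = np.random.default_rng(seed=0)
points = rng.uniform(0.0, 10.0, size=(500, 3))
assert sorted(pairs_all_pairs(points, 1.0)) == sorted(pairs_cell_list(points, 1.0))
```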