At Yale, supercomputer drives you

Engineers from Yale and New York University have collaborated to create a supercomputer that can process visual information faster than ever before.

This system, called NeuFlow, uses technology that is based on how the human visual system comprehends its surroundings, said Selcuk Talay, a postdoctoral associate at the School of Engineering & Applied Science and one of the researchers on the project. This technology, he added, could one day be used to perform tasks such as driving a car.

Silicon chips serve as electrical conduits for the NeuFlow computer visual system, which can process images in real time.

Even though the workings of the human visual system are not completely understood, Talay said NeuFlow uses existing technology in a more effective way. People, for instance, can recognize objects around them, such as doors or tables, fairly quickly. But computers need optical character recognition programs to convert image information into text that the computer can process, he said.

In the past, Talay said, a network structure similar to the one in the human brain would be incorporated into hardware in order to obtain visual information. But once this information was received, computers were unable to modify it until a new image was processed.

“It’s like you are scanning a document, but you cannot edit it,” he said.

The innovation behind NeuFlow, he added, is the speed at which it takes in visual information: it can process images about 100 times faster, while using about 100 times less electricity, than previous systems. This technology, he said, could make it possible for cameras to process images as large as five or six megapixels, which can be used for poster-size prints.

NeuFlow is able to process these images in real time and, when installed in a vehicle, can detect obstacles in its surroundings and react to them, Talay said. Furthermore, the technology could be used to call for help if an elderly individual falls, give soldiers a 360-degree synthetic view of the battlefield or improve robot navigation in dangerous situations.

“Once you’ve got the knowledge of more information, you can do so many different things you cannot imagine right now,” he said.

NeuFlow was developed by electrical engineering professor Eugenio Culurciello using vision algorithms developed by Yann LeCun of New York University. Other contributors to the project include Benoit Corda, a graduate student at New York University; Clément Farabet, a research scientist at both New York University and Yale; and software developers Berin Martini and Polina Akselrod.
