Twenty to 60 years from now, the advent of computers with above-human intelligence could transform civilization as we know it, according to Michael Vassar, president of the Singularity Institute for Artificial Intelligence. In a talk attended by around 35 students and faculty members in William L. Harkness Hall on Sunday, Vassar expounded the vision that his institute, featured in a Feb. 10 article in TIME Magazine, is working to make a reality. Known as the “singularity,” this scenario envisions artificial intelligence surpassing human intelligence within the next half-century. Once super-intelligent computers exist, they could generate even more intelligent and sophisticated machines, to the point that humans would lose all control over the future, Vassar said.

“For the most important event in the history of events, it really should get a fair amount of buzz,” he said.

Vassar compared human and chimpanzee intelligence to argue that small changes in a system can produce large leaps in mental capacity. Just as a human is only a small evolutionary step removed from other primates, a super-intelligent computer would be a natural next step once artificial intelligence approaches the human level, he said.

Computers are not yet as smart as humans, but if technological progress continues at its current rate, machines with human-level intelligence could appear within the next 20 to 60 years, Vassar said. Probably the best-known example of artificial intelligence right now is Watson, an IBM computer that competed against humans on the quiz show “Jeopardy!” this month.

“We would design [a super-intelligent computer] as an optimization program for fulfilling some human values, such as human happiness or ending world hunger, just like we designed [the computer] Deep Blue with the function to win at chess,” he said.

But the singularity could go horribly awry, Vassar added, if computer scientists are not careful what they program smart machines to do. If a super-intelligent computer in charge of humankind’s future were told humans value food, fun and sex, for example, but food turned out to be the cheapest of the three to procure, the machine might decide to give the world as much food as possible while sacrificing everything else — a dangerous solution that humans would have lost the power to prevent.
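Vassar’s food-fun-sex example is, at bottom, a story about a misspecified objective function. The sketch below is a toy illustration of that failure mode, not anything Vassar presented: the goods, the prices and the naive “maximize total units of valued stuff” objective are all hypothetical. Given that objective, the optimizer pours its entire budget into the cheapest good and ignores the others entirely.

```python
# Toy illustration (hypothetical, not Vassar's model) of a misspecified
# objective: an optimizer told only to maximize the total quantity of
# valued goods will spend everything on the cheapest one.

def allocate(budget, prices):
    """Maximize total units purchased: put the whole budget into the
    cheapest good, since nothing in the objective rewards variety."""
    cheapest = min(prices, key=prices.get)
    return {good: (budget / price if good == cheapest else 0.0)
            for good, price in prices.items()}

# Hypothetical per-unit costs
prices = {"food": 1.0, "fun": 5.0, "sex": 20.0}
print(allocate(100.0, prices))
# -> {'food': 100.0, 'fun': 0.0, 'sex': 0.0}: all food, nothing else
```

The point of the sketch is that the catastrophic allocation is not a bug in the optimizer; it is the correct answer to the wrong question, which is why Vassar argues that values must be specified with extreme care.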

In another example of what sloppy programming could lead to, Vassar described a hypothetical machine taught to value smiling faces.

“I could easily imagine scientists messing up the singularity, by creating something that looked human-friendly and as soon as it would be slightly more powerful than us it would decompose us into circles of our DNA made into smiley faces,” he said.

If the singularity happened smoothly, humans could conceivably “upload” their brains to computers, achieving virtual immortality, Vassar said.

Four of the six audience members interviewed at the talk said they were intrigued by the prospect of a singularity but doubted it would happen as soon as Vassar predicted.

“This sounds like a smart person’s version of the Second Coming,” Sirui Sun ’13 said. “Every generation thinks it’ll be the last. I’m skeptical, I guess.”

But Ben Wieland, a mathematician who came from out of town to hear the talk, called 60 years a “conservative” estimate for the creation of super-intelligent computers. He said the singularity is simply a consequence of Moore’s law, the observation that the number of transistors that fit on a computer chip doubles roughly every two years, combined with the modern conception of the brain as a physical object.
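To put rough numbers on Wieland’s reasoning (a back-of-the-envelope illustration, not a calculation he presented): one doubling every two years compounds to 30 doublings over 60 years, roughly a billion-fold increase in transistor counts.

```python
# Back-of-the-envelope arithmetic implied by Moore's law (illustrative only).
years = 60
doublings = years // 2        # one doubling every two years
factor = 2 ** doublings       # 2**30
print(f"{factor:,}")          # 1,073,741,824 -- about a billion-fold
```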

The term “singularity” originated in astrophysics, where it describes a point at which the ordinary rules of physics break down, according to the TIME Magazine article.