The end of the world as we know it, Vassar says

Vassar claims that computers will revolutionize the world in 20-60 years. Photo by Victor Kang.

Twenty to 60 years from now, the advent of computers with above-human intelligence could transform civilization as we know it, according to Michael Vassar, president of the Singularity Institute for Artificial Intelligence. In a talk attended by about 35 students and faculty members in William L. Harkness Hall on Sunday, Vassar expounded the vision that his institute, featured in a Feb. 10 article in TIME Magazine, is working to make a reality. Known as the “singularity,” this futuristic scenario posits that artificial intelligence will surpass human intelligence within the next half-century. Once super-intelligent computers exist, they could generate even more intelligent and sophisticated machines, to the extent that humans would lose all control over the future, Vassar said.

“For the most important event in the history of events, it really should get a fair amount of buzz,” he said.

Vassar compared human and chimpanzee intelligence to argue that small changes in a system can represent large leaps in mental capacity. Just as a human is a small evolutionary step from other primates, a super-intelligent computer would be a natural progression as artificial intelligence approaches human intelligence, he said.

Today’s computers are not as smart as humans, but if technological progress continues at its current rate, machines that match or exceed human intelligence could arrive within the next 20 to 60 years, Vassar said. Perhaps the best-known example of artificial intelligence right now is Watson, an IBM computer that competed against human champions on the quiz show “Jeopardy!” this month.

“We would design [a super-intelligent computer] as an optimization program for fulfilling some human values, such as human happiness, or ending world hunger, just like we designed [computer] Deep Blue with the function to win at chess,” he said.

But the singularity could go horribly awry, Vassar added, if computer scientists are not careful what they program smart machines to do. If a super-intelligent computer in charge of humankind’s future were told humans value food, fun and sex, for example, but food turned out to be the cheapest of the three to procure, the machine might decide to give the world as much food as possible while sacrificing everything else — a dangerous solution that humans would have lost the power to prevent.
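To make that failure mode concrete, here is a minimal sketch of a mis-specified objective (the goods, costs, budget and greedy rule are hypothetical, chosen only for illustration and not taken from the talk): an optimizer told only to maximize total units of “value” under a fixed budget will spend everything on whichever value is cheapest.

```python
# Toy mis-specified objective: maximize the total number of "value units"
# bought under a fixed budget. Because the objective says nothing about
# balance, the optimum is to spend everything on the cheapest value.
costs = {"food": 1.0, "fun": 5.0, "sex": 8.0}   # hypothetical cost per unit
budget = 100.0

cheapest = min(costs, key=costs.get)             # food wins on price alone
allocation = {good: 0.0 for good in costs}
allocation[cheapest] = budget / costs[cheapest]

print(allocation)   # {'food': 100.0, 'fun': 0.0, 'sex': 0.0}
```

Nothing in the objective penalizes the all-food outcome, which is the point Vassar was making about careless specification.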

In another example of what sloppy programming could lead to, Vassar described a hypothetical machine taught to value smiling faces.

“I could easily imagine scientists messing up the singularity, by creating something that looked human-friendly and as soon as it would be slightly more powerful than us it would decompose us into circles of our DNA made into smiley faces,” he said.

If the singularity happened smoothly, humans could conceivably “upload” their brains to computers, achieving virtual immortality, Vassar said.

Four of the six audience members interviewed at the talk said they were intrigued by the prospect of a singularity but doubted it would happen as soon as Vassar predicted.

“This sounds like a smart person’s version of the Second Coming,” Sirui Sun ’13 said. “Every generation thinks it’ll be the last. I’m skeptical, I guess.”

But Ben Wieland, a mathematician who came from out of town to hear the talk, called 60 years a “conservative” estimate for the creation of super-intelligent computers. He said the singularity is simply a consequence of Moore’s law — that the number of transistors in a computer circuit doubles every two years — and the modern conception of the brain as a physical object.
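For reference, the Moore’s-law arithmetic behind that claim is simple compounding: a doubling every two years amounts to roughly a thousandfold increase over 20 years and roughly a billionfold over 60. The sketch below shows only the arithmetic and assumes the two-year doubling period continues to hold; whether raw hardware capability translates into intelligence is exactly what skeptics dispute.

```python
# Compound growth under Moore's law: a doubling of transistor counts
# (used here as a rough proxy for raw computing capability) every two years.
def growth_factor(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

for horizon in (20, 40, 60):   # the 20-to-60-year window from the talk
    print(f"{horizon} years: {growth_factor(horizon):,.0f}x")
# 20 years: 1,024x   40 years: 1,048,576x   60 years: 1,073,741,824x (about a billionfold)
```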

The term “singularity” originated in astrophysics to describe a scenario that defies ordinary physical rules, according to the TIME Magazine article.

Comments

  • BaruchAtta

    There will never be a “singularity” as proposed.

    We have been developing AI systems for years, and we haven’t even reached the mouse-brain stage. We don’t have a clue how to approach a mouse brain, let alone a human one.

    Computers are good at two things: computation and pattern matching. Pattern matching is the frontier of computer science, and the state of the art will keep improving. We can expect Watson-style voice recognition and Google-style information lookup to get better, come down in price and be installed in robots. Robots will be able to do amazing things autonomously, such as drive cars and trucks, operate mining and construction projects, and even build other robots and the factories that make them. In other words, robots will be able to reproduce. The implications are immense, but that’s a discussion for another time.

    But giving Watsons or Googleplexes or robots the ability to think and dream, well, that’s science fiction. There is no evidence that this ability will ever be granted to machines; it is a leap of faith to believe it, and discussing it is like debating angels on the head of a pin. I am looking for real scientific evidence that real thinking has ever been emulated in a machine, and I haven’t seen any. Not even a little bit of thinking, not even a mouse-level bit of self-awareness, not even emulated in software. Anybody?

    Anthropomorphism is a term coined in the mid-1700s to refer to any attribution of human characteristics…to non-human objects (Wikipedia). Any talk about a “singularity” is just that: you may be anthropomorphizing your machine, but that doesn’t mean it is so. My grandchild thinks his stuffed bear is his best friend, too. So don’t confuse talking and searching with thinking.

    Baruch Atta

  • nobidon

    @BaruchAtta Yes, there is always an element of faith when considering future accomplishments that have not happened yet, but any new project requires a belief that it can be accomplished before people start working on it and spending money and resources to develop it. The key here is the exponential doubling of capability and how much better computer systems are becoming because of that exponential progress. Even a complete pessimist must believe that computer systems trillions of times more powerful than today’s will eventually reach the point of being smarter than we are. Even if that took thousands of years, it will get here eventually, and it will be like a singularity.

    What Ray Kurzweil is describing is just a billion times more capable than today’s computer systems, and how fast we can reach that level; certainly another billion beyond that would get us there, and that would be less than 100 years from now. This type of progress represents an explosion no less disruptive than a physical bomb going off and having an impact on everything around it, but the explosion is in intelligence. It’s easy for us to model physical explosions and how they affect the materials around them; it is much harder to model an intelligence explosion, so we speculate, project various scenarios and assign our best guess at the probability of each one happening.

    Most of Ray’s predictions have been very accurate when they pertain to exponential growth in the capabilities of physical systems, so I’m placing my “faith” in him being pretty close on the dates. Even if he is wrong by a factor of 10, it would mean we miss it in our lifetimes, but he certainly isn’t wrong that it will get here and explode our society in ways we cannot predict. The event horizon is already expanding toward us, and once it transforms our society in the ways the singularity people are talking about, we will be too far into it to ever go back. That’s a type of faith I find very exciting, and it is why I became a computer engineer in the first place.
    - nobidon