
Societies don’t progress linearly. Rather, growth builds exponentially, with success feeding back into even more success.

This theory is a tenet of the technological age we live in. As posited by computer scientist Ray Kurzweil, more progress will happen in the next 10 years than in the previous 150. By 2050, the world may well be unrecognizable to an earthling of 2015.

The promise of that tomorrow intoxicates: an imminent era of technological revolution, a period that will usher in the most fundamental changes in the history of the genus Homo. The tendrils of this technology will touch every realm and totem of modern life, from the office to the home to even the hallowed halls of universities like Yale.

The advancements in artificial intelligence will yield the most tremendous change. Scientists divide AI into two categories: narrow and general. Artificial narrow intelligence, or “weak AI,” specializes in one specific task with clear parameters and targets. ANI has already become familiar in modern life: iPhone knowledge-navigator Siri, computer opponents in chess or video games like Call of Duty, self-operating vacuum cleaners like the Roomba and self-driving vehicles.

Weak AI enhances productivity, improves efficiency and, across many industries, obviates the need for low-skill laborers to putter over menial tasks, especially ones better completed by a specialized computer. So what, then, happens to the worker displaced by mechanization?

Unemployment.

Not for everyone, at least not initially, but for millions in the coming years. Researchers at Oxford predict that by 2035, technological unemployment will threaten at least 47 percent of all U.S. workers, a Luddite nightmare suddenly vaulted to the forefront of international consciousness. The risk of vocational obsolescence imperils more than the likely suspects, such as the 9.3 million workers in transportation acutely at risk of being automated out of their jobs. Fields like finance, education, architecture, engineering and health care will also see massive job transformations and human unemployment, according to figures gathered by NPR earlier this year.

Thanks to weak AI already on the market, an additional 15.4 million jobs, representing 10 percent of the workforce, will disappear within the next decade, according to an article from The Atlantic earlier this year. As Yale’s own ethicist Wendell Wallach noted in June, the inflection point has been reached: Technology now permanently displaces more jobs than it creates.

Yet the consequences of weak AI pale in comparison to those of artificial general intelligence, or “strong AI.” AGI represents the apex of technological innovation, at which a computer can think abstractly, judge probabilistically, recognize uncertainty and reason as well as any human, if not better. Most importantly, like any human brain, strong AI is able to learn. And if a machine can learn, then it can teach itself.

This capability unravels the human monopoly on intelligence. Once a machine reaches human-level general intellect, it can teach itself to rewrite its own code, producing a computer that endlessly builds a better machine. Minutes, hours, days or months after arriving at general intelligence, the computer will have stumbled upon “superintelligence,” a state of brilliance far beyond that of any human who has ever lived.

At this stage, the speculation begins and the expert consensus dissolves. We mortals know the ceiling of human intelligence. But what about an IQ of 300? Or 1,000? Or 100,000? Superintelligence destroys comparisons of scale, like a human playing chess against a beetle, or Einstein explaining relativity to an earthworm.

A superintelligent computer could deliver breakthroughs of unmistakable importance: an end to climate change; the eradication of cancer, illness and disease; the unlocking of intergalactic exploration and travel; and perhaps even the realization of human immortality itself.

In the words of writer James Barrat, superintelligence may well be our final invention.

But the rosy future darkens dramatically when we abandon the optimists’ predictions. Some, like Oxford philosopher Nick Bostrom, aren’t so convinced about a post-AI utopia. A superintelligent computer, after all, might adroitly override human rules. It might craft rules of its own, regulations for how inferior minds ought to kowtow before their intelligent masters. Such theorists argue that this may even culminate in a grand strategic power play to control the entire world’s financial, military, political and social systems.

Whatever the outcome, experts generally agree that superintelligence will emerge eventually, perhaps within our lifetimes. In one recent poll, technologists estimated a 90 percent probability of human-level strong AI emerging by 2075. Superintelligence could follow just months or years later.

There is still time left to understand the nascent technology, halt the exit music through policy and swing the momentum toward a future we’d peaceably live in rather than one in which we don’t live at all. Whether we place limits on the legal development of AI, as we have with cloning, or standardize industry protocols, there are ways we can better protect ourselves from our future.

There is time left — just not that much. And until we as members of the human race take seriously these issues of existential consequence, so-called progress may well lap us into oblivion.

Graham Ambrose is a sophomore in Jonathan Edwards College. Contact him at graham.ambrose@yale.edu.
