The metaverse. Virtual Reality. Sweet Baby Ray’s BBQ Sauce. We live in a time when such technologically advanced concepts are more than just facets of a futuristic fantasy. Thinly veiled Mark Zuckerberg slander aside, this is a remarkable fact. Since the advent of the digital age, human ingenuity seems to have kicked into overdrive. However, when the average American spends roughly seven hours a day staring at a screen, this updated mode of existence feels like the norm. We forget that, in comparison to other humans throughout history, our habits are strikingly novel. The technology that makes these habits possible, while replete with shiny allure, threatens to render human nature indistinguishable from that of the tools we have created.

Computer science revised the popular understanding of machine behavior and human behavior alike. We saw ourselves in our creation, using anthropomorphic language to describe, for example, coding as “giving commands” to a computer. In a departure from the historically popular belief in free will, we have begun to map the deterministic laws that underlie machine learning onto the human psyche. The philosophy of the digital age is becoming such that humans’ actions are believed to be mere outputs that mechanistically follow certain fixed laws, much like the actions of a computer. This change is embedded within our modern vernacular: We now say that we are “hardwired” to perform a certain action or that we “crash” whenever our mind is no longer capable of functioning properly and needs rest. The concept of determinism predated this way of speaking, but the two achieved widespread adoption in tandem. The lesson here is that the more we use machines, the more we are convinced that we are not all that different from them.

Some would say that this is not a problem. We still treat each other like free agents in our everyday interactions. We are still instinctively averse to the belief that our actions are determined by factors beyond our control. But the metaphysics of advanced industrial society has rendered people “one-dimensional,” according to 20th-century philosopher Herbert Marcuse. Marcuse argues that technological advancement changes people such that they can see themselves only as factors in the production function of the capitalist regime, nothing more. While I don’t subscribe to Marcuse’s Marxist framework, I think his point that dehumanization follows modern material progress is a salient one.

Think of how many people admit to being addicted to their phones. Perhaps you are among that number; I am. The biology of addiction has been thoroughly documented, and the general consensus among the scientific community is that it is a damn hard thing to combat. Psychologically, one’s chances of beating addiction are lowered by a lack of belief in free will. If you are convinced that your choices are actually determined by your neurochemistry, it’s much harder to imagine yourself breaking the neurochemical cycle of addiction. Suddenly, we find ourselves in a positive feedback loop with negative consequences: technology use perpetuates further technology use, and the more technologically dependent our experiences become, the less we feel like authentic humans.

It’s because of this vicious cycle that I am hesitant to celebrate our movement into the metaverse. After all, the recent tech-driven shift towards a deterministic philosophy teaches us that our perceptions of the external world influence our understanding of our own nature. Before we created computers that could act with no will of their own, a science that could explain our own actions with no reference to a will was unthinkable for many people. But now, we live in a world where countless moments of our lives unfold within the confines of a digital screen. When you cannot fully participate in the modern world without immersing your mind in technology, this technology begins to define your identity. Because so much of our human experience is mediated by mechanistic objects in this way, the call to make the obvious comparison and objectify ourselves seems irresistible.

The truth is, though, we are more than mere objects. Computers do not have values like dignity, responsibility and autonomy that they regard as essential to their unique metaphysical experience. They cannot ask themselves morally binding questions like “What should I do?” and employ these values in their answers. Only people can. But the more we accept axiomatically that technological advancement is a good thing, the more we risk forgetting this truth. A move beyond the metaverse to full-brain emulation would be the final nail in the coffin of our humanity, and it is already being discussed. To prevent this calamity, we must collectively acknowledge that the defining characteristic of our first-person experience is that we can view ourselves as self-determining subjects and not causally enslaved objects. To write this off as clickbait, and claim that our decisions are determined not by ideals and values but by ones and zeros, is to lose everything by losing ourselves.

Elijah Boles is a sophomore in Ezra Stiles College. His column runs every other Tuesday.
