Since the start of the year, University President Peter Salovey has redoubled his efforts to enact his “Academic Strategy.” To this end, he has announced a focus on data-intensive social science and discussed a new public policy school. As Yale prepares for a new capital campaign, these initiatives may define the University’s future scholarly output.

But the banality of terms like “data” and “policy” elides the blind spots they entail. And unless students and scholars recognize their limits, the future will read like a dystopian novel, if not a downright farce.

Recent events illustrate the limitations of quantitative research. For decades, economists have increasingly relied on mathematical modeling at the expense of qualitative approaches like economic history, yet economic models failed to predict the Great Recession of 2008. In political science, advanced polling techniques similarly failed to forecast either Brexit or Donald Trump's victory. Meanwhile, psychology confronts a replication crisis, with nearly two-thirds of published results failing the test of reproducibility. In short, disciplines that underwent a “quantitative revolution” in the middle of the 20th century are experiencing a counterrevolution in the 21st.

Some might argue that the solution is more data, but the problem is not just technical. Data privileges the aggregate over the particular, presuming coherence in an often-contradictory world. It entrenches our worst biases, as evidenced by a Google program that tagged black faces as gorillas or a risk assessment algorithm that wrongly identified black defendants as more likely to reoffend.

Most critically, data cannot answer the hard, normative questions that elites are all too keen to avoid. In a recent piece in Scientific American, Salovey championed “data-driven decision-making over ideology.” Indeed, many Yalies believe that the “masses” would come around to our positions, if only they were better informed. But data alone cannot adjudicate trade-offs between competing interests, which are invariably ideological. What is the appropriate balance between promoting economic development today and preserving the environment for tomorrow? How do we reconcile the fact that trade and immigration create growth but also depress the wages of low-income earners? Data can inform our value judgments but cannot make them on our behalf.

The cousin of data, “policy,” shares many of these pitfalls. Policy research answers the “how” but not the “what” or “why.” The conceit of data-driven policy-making is that it is ideologically neutral, when all it does is mask its own ideological tenor. Good policy cannot exist without robust politics, which ultimately requires an interrogation of political philosophy. A public policy school trains students in technocratic governance, but a university like Yale must also prepare students for citizenship in a deliberative democracy.

To be clear, I am no Luddite: Policy, technology and quantitative data are all integral to the work of the University. But we must not presume their authority over other ways of knowing. Individual stories, historical experiences and ethnographic insights are data, as much as large-scale statistical datasets. Similarly, technology does not have to be digital to be fit for purpose. To use a simple illustration, students learn better when they take notes with pen and paper rather than a laptop. That’s not because technology is bad, but because paper itself is a technology, with benefits that are not immediately apparent but nonetheless material.

As we embrace the promise of big data, we must never lose sight of the value of imagination, intuition and inspiration. Salovey’s own career evinces the importance of spontaneous ideation. In the summer of 1987, John Mayer — who studied human intelligence — was helping Salovey — who studied emotions — paint his new house in New Haven. While applying a second coat to the living room wall, the duo developed the idea of “emotional intelligence.” In their seminal 1990 paper, they outlined their vision of emotional intelligence without a single statistic, opening with a quotation from the first-century-B.C. writer Publilius Syrus. That’s the kind of work we stand to lose if we prize quantification over all else.

We have been here before. In 1965, IBM sponsored a conference at Yale on “Computers for the Humanities.” A poster from the conference hangs in the Digital Humanities Lab, which will move into a swanky new space in Sterling Memorial Library once renovations are complete. To me, the poster is an omen. The computational turn in the 1960s and 1970s ultimately faltered, as scholars devoted their time to creating comprehensive datasets and running analyses that bore little relation to reality. As a reviewer of the conference wrote, “Some foolish studies are being done and many will be done.” We would do well to remember that admonition.

At Yale, our task is to build a future that is technologically guided but people-focused; data-informed but values-driven. Unlike that school up north, our motto is more expansive than a statistically significant Veritas. It is Lux et Veritas.

Jun Yan Chua is a senior in Saybrook College. His column runs on alternate Tuesdays. Contact him at junyan.chua@yale.edu.