I like the Internet. I do a lot of Internet-based work, research, activism, communication and plain fooling around. It’s been good to me.
So, generally, when I read articles or books by people like Nicholas Carr or Sherry Turkle — labeled, by some, as “Internet pessimists” — my first instinct is to be mad. These are people who denounce the information age, criticize social networks and balk at remix culture.
But as I read their pieces a little more closely, I realize that, hey, they might have a point. After all, their arguments are attractive and the studies they cite seem credible.
Then I open up some new tabs.
There, I load up my favorite tech blogs and, lo and behold, there are numerous articles by “optimists” shooting down the “pessimists” point by point. And I get angry again. After all, the optimists’ arguments are attractive and the studies they cite seem credible.
This column is not about whether the Internet or the digital age is good or bad. It’s about my general frustration with the discourse.
How much of it, for example, simply comes down to which studies one cites? Carr, in a Wall Street Journal article, cites a Stanford study linking heavy multitasking to poor cognitive skills. Jonah Lehrer, reviewing Carr's book in the New York Times, responds with a UCLA study suggesting that Google searches actually improve brain function.
Well, maybe these studies aren't the best things to look at. Yes, they are immensely important, and we should keep paying close attention to how new technologies affect our brains' wiring and our social interactions (or lack thereof). But because the optimists' and pessimists' arguments are so full of cherry-picked citations anyway, using these studies as rhetorical tools doesn't really seem effective anymore.
Instead, history may lend us some comfort.
There are many extreme “Internet optimists” out there — and I find myself filling that role, sometimes purely for the sake of argument — but one more moderate “optimist” whose work I admire quite a bit is media professor Clay Shirky.
Shirky, in a counterpoint to Carr’s Journal article, doesn’t cite any scientific studies to defend the Internet; instead, he pulls out the printing press. Back in 1440, when good ol’ Joe Gutenberg built his mass media machine, many intellectuals cried foul, warning of the evils books would bring. And the truth is, at first, things weren’t all that rosy. Now that more people were literate and texts could be mass-produced, trashy and lewd literature spread like wildfire.
Eventually, though, society did its “social norms” thing, and novels and mass journalism and science publications were born.
The point is, this has all happened before. It happened with the television, it happened with the telephone and the telegraph, it happened with the phonograph, it happened with the printing press, and heck, it probably happened with scrolls and wall paintings too. New media (and there have been a lot of “new” media) depend on these technology shifts, and technology tends to shift much faster than the society around it.
Some people wince at all the crappy YouTube videos and recycled culture. Some people marvel at creations like Wikipedia and social networks. There’s a bunch of good stuff and a bunch of bad stuff, and with them come a bunch of good implications and bad ones.
The Internet as a technology and as a medium is here to stay; I trust we’ll figure out how to shape our views and norms to make the best use of it.
Adi Kamdar is a junior in Calhoun College.