In the past few decades, computers have steadily insinuated themselves further and further into our lives, with the process taking on an air of inevitability. They took over offices, then found their way into most homes, and now many people carry small computers around with them at all times. More and more of the media we consume has been digitized, more and more information is instantly accessible and open to reprocessing, our means of communication have gone digital, and most recently, our social lives seem to depend more and more on online networks as their hub. Soon, if we don’t post our plans or our experiences to Twitter or Facebook, it may seem as though they haven’t happened at all.
At this point, one might be tempted to brandish an iPhone and say, So what? Computerization has inarguably made our lives more convenient and our work more efficient. (Sure, that productivity boost may not have been passed through to workers in the form of higher wages yet, but eventually …) We can access more stuff, thanks to virtually infinite digital inventory space and such services as automated recommendation systems, and get more done with that stuff than ever before. We’ve never before had such tools to organize, manipulate, and transform the material of our lives, and we’ve never before had such apparently equal access to the most powerful means of production and distribution. With real-world advantages seemingly “flattened” by the internet, the meritocracy of ideas shimmers within our grasp. We can also stay in touch with the people in our lives with less effort and a greater, more granular sense of control over how intimate we become. Besides, it would be curmudgeonly, standoffish, to opt out of social systems that rely on network effects for their benefits. You’re only hurting yourself by not sharing more.
The way in which technology disrupts lives and businesses has come to seem unavoidable, so it figures that optimists would conclude the process must therefore be benevolent, tending to aid human progress toward the realization of universal freedom and fulfillment. If anything, they might argue, we need more computers, and fast, to extend their munificence to even more people in the developing world. (Never mind the power the machines consume or the way they standardize English as the lingua franca.)
But there’s no good reason to accept the computer’s hegemony as inevitable, or to reflexively adopt a techno-utopian perspective, which assumes that any technology we embrace must automatically improve our lives in the aggregate — otherwise we wouldn’t have embraced it. As David Golumbia, a media studies professor at the University of Virginia, argues in The Cultural Logic of Computation, these assumptions are thoroughly ideological, and as with all ideological notions, it’s hard not to mistake them for common sense. The degree to which the benefits of computers are simply assumed is precisely the degree to which this ideology has served its purpose. It seems self-evident that computers have benefited everyone; arguing to the contrary tends to make people think you are a Luddite, a Unabomber-like crank with a grudge against society.
Rewiring Rational Thought
Golumbia is no Luddite; he readily admits that computers have brought a wide range of benefits to society. His chief purpose, though, is to demonstrate that these benefits come at the cost of accepting a technophilic ideology and of changing how we perceive our own essence as human beings. Deploying a hodge-podge of theoretical tools, Golumbia asserts that computers (or computationalism, as he calls the phenomenon of using computers’ binary processing as the template for rational thought in general) mainly cater to our illegitimate fantasies of omnipotence, presenting us with a powerful tool over which we seem to have complete command.
“The computer seems easily to inspire dreams of individual domination and mastery, of a self so big that no other selves would be necessary, of a kind of possession of resources that could and would eclipse that of any other entity… Rather than the community participation some evangelists advertise, computers and the web seem to bring out an especially strict and fierce individualistic competition, as if the web is on the verge of an extreme concentration from which only a few super-individuals will emerge.”
That may sound like a paranoid vision, but the ease with which we can now quantify our social successes and needs with online tools may be orienting us in this direction — how many followers do I have? how many posts have I written? how many songs are in my library? While connecting us more thoroughly to the world, computers also dupe us into thinking we control all the connections, that we can turn aspects of the world on and off as it suits us. We don’t need to wrangle with the contingencies and uncertainty that other people introduce; we can mediate our relations with the world entirely through the computer and deal with everyone else’s crap when it is convenient to us. In that fashion, we are invited to regard the world itself and the other people in it as kinds of computers, subject to our manipulation, and this only stokes our competitive fire to make sure we are not a puppet but one of those “super-individuals” pulling the digital strings.
Apologists for computerization tout it as democratizing and leveling, though in effect, the main function of the machines is to supply ever-increasing amounts of control over data. This power, in turn, inspires those who have tasted it to seek to turn more and more of human experience into data. This plays out — as Golumbia details at length in the book’s dry, early chapters, which will most likely interest only specialists — in the way computationalism comes to function as a master metaphor in a variety of academic disciplines. Neoclassical economics has most notoriously regarded humans as little more than hyperefficient calculators of marginal utility, and it’s no surprise that certain Taylorist enthusiasts in the business-management field fetishize numbers and regard workers as so many cells in a spreadsheet.
But Golumbia points out how Chomskyite linguistics and various offshoots of neuroscience and political science have also pushed the metaphor of the computer as a fundamental model of human cognition, seeking, for instance, to reduce the complexity and ambiguity of human language to programming code — utterly predictable and entirely soulless. Golumbia argues that this species of rationalism is dangerously reductive, that it “carries within it at least two repressed historical formations: the absolutist leader whose will in fact transcends all rational calculation, and disdain for the ‘illogical’ historical and social fabric of the human world.”
The utopian rhetoric around the internet revolution, then, merely serves to mask its potential to unleash latent forces of domination and intolerance. As Golumbia explains, “Much of the rhetoric advancing the computer as a solution to our social problems joins together the inherent technological progressivism of our form of capitalism” — which relies on technology to foster growth and ground-clearing creative destruction — “with the desire to be the master.” Computers function as our slaves and, Golumbia suggests, reconcile us to the notion of slavery in general.
While we unleash our inner dictators, those who actually rule over us — the CEOs and politicos and financiers who make up the stratum that sociologist C. Wright Mills dubbed the “power elite” — use the same computers to micromanage, surveil, and exploit us more thoroughly while eradicating all traces of resistance to the machine. After all, any possibilities computers extend to us are also available to them, only they hold sway over a greater dominion in which to deploy them, with access to the cutting-edge technologies that will only later filter down to us plebes. Though computers may seem inherently liberating, they are only as empowering as the network administrator allows.
So, far from being the great liberator, computers, Golumbia insists, actually serve to fix us in the grid of global capitalism while concentrating power and shifting it upward to those who control the networks we are enmeshed in. And for those of us trapped within the system, the fact that much of our decision-making scope has been automated away makes for a ready alibi when our corporate masters have us perpetrating such banal everyday evils as denying legitimate health-insurance claims or providing purposely oblique and frustrating customer service or implementing discriminatory pricing schemes: “The computer told me to do it.” And with that, the ideological assumption that computers can’t possibly make mistakes becomes even more convenient to subscribe to.
Humans are displaced as ethical subjects and become the willing robotic extensions of the machine network. We idealize the computer-think that makes capitalism’s necessary exploitations at once both easier to implement and easier to disown.
Computationalism, Golumbia suggests, denies a fundamental human quality. He argues that “it is critical … to our social practice itself that we as social beings can escape whatever formalization we manufacture.” That is to say, to remain human we must always remain able to respond to situations in ways we could not have anticipated in advance. Computers threaten us not so much in the way science-fiction films imagine — that they will become sentient and seek to destroy their masters — but in a much more mundane way, making it too easy to gratify lazy, obvious desires and rendering us permanently incapable of surprising ourselves.