Introduction
Let’s Get Lost
I’m wasting time on the Internet. I click to the New York Times front page to see the latest headlines and today a major nuclear deal with Iran was signed. The banner headline screams history and even though I haven’t really been following the story, I click on it. I’m taken to a page with an embedded video that features Thomas Friedman asking Obama to explain what he thinks the United States gained from the nuclear deal with Iran. I check the time on the video — three and a half minutes — and figure that’s not too long to listen to the president speak. He speaks; I watch. He continues to speak; I scroll through my Twitter feed but I still listen. I click back on the Times window and watch again. Somewhere about the three-minute mark, I start to think, Am I really wasting time on the Internet? This is important stuff that I’ve stumbled onto. I’m struggling to see what’s so shameful about this. The video ends and, impressed by what the president was saying, I start to read Friedman’s lengthy article beneath the video. I read the first few paragraphs carefully, then scroll down and read some more. It’s starting to get too granular for me. But my interest is piqued. Although I’m not going to read this piece to the end, I’m going to start following this story as it unfurls over the next few days. I stumbled on it and got hooked. Is my engagement deep? Not right now. But judging by the way these things tend to go, as I start to follow the story, my appetite for the topic will most likely become voracious. I can’t see this event — one that happens several times a day — as being anything other than good. Because of it, I’m better informed, more engaged, and perhaps even a bit smarter.
After I finish with this article, I click over to Facebook and find myself watching a video of Keith Richards discussing how he gets ideas for his songs. He says that when he’s in restaurants and overhears conversation coming from the next table, he simply writes down what they’re saying. “Give me a napkin and a pen,” he says, smiling. “You feel that one phrase could be a song.” Although the video is only a minute long, it’s packed with wisdom. Really? Could his process be that simple, that pure? After listening to Keith, I feel inspired. After all, I feel like I spend tons of time eavesdropping on Facebook conversations. Might I be able to wring a song or a poem out of those as well?
I’m back on Facebook, and the next thing I know I’m looking at this incredible black-and-white photo from 1917 of a full-size battleship being built in New York’s Union Square. The picture is huge and brimming with details. I click on it and I’m taken to a website. As I scroll down, there’s a short explanatory text about how this came to be, followed by a dozen more giant, rich photos of the ship under construction. It’s fascinating. I just wrote a book about New York City and I’m floored that I somehow missed this but grateful to know about it. I bookmark the page and move on.
What is wasting time on the Internet? It’s not so easy to say. It strikes me that it can’t be simply defined. When I was clicking around, was I wasting time because I should’ve been working instead? But I had spent hours working — in front of the same screen — and quite frankly I needed a break. I needed to stop thinking about work and do a bit of drifting. But, unlike the common perception of what we do when we waste time on the Internet, I wasn’t watching cat videos — well, maybe one or two. I was actually interested in the things that I stumbled on: the president, the rock star, and the battleship. I had the choice not to click on these things, but I chose to do so. They seemed to me to be genuinely interesting. There were many more things that I didn’t click on.
To hear Internet pundits tell it, you’d think we stare for three hours at clickbait — those web pages with hypersensational headlines that beg you to click on them — the way we once sat down and watched three hours of cartoons on Saturday morning TV. But the truth is most of us don’t do any one thing on the Internet for three hours. Instead, we do many things during that time, some of it frivolous, some of it heavy. Our time spent in front of the computer is a mixed time, a time that reflects our desires — as opposed to the glazed-eyed stare we got from sitting in front of the television where we were fed something we ultimately weren’t much interested in. TV gave us few choices. Naturally, we became “couch potatoes” and many of us truly did feel like we wasted our time — as our parents so often chided us — “rotting away” in front of the TV.
I’m reading these days — ironically, on the web — that we don’t read anymore. People often confess this same thing to me when they hear I’m a poet. The other day, I was opening up a bank account and the associate working at the bank, when he found out what I did, sighed and admitted that he doesn’t read as much as he used to. I asked him whether he had a Facebook account, which he did, and a Twitter, which he also did. I asked him whether he sent and received e-mails. Yes, he said, many every day. I told him that he was, in fact, reading and writing a lot. We’re reading and writing more than we have in a generation, but we are doing it differently — skimming, parsing, grazing, bookmarking, forwarding, and spamming language — in ways that aren’t yet recognized as literary. But with a panoply of writers using the raw material of the web as the basis for their works, it’s only a matter of time until it is.
I keep reading that in the age of screens we’ve lost our ability to concentrate, that we’ve become distracted, unable to focus. But when I look around me and see people riveted to their devices, I’ve never seen such a great wealth of concentration, focus, and engagement. I find it ironic that those who say we have no concentration are most bothered by how addicted people are to their devices. I find it equally ironic that most of the places where I read about how addicted we are to the web are on the web itself, scattered across numerous websites, blog posts, tweets, and Facebook pages.
On those blogs, I read how the Internet has made us antisocial, how we’ve lost the ability to have a conversation. But when I see people with their devices, all I see is people communicating with one another: texting, chatting, IM’ing. And I have to wonder, In what way is this not social? A conversation broken up into short bursts and quick emoticons is still a conversation. Watch someone’s face while they’re in the midst of a rapid-fire text message exchange: it’s full of human emotion and expression — anticipation, laughter, affect. Critics claim that even having a device present acts to inhibit conversation, and that the best antidote to our technological addiction is a return to good old-fashioned face-to-face conversation. They say, “Conversation is there for us to reclaim. For the failing connections of our digital world, it is the talking cure.” But this seems to ignore the fact that smartphones are indeed phones: two-way devices for human-to-human conversations, replete with expressive vocal cadence and warmth. Is conversation over the telephone still — 140 years after the phone was invented — somehow not considered “intimate” enough, lessened because it is mediated by technology?
But beyond that, life is still full of attentive, engaged face-to-face conversations and close listening, be it at the many conferences, lectures, or readings I attend where large audiences hang on every word the speakers say, or my own therapy sessions — nothing more than two people in a room — the tenor and intensity of which hasn’t changed in decades despite several technological revolutions. When a student comes and finds me during office hours, that student — normally tethered to their device — can still go deep without one. Even my seventeen-year-old son, awash in social media, still demands that we “talk” in the darkness of his bedroom each night before he goes to sleep, just as we have done his entire life. It’s a ritual that neither of us is willing to forgo in spite of our love of gadgets. Everywhere I look — on the street, in restaurants and cafés, in classrooms, or waiting in line for a movie — in spite of dire predictions, people still seem to know how to converse.
Our devices, if anything, tend to amplify our sociability. Sometimes we converse face-to-face, other times over our devices, but often, it’s a combination of the two. I’m in a hotel lobby and I’m watching two fashionable women in their twenties sitting next to each other on a modernist sofa. They are parallel with one another: their shoulders are touching; their legs are extended with their feet resting on a table in front of them. They’re both cradling their devices, each in their own world. From time to time, they hold their phones up and share something on-screen before retreating into their respective zones. While they peck away at their keyboards, shards of conversation pass between them, accompanied by laughter, head nods, and pointing. Then, at once, they put their phones in their purses, straighten up their bodies, angle toward one another, and launch into a fully attentive face-to-face conversation. They’re now very animated, gesticulating with their hands; you can feel the words being absorbed into their bodies, which are vehicles for augmenting what they’re saying. It’s fascinating: just a moment ago it was parallel play; now it’s fully interactive. They continue this way for several more minutes until, as if again on cue, they both reach into their purses, take out their phones, and resume their previous postures, shoulders once again touching and legs outstretched. They’re no longer conversing with each other, but are now conversing with someone unseen. Our devices might be changing us, but to say that they’re dehumanizing us is simply wrong.
The Internet has been accused of making us shallow. We’re skimming, not reading. We lack the ability to engage deeply with a subject anymore. That’s both true and not true: we skim and browse certain types of content, and read others carefully. Oftentimes, we’ll save a long-form journalism article and read it later offline, perhaps on the train home from work. Accusations like those tend to assume we’re all using our devices the same way. But looking over the shoulders of people absorbed in their devices on the subway, I see many people reading newspapers and books on their phones and many others playing Candy Crush Saga. Sometimes someone will be glancing at a newspaper one moment and playing a game the next. There’s a slew of blogs I’ve seen recently that exhaustively document photos of people reading paper books on the subway. One photographer nostalgically claims that he wanted to capture a fading moment when “books are vanishing and are being replaced by characterless iPads and Kindles.” But that’s too simple, literally judging a book by its cover. Who’s to say what they’re reading? Often we assume that just because someone is reading a book on a device, it must be trashy. Sometimes it is; sometimes it isn’t. Last night I walked into the living room and my wife was glued to her iPad, reading the Narrative of the Life of Frederick Douglass. Hours later, when I headed to bed she hadn’t moved an inch, still transfixed by this 171-year-old narrative on her twenty-first-century device. When I said good night, she didn’t even look up.
And while these critics tell us time and again that our brains are being rewired, I’m not so sure that’s all bad. Every new medium requires new ways of thinking. How strange would it be if, in the midst of this digital revolution, we were still expected to use our brains in the same way we did when we read books or watched TV? The resistance to the Internet shouldn’t surprise us: cultural reactionaries defending the status quo have been around as long as media has. Marshall McLuhan tells us that television was written off by people invested in literature as merely “mass entertainment,” just as the printed book was met with the same skepticism in the sixteenth century by scholastic philosophers. McLuhan says that “the vested interests of acquired knowledge and conventional wisdom have always been by-passed and engulfed by new media … The student of media soon comes to expect the new media of any period whatever to be classed as pseudo by those who have acquired the patterns of earlier media, whatever they may happen to be.”
I’m told that our children are most at risk, that the excessive use of computers has led our kids to view the real world as fake. But I’m not so sure that even I can distinguish “real” from “fake” in my own life. How is my life on Facebook any less “real” than what happens in my day-to-day life? In fact, much of what does happen in my day-to-day life comes through Facebook — work opportunities, invitations to dinner parties, and even the topics I discuss at those dinner parties often come from stuff I’ve found out about on Facebook. It’s also likely that I met more than a few of my dinner companions via social media.
I’m reading that screen time makes kids antisocial and withdrawn, but when I see my kids in front of screens, they remind me of those women on the couch, fading in and out, as they deftly negotiate the space of the room with the space of the web. And when they’re, say, gaming, they tend to get along beautifully, deeply engaged with what is happening on the screen while being highly sensitive to each other; not a move of their body or expression of emotion gets overlooked. Gaming ripples through their entire bodies: they kick their feet, jump for joy, and scream in anger. It’s hard for me to see in what way this could be considered disconnected. It’s when they leave the screens that trouble starts: they start fighting over food or who gets to sit where in the car. And, honestly, after a while they get bored of screens. There’s nothing like a media-soaked Sunday morning to make them beg me to take them out to the park to throw a football or to go on a bike ride.
* * *
Since when are “new” and “interesting” pejorative?
It’s Friday night and my teenage son has invited about a dozen of his buddies — boys and girls — over to the house. They’re sprawled out on the couch, mostly separated by gender, glued to their smartphones. Over by the TV, a few kids are playing video games that, along with their yelps and whoops, are providing the soundtrack for the evening. The group on the couch are close, emotionally and physically; they form a long human chain, shoulders snuggled up against their neighbor’s. Some of the girls are leaning into the other girls, using them as pillows. The boys are physical with each other, but differently: they reach out occasionally to fist bump or high-five. One couple, a boyfriend and girlfriend, are clumped in the middle of the couch, draped on top of one another, while at the same time pressed up against the others.
There’s an electric teenage energy to the group. They’re functioning as a group, yet they’re all independent. They spend long periods in silence; the only noises emanating from the gang are the occasional sounds that are emitted from their devices — pings, plonks, chimes, and tinny songs from YouTube pages. Bursts of laughter are frequent, starting with one person and spreading like wildfire to the others. As they turn their devices toward one another, I hear them saying, “Have you seen this?” and shrieking, “Oh my god!” Laughter ripples again, dying out quickly. Then they plunge back into concentrated silence. Out of the blue, one of the kids on the couch playfully says to another, “You jerk! I can’t believe you just sent me that!” And it’s then that I realize that as much as they’re texting and status updating elsewhere on the web, a large part of their digital communication is happening between these kids seated on the same couch.
They’re constantly taking pictures of themselves and of each other. Some are shooting videos, directing their friends to make faces, to say outrageous things to the camera, or to wave hello. And then, it’s right back to the devices, where those images are uploaded to social media and shared among the group, as links are blasted out — all within a minute. Suddenly, the girls shriek, “I look so ugly!” or “You look so pretty!” and “We need to take this one again.” I hear someone say, “That was so funny! Let’s watch it again.” They count likes and favorites as they pile up and read comments that are instantly appearing from both inside and outside the room. This goes on for hours. In a sense, this is as much about creativity as it is about communication. Each photo, posed and styled, is considered with a public response in mind. They are excited by the idea of themselves as images. But why wouldn’t they be? From before the moment they were born, my kids have been awash in images of themselves, beginning with the fuzzy in utero sonograms that they now have pinned to their bedroom walls. Since then, our cameras — first clumsy digital cameras and now smartphones — have been a constant presence in their lives, documenting their every move. We never took just one picture of them but took dozens in rapid-fire fashion, off-loaded them to the computer, and never deleted a single one. Now, when I open my iPhoto album to show them their baby pictures, the albums look like Andy Warhol paintings, with the same images in slight variations repeated over and over, as we documented them second by second. Clearly we have created this situation.
There is no road map for this territory. They are making it up as they go along. But there’s no way that this evening could be considered asocial or antisocial. Their imaginations are at full throttle; they are wildly engaged in what they’re doing. They are highly connected and interacting with each other, but in ways that are pretty much unrecognizable to me. I’m struggling to figure out what’s so bad about this. I’m reading that screen addiction is taking a terrible toll on our children, but in their world it’s not so much an addiction as a necessity. Many key aspects of our children’s lives are in some way funneled through their devices. From online homework assignments to research prompts, right on down to where and when soccer practice is going to be held, the information comes to them via their devices. (And yes, my kids love their screens and love soccer.)
After reading one of these hysterical “devices are ruining your child” articles, my sister-in-law decided to take action. She imposed a system whereby, after dinner, the children were to “turn in” their devices — computers, smartphones, and tablets — to her. They could “check them out” over the course of the evening, but only if they could explain exactly what they needed them for, which had to be for “educational purposes.” But if there was no reason to check them out, the devices stayed with my sister-in-law until they were given back the next day for their allotted after-school screen time, which she also monitors. Upon confiscating my nephew’s cell phone one Friday night, she asked him on Saturday morning, “What plans do you have with your friends today?” “None,” he responded. “You took away my phone.”
On a family vacation, after a full day of outdoor activities that included seeing the Grand Canyon and hiking, my friend and her family settled into the hotel for the evening. Her twelve-year-old daughter is a fan of preteen goth girl crafting videos on YouTube, where she learns how to bedazzle black skull T-shirts and make perfectly ripped punk leggings and home-brewed perfumes. That evening, the girl selected some of her favorite videos to share with her mother. After agreeing to watch a few, her mother grew impatient. “This is nice, but I don’t want to spend the whole night clicking around.” The daughter indignantly responded that she wasn’t just “clicking around.” She was connecting with a community of girls her own age who shared similar interests. Her mother was forced to reconsider her premise: her daughter wasn’t just wasting time on the Internet; instead, she was fully engaged, fostering an aesthetic, feeding her imagination, indulging in her creative proclivities, and hanging out with her friends, all from the comfort of a remote hotel room perched on the edge of the Grand Canyon.
In theorizing or discussing our time spent online, we tend to oversimplify what is an extraordinarily nuanced experience, full of complexity and contradiction. The way we speak about technology betrays our monolithic thinking about it. During his recent run for president, a number of Donald Trump’s legal depositions were scrutinized by the New York Times, which intended to show how Trump spoke when he wasn’t in the spotlight. During a series of questions about the ways he used technology, he was asked about television, to which he replied, “I don’t have a lot of time for listening to television.” I was struck by the phrase “listening to television.” You don’t really listen to television; you watch it. You listen exclusively to radio. Born in 1946, Trump, it’s safe to assume, spent his formative years listening to the radio. My father, roughly the same age as Trump, says similar things. Growing up, he used to berate us kids for watching TV, saying that it took no imagination. Waxing nostalgic, he’d say, “When I was a boy listening to radio, you had to make up everything in your mind. You kids have it all there for you.” For my father — and I can imagine Trump, too — although they watched television, I don’t think they really understood it. Certainly, Trump’s statement betrays a basic misapprehension of the medium.
Trump’s comment is a textbook example of Marshall McLuhan’s theory that the content of any medium is always another medium: “The content of writing is speech, just as the written word is the content of print, and print is the content of the telegraph.” For Trump, the content of TV is radio. It’s common for people to pick up everything they know about a previous medium and throw it at a newer one. I’m often reminded of Trump’s comment when I hear complaints about how we’re wasting time on the Internet. To those critics, television is the content of the web. What they seem to be missing is that the web is not monolithic, but instead is multiple, diverse, fractured, contradictory, high, and low, all at the same time in ways that television rarely was.
* * *
It’s a Sunday morning and I go downstairs to get the New York Times. In the travel section is a piece entitled “Going Off the Grid on a Swedish Island.” It’s about a woman who takes a digital detox on a remote island as a reminder that she is not, in fact, “merely the sum of my posts and tweets and filter-enhanced iPhone photos.” She checks herself into a “hermit hut” — an isolated cabin without electricity or running water — and gives her phone to her husband who locks it with a pass code. As she settles into the hut, bereft of her technology, she suddenly discovers herself connected to nature, listening to the sound of waves folding by the nearby shore. She also rediscovers the pleasure of reading books. She becomes introspective, remarking, “Now, disconnected from the imposed (or imagined) pressures from followers and friends loitering unseen in the ether of the Web, I found myself reaching for a more authentic, balanced existence for myself, online and off.”
She takes long walks. But each natural experience she has is filtered through the lens of technology. While listening to the sounds of nature, she muses, “Without a Spotify playlist to lose myself in … What else had I been blind to while distracted by electronics, I wondered?” She sees marvelous things: towering wind turbines, whose “graceful blades whoosh audibly overhead,” and congratulates herself when she resists the urge to record and share the scene on social media. She conveniently forgets the fact that these turbines are wholly designed and driven by digital interfaces. She nostalgically finds older, pre-digital technologies — ironically littering the landscape — charming. Seeing an upturned rotting car that “looks like a bug,” she can’t resist: “I pulled out my camera and took a photo, one that I knew would never get a single ‘like’ from anyone but me. And that was just fine.” On these sojourns, she mechanizes nature, describing it with tech metaphors: “Along the way, the only tweets I encountered were from birds.” On her final evening on the island, she has a cosmic epiphany whilst musing on the stars in the night sky, one that is served with a dose of self-flagellation for her previous misdeeds: “Those spellbinding heavens are always hiding in plain sight above us, if only we would unplug long enough to notice.”
Even in such lighthearted Sunday morning fare, her words are laced with an all-too-pervasive, unquestioning guilt about technology. Try as she might, the writer is enmeshed with technology to the point that she is unable to experience nature without technological mediation. She may have left her devices at home, but she’s still seeing the world entirely through them. Her brain, indeed, has become differently wired and all the nature in the world on a weekend digital detox won’t change that. What was accomplished by this trip? Not much. Far away from her devices, all she did was think obsessively about them. Returning from her trip, it’s hard to imagine that much changed. I can’t imagine that in the spirit of her adventure she wrote her piece out longhand in number 2 pencil on legal pads by candlelight, only to sit down at a Remington typewriter to bash out the final draft and file it via carrier pigeon. No. Instead, the morning her piece appeared, she retweeted a link to the article: “.@ingridkwilliams goes off the grid on a charming Swedish island.”
Why Not Embrace This Medium That Defies Singularity?
When I used to watch TV, “likes” weren’t really part of the game. Sure, I liked one show better than another, but I was forced to choose from a tiny set of options, seven channels, to be specific. Today, “like” has come to mean something very different. We can support something, expressing ourselves by clicking Like, or we can download something we like. In this way, we build a rich ecosystem of artifacts around us based on our proclivities and desires. What sits in my download folder — piles of books to be read, dozens of movies to be watched, and hundreds of albums to be heard — constitutes a sort of self-portrait of both who I am at this particular point in time and who I was in earlier parts of my life. In fact, you’ll find nestled among the Truffaut films several episodes of The Brady Bunch, a show I really “liked” back in the day. Sometimes I’m in the mood to watch Truffaut; other times I’m in the mood to watch The Brady Bunch. Somehow those impulses don’t contradict one another; instead, they illuminate the complexities of being me. I’m rarely just one way: I like high art sometimes and crap others.
While I could discuss any number of musical epiphanies I’ve personally experienced over the past half century, all of them would pale in comparison to the epiphany of seeing Napster for the first time in 1999. Although prior to Napster I had been a member of several file-sharing communities, the sheer scope, variety, and seeming endlessness of Napster was mind-boggling: you never knew what you were going to find and how much of it was going to be there. It was as if every record store, flea market, and thrift shop in the world had been connected by a searchable database and flung their doors open, begging you to walk away with as much as you could carry for free. But it was even better because the supply was never exhausted; the coolest record you’ve ever dug up could now be shared with all your friends. Of course this has been exacerbated many times over with the advent of torrents and MP3 blogs.
But the most eye-opening thing about Napster was the idea that you could browse other people’s shared files. It was as if a little private corner of everyone’s world was now publicly available for all to see. It was fascinating — perhaps even a bit voyeuristic — to see what music other people had in their folders and how they organized it. One of the first things that struck me about Napster was how impure and eclectic people’s tastes were. Whilst browsing another user’s files, I was stunned to find John Cage MP3s alphabetically snuggled up next to, say, Mariah Carey files in the same directory. It boggled the mind: how could a fan of thorny avant-garde music also like the sugary pop of Mariah Carey? And yet it’s true. Everyone has guilty pleasures. But never before have they been so exposed — and celebrated — this publicly. To me, this was a great relief. It showed that online — and by extension in real life — we never have been just one way, all the time. That’s too simple. Instead, we’re a complex mix, full of contradictions.
* * *
The web is what Stanford professor Sianne Ngai calls “stuplime,” a combination of the stupid and the sublime. That cat video on BuzzFeed is so stupid, but its delivery mechanism — Facebook — is so mind-bogglingly sublime. Inversely, that dashboard cam of the meteor striking Russia is so cosmically sublime, but its delivery mechanism — Facebook — is so mind-bogglingly stupid. It’s this tension that keeps us glued to the web. Were it entirely stupid or were it entirely sublime, we would’ve gotten bored long ago. A befuddling mix of logic and nonsense, the web by its nature is surrealist: a shattered, contradictory, and fragmented medium. What if, instead of furiously trying to stitch together these various shards into something unified and coherent — something many have been desperately trying to do — we explore the opposite: embracing the disjunctive as a more organic way of framing what is, in essence, a medium that defies singularity?
Shattered by technology, modernism embraced the jagged twentieth-century media landscape and the fragmentation it brought, claiming it to be emblematic of its time. Not to overstretch the analogy — it’s a new century with new technologies — but there are bits and pieces salvageable from the smoldering wreckage of modernism from which we might extract clues on how to proceed in the digital age. In retrospect, the modernist experiment was akin to a number of planes barreling down runways — cubist planes, surrealist planes, abstract expressionist planes, and so forth — each taking off, and then crashing immediately, only to be followed by another aborted takeoff, one after another. What if, instead, we imagine that these planes didn’t crash at all, but sailed into the twenty-first century, and found full flight in the digital age? What if the cubist airplane gave us the tools to theorize the shattered surfaces of our interfaces or the surrealist airplane gave us the framework through which to theorize our distraction and waking dream states or the abstract expressionist airplane provided us with a metaphor for our all-over, skein-like networks? Our twenty-first-century aesthetics are fueled by the blazing speed of the networks, just as futurist poems a century ago were founded on the pounding of industry and the sirens of war.
Literary modernism provides insights as well. Could we theorize our furious file sharing through Freud’s ideas about the archive, our ROM and RAM through his perception-consciousness system? Could we imagine the web as the actualization of Jorge Luis Borges’s infinite library of Babel, as described in his famous 1941 short story of the same name? Could we envision Twitter’s 140-character constraint as a direct descendant of Hemingway’s brilliant one-line novel, “For sale: baby shoes, never worn”? Are Joseph Cornell’s boxes palm-sized, handheld pre-Internet devices, replete with icons and navigational systems? Is Finnegans Wake a wellspring of hashtags? Postmodernism’s sampling and remixing — so predominant in mainstream culture from karaoke to gaming to hip-hop — are also foundational to the mechanics of the web. If the Internet is one big replication device, then every artifact flowing through it is subject to its bouncy reverberatory gestures (the retweet, for example), a situation where an artifact’s primary characteristic, to quote Roland Barthes, is “a tissue of quotations drawn from the innumerable centers of culture,” while at the same time remaining a container of content.
When futurist poet F. T. Marinetti famously wrote in a 1909 manifesto that “we will destroy the museums, libraries, academies of every kind,” he could not have foreseen the double-edged sword of web-based structures. On one hand, artists are embracing the meme’s infinitesimal life span as a new metric (think: short attention span as a new avant-garde), constructing works not for eternity but only for long enough to ripple across the networks, vanishing as quickly as they appear, replaced by new ones tomorrow. On the other hand, our every gesture is archived by search engines and cemented into eternally recallable databases. Unlike Marinetti’s call to erase history, on the web everything is forever. The Internet itself is a giant museum, library, and academy all in one, comprising everything from wispy status updates to repositories of dense classical texts. And every moment you spend wasting time on the Internet contributes to the pile — even your clicks, favorites, and likes. Read through a literary lens, could we think of our web sojourns as epic tales effortlessly and unconsciously written, etched into our browser histories as a sort of new memoir? Beyond that, in all its glory and hideousness, Facebook is the greatest collective autobiography that a culture has ever produced, a boon to future sociologists, historians, and artists.
This accretion of data is turning us into curators, librarians, and amateur archivists, custodians of our own vast collections. The web’s complex ecosystem of economies — both paid and pirated — offers us more cultural artifacts than we can consume: There are more movies on Netflix than I will ever be able to see, not to mention all the movies I’ve simultaneously downloaded from file-sharing sites, which languish unwatched on my hard drive. The fruits of what’s known as “free culture” — the idea that the web should be a place for an open exchange of ideas and intellectual materials, bereft of over-restrictive copyright laws — create a double-edged sword. Abundance is a lovely problem to have, but it produces a condition whereby the management of my cultural artifacts — their acquisition, filing, redundancy, archiving, and redistribution — is overwhelming their actual content. I tend to shift my artifacts around more than I tend to use them. And all of those artifacts — jaggy AVIs, fuzzy PDFs, lossy MP3s — are decidedly lo-res. I’ve happily swapped quality for quantity, uniqueness for reproduction, strength for weakness, and high resolution for super compression in order to participate in the global cornucopia of file sharing and social media. And what of consumption? I’ve outsourced much of it. While I might only be able to read a fraction of what I’ve downloaded, web spiders — indexing automatons — have read it all. While part of me laments this, another part is thrilled at the rare opportunity to live in one’s own time, able to reimagine the status of the cultural object in the twenty-first century where context is the new content.
The web ecology runs on quantity. Quantity is what drove the vast data leaks of Julian Assange, Aaron Swartz, Chelsea Manning, and Edward Snowden, leaks so absurdly large they could never be read in their entirety, only parsed, leaks so frighteningly huge they were derided by the mainstream media as “information vandalism,” a critique that mistook the leak’s form for function — or malfunction — as if to say the gesture of liberating information is as important as what’s actually being moved. To Assange, Swartz, Manning, and Snowden, what was being moved was important — a matter of life and death. But then again to many of us, our devices are a matter of life and death. The ubiquity of smartphones and dashboard and body cams, combined with the ability to distribute these images virally, has shed light on injustices that previously went unnoticed. When critics insist we put down our devices because they are making us less connected to one another, I have to wonder how the families of Tamir Rice or Laquan McDonald might react to that.
This book attempts to reconcile these contradictions and embrace these multiplicities as a means of reenriching, reenlivening, recuperating, and reclaiming the time we spend in front of screens — time that is almost always dismissed as being wasted. Scrawled across the walls of Paris in May 1968, the slogan “live without dead time” became a rallying cry for a way of reclaiming spaces and bureaucracies that suck the life from you. I’d like to think our web experience could be nearly bereft of dead time, if only we had the lens through which to see it that way. I don’t mean to paint too rosy a picture. The downsides of the web are well known: trolling, hate, flame wars, spam, and rampant stupidity. Still, there’s something perverse about how well we use the web yet how poorly we theorize our time spent on it. I’m hearing a lot of complaints, but I’m not getting too many answers, which makes me think perhaps our one-dimensional approach has been wrongheaded. Befitting a complex medium, one that is resistant to singularities, let’s consider a panoply of ideas, methods, and inspirations. The word “rhizomatic” has been used to describe the web to the point of cliché, but I still find it useful. The rhizome, a root form that grows unpredictably in all directions, offers many paths rather than one. The genie will not be put back in the bottle. Walking away is not an option. We are not unplugging anytime soon. Digital detoxes last as long as grapefruit diets do; transitional objects are just that. I’m convinced that learning, interaction, conversation, and engagement continue as they always have, but they’re taking new and different forms. I think it’s time to drop the simplistic guilt about wasting time on the Internet and instead begin to explore — and perhaps even celebrate — the complex possibilities that lie before us.