Note: This piece flows from a conversation I participated in with my fellow PopMatters writers and editors. I thank Timothy Gabriele, AJ Ramirez, Alan Ranta, John Grassi, Chris Conaton, Zach Schonfeld, David Bloom, and Sarah Zupko for their help. I also thank AJ for suggesting that I write this piece.
As an avid fan of making lists, I participated in Pitchfork’s “The People’s List” experiment, the results of which were released last week. If you’ve read past lists I’ve written for PopMatters, you’ll know my choices tend to be the opposite of what people expect, and what ended up becoming my top 85 albums from 1996-2011, the span of Pitchfork’s existence, reflects my penchant for bizarre picks. You won’t find many lists that include critically reviled albums like Oasis’ Heathen Chemistry amongst modern classics like LCD Soundsystem’s Sound of Silver. While awaiting the results of the poll, I didn’t think much about the list. I knew Radiohead and Animal Collective would likely dominate, since the participants would almost universally be Pitchfork readers influenced by the e-zine’s opinions. But upon seeing people share their lists and their thoughts about the overall results via Twitter and Facebook, I found criticisms that were definitely worth considering.
The first criticism I found came via the Twitter feed of folk songwriter Will Stratton, who called the name of the list “patronizing.” That thought hadn’t entered my head when Pitchfork announced the list, but Stratton’s claim certainly isn’t without reason. Whenever one hears politicians talk about the “plights of the people” or “what the American people want”, “condescending” is the perfect word to describe their talking points. And while Pitchfork’s sizable influence amongst fans of independent and “indie” music (yes, there’s a difference) is nothing like the power of the American government, one could easily hear that same tone in the title “The People’s List”. Many e-zines that cover pop culture and music, including this one, employ young writers (myself included) for the very reason that younger and at times less experienced (though nonetheless culturally savvy) voices bring a unique ear to the material. Creating a divide between Pitchfork’s writers (many of whom are young) and “the people” (many of whom might be just as interested in music as Pitchfork’s writers but don’t have the time to commit to writing for a daily e-zine) can thus easily be seen as problematic.
Now, I certainly don’t want to spin a conspiracy theory about Pitchfork’s influence or role in pop culture. Writers better than I have already published their analyses and theories, so there’s little reason to repeat their arguments. But the results of “The People’s List”, while entirely unsurprising, do give credence to the tastemaking power attributed to Pitchfork. With Radiohead taking three of the top six spots, the collective “duh” has already been uttered. Yet for all its predictability, the various sub-lists and demographic info about those who participated in this 24,000-plus vote experiment reveal plenty, not just about Pitchfork but about fans of music everywhere. The following five lessons drawn from “The People’s List” are arguments, not definitive statements. They’re open for interpretation. So use them as a starting point for conversations about music criticism, Pitchfork and e-zines like it, the role of gender in listening to and ranking music, or anything at all, really.
5. Demographics, Stereotypes, and Surprises
The last album I would expect people ages 51 and older to love is Fountains of Wayne’s Welcome Interstate Managers. Yet there it is at number two on the Distinction Index for that demographic; not only that, it’s also on the list for those ages 41-50. While my fellow writers and I agreed on how boring and predictable the big list was, we all found statistical tidbits in the many Distinction Indexes that we weren’t expecting. In the end, the truly valuable part of “The People’s List” experiment was not the list itself, but the many variables affecting it. The Distinction Index measures how strongly a given variable (sex, age, genre preference, and so on) correlates with love for a particular album. For instance, as a metal fan, I was 30% more likely than the average voter to rank Opeth’s Blackwater Park high on my list (I put it at number four). Some of the preferences these Distinctions revealed were very interesting. There’s the trend showing that the older one gets, the more likely he or she is to rank Drive-By Truckers highly (by your 50s, the band is in the top five). Then there’s the big love for up-and-coming rappers like Tyler, the Creator, Earl Sweatshirt, and Childish Gambino amongst the younger population, specifically ages 10-20. But for all of the interesting and/or surprising trends, there were plenty of predictable ones: since we men love our music heavy and testosterone-drenched, Mastodon usually ranks high on our lists, especially Remission, which we’re 88% more likely to include.
4. How Much Time Is Enough Before Making a List?
In the top ten of the big list, there are two albums from the last three years of the time span the list covered: Animal Collective’s Merriweather Post Pavilion (2009) and Kanye West’s My Beautiful Dark Twisted Fantasy (2010). Look to the top 20 and another 2009 record is present, the xx’s self-titled debut. Overall, there’s a striking presence of LPs that came out in the last two years; one strange example is the ranking of M83’s Hurry Up, We’re Dreaming, released late in 2011, over its more acclaimed albums Saturdays=Youth (2008) and Dead Cities, Red Seas & Lost Ghosts (2003). Even though “The People’s List” is meant to examine a specific time period, the prevalence of these still-fresh records makes one wonder whether some breathing room is necessary before pursuing such an undertaking.
Of course, this isn’t a problem just for “The People’s List”, or for Pitchfork alone. When the litany of year-end lists arrives each December, I always think that however easy it may be to choose favorites in a particular year, it would be better to wait before passing judgment on a whole year of music. One criterion for judging an LP’s excellence is how it holds up over time. Yet when Pitchfork published its best-of-the-decade list in 2009, it didn’t hesitate to put Merriweather Post Pavilion in the top 20, despite the fact that the list went up in September, a while before the year had even finished. Given that album’s placement in “The People’s List”, most would clearly disagree with my opinion: I don’t think it has aged like a classic. But whether I’m right or wrong in this particular case isn’t the point; the point is that conversations about albums like Merriweather Post Pavilion between people like myself (“dissenters”) and those who firmly uphold the Contemporary Canon of Rock Music should happen more often. These conversations become difficult to have when definitive lists by respectable publications are put out; it’s a lot harder to come to a consensus on a band or album when someone has already jumped the gun. “The People’s List” isn’t being upheld as definitive by any means, but how closely it mirrors past Pitchfork lists is a clear sign that most voters didn’t have a new discussion, amongst themselves or with others, about their favorite LPs. Instead, most deferred to the expected choices, resulting in the bland list we see now.
As far as moving forward is concerned, it would be difficult to gauge exactly how long one should wait before making sweeping statements about a particular year or decade. I stand by the argument that a best-of-the-decade list would be more accurate if published ten years later rather than one year or a few months after, but at the same time I don’t think the discussion has to wait that long. The concern isn’t where exactly to draw the line; it’s recognizing that a line needs to be drawn at all.
3. How Closed Is the New Canon?
OK Computer, Kid A, Funeral, Yankee Hotel Foxtrot, and Illinois are expected choices for a top ten organized by Pitchfork. The others are known favorites that aren’t unexpected in the upper tier, though lower rankings for them wouldn’t have been shocking. With half of the top ten almost entirely certain, the question of canonicity becomes very important. As with most canons, whether musical, literary, or anything else, there are no specifically delineated standards for when a particular work of art joins the elite class. Over time, after just enough critical and popular love, an album’s spot in “The Top Ten of (Insert Relevant Time Period or Genre Here)” just seems to happen. In the short pieces that usually accompany these lists, some time is taken to explain why the album in question is significant in X, Y, and Z ways. Now that ten years have come and passed, however, the sheer number of paragraphs written about LPs like Kid A or Yankee Hotel Foxtrot is almost mind-numbing, to the point that acknowledging either one’s greatness is, to most, like affirming our need for oxygen. Recognizing their merit has become a banal exercise, due in large part to the many magazines, on the web and in print, that have come to a particular consensus on the best contemporary music has to offer. This of course isn’t unique to Pitchfork; Rolling Stone’s love for anything by Bob Dylan or the Beatles led to an equally boring top ten in its 500 Greatest Albums of All Time list.
Since these albums are so ingrained in the DNA of the indie music list-maker, one has to wonder how set in stone this fledgling canon is. When Spin published an article suggesting Radiohead isn’t deserving of the constant sea of critical adoration it receives with each new LP, the comment section (which has since been removed) blew up with enraged responses. It’s one thing to show love for an album; it’s another to suggest that dissenting opinions are ridiculous from the get-go. (Of course, I realize I’m talking about an Internet comment section, which is far from a bastion of logical thought.) The thesis of the article was simple: Radiohead’s music in the 1990s was far better than the band’s experimentation in the Aughts, a period of decline. The article was given an unfortunate, incendiary subtitle (“Radiohead kinda blow”) that distracted from the crux of the argument, which was far from unreasonable. I don’t say this often, but I actually agree with Spin on this one; I believe OK Computer to be the group’s finest hour, though my personal favorite remains the comparatively straightforward The Bends. Kid A, while not a bad album, is in my view far from the seismic work of experimental rock it’s claimed to be. I’d like to have a conversation about the OK Computer/Kid A battle, or, God forbid, a discussion about whether or not Radiohead actually is the greatest thing ever, but with list after list pre-empting any attempt to counteract the established canon, that will only become more difficult.
2. Gender Disparities in List-Making
Without a doubt the most glaring statistic released with “The People’s List” was the gender breakdown of voter participation. Only 12% of the total vote came from women, a startling fact that sent many into a frenzy. Cries of sexism appeared in many blogs and professional articles. On one hand, the result is hardly surprising; for all the inclusiveness indie music and its fans claim, the aforementioned canon is largely dominated by white men (they made the albums in nine of the top ten placings). Most Western canons of any art form are male-dominated.
Now, do I wish more women had participated in the list? Absolutely. But I highly doubt it would have changed the end result; Pitchfork’s readers tend to like certain albums, and as the startling country-to-country comparison graph included with the results shows, even different cultures seem to arrive at similar conclusions. To assume that a 1:1 gender ratio (as impossible as that might be) would have meant a better list is itself a sexist assumption. I also wish both the aggregate list and my own were more diverse; I openly admit I’m a victim of American popular culture, even though I don’t buy into all of its ideas. For instance, Pitchfork’s weird love affair with Kanye West, whom it seems to view as hip-hop’s wunderkind, rests on a judgment I don’t see as even remotely true, despite my major love for 808s & Heartbreak. (On the dearth of good rap and hip-hop on the list, one writer smartly commented, “As for rap, this is Pitchfork. Someone hand these blind people canes.”) I firmly believe that as lovers of good art, we ought to expose ourselves to art from experiences and worldviews that are not our own.
But I don’t view “The People’s List” as a failure because of the gender disparity. Lindsay Zoladz, a writer for Pitchfork, posted a very enlightening series of exchanges she had with her female friends about their non-participation. These women’s experiences reveal that there are likely many reasons the divide is what it is, and the explanation needn’t be that women who read Pitchfork are weaklings who just can’t handle the magnitude of male domination over their musical choices. The reductive oppressor/oppressed narrative embedded in such analysis undermines any real attempt to increase the participation of women in music journalism, as well as in music itself. In case anyone hasn’t noticed, the biggest stars on the pop charts right now are women (the release of Adele’s 21 last year was a testament to the power of women in pop culture), and things are only likely to improve from here. So instead of making simplistic, unsubstantiated claims about who is or isn’t sexist, it’s prudent for music writers and fans to figure out the myriad individual causes that skew participation in things like “The People’s List” so heavily male. Saying that the gender divide is a result of men simply shutting women up is too easy, and there are no easy answers when talking about gender in popular culture.
So yes, the gender divide is something to be concerned about. I’m all for increasing the participation of those who want their voices heard, and I know there are millions of women across the world who are passionate about music in all of its forms. Their voices command attention. But “The People’s List” isn’t bad because 88% of its participants were men: it’s bad because the results suggest Radiohead made three of the ten best albums of the past 15 years.
1. The List Was Rigged From the Start
The number one thing we can draw from “The People’s List” is also the simplest. Upon making my list, I noticed how aesthetically unpleasing it was: just about half of my picks were write-ins, and as a result I didn’t get to see the pretty cover art for any of them. This may seem like a minor complaint, but it points to something hugely important about the end result. While write-in albums did appear on the various Distinction Indexes, not a single one was present in the top 200. This should come as no surprise, and not just because Pitchfork’s readers are likely to overlap in their top picks. The main reason has to do with the structure of the list-making. When you made your picks for “The People’s List”, Pitchfork presented you with a big chunk of its favorite albums from 1996-2011. If you wanted any of those on your list, all you had to do was click a button and it went straight to your ballot. If any of your choices weren’t in Pitchfork’s database, you had to write them in yourself. The database also included albums that didn’t make the site’s favorites but had been reviewed by it, save for some older ones it had deleted. This meant that someone like me, with a sizeable LP collection, had to rummage through all of my music to get a refresher on my own favorites, undoubtedly a common experience for those who don’t keep up with everything Pitchfork reviews. Bluegrass, country music, and some popular rock have played a big role in my listening, and for the most part Pitchfork doesn’t review stuff like that. By making it easiest to simply click on albums it had already reviewed, Pitchfork inherently biased the end result of “The People’s List”. The write-in function, while helpful in producing the results seen in the Distinction Indexes, became meaningless in the grand scheme of things.
Some have correctly pointed out that since Pitchfork writes for and appeals to a certain audience, the broadly titled “The People’s List” would in the end only reveal the tastes of a certain percentage of The People. But if that’s the case, then “The People’s List” comes off not as a meaningful examination of “the people’s” favorites, but as an exercise in seeing which of the pre-established favorites they would rank higher. A list like that is even more boring than what we saw in the final tally. Fun though it may be for a while, arguing over whether Merriweather Post Pavilion is better than Yankee Hotel Foxtrot is basically splitting hairs. We know they’re going to be in the top ten; the only question is where.