In a year of highly hyped movies that were supposed to be great – that by all accounts should have been great – Looper is an anomaly: a comparatively small-budget (at $30 million), high-concept sci-fi film that was, if anything, under-marketed, and that landed quietly last weekend to near-universal praise and decent box office. It’s been drawing frequent comparisons to Inception, and with good reason: differences in scale aside, we’re not used to our action movies being intellectually challenging, and both films tackle mind-bending subject matter with similarly mind-bending directorial deftness. Just as notably, they’re both original creations in a world where the familiar is king.
Hollywood presents us with about 400 movies to choose from over the course of a given year. Of this number, it’s safe to bet that a few will be pretty good, a somewhat larger number will be atrocious, and the vast majority will range from pretty bad to mostly competent.
Usually, it’s not hard to tell which of those categories a movie falls into: you and I might disagree about the relative merits of, say, True Grit and Black Swan, but for the most part we’ll agree that they were both halfway decent. Similarly, not many people are going to walk out of Bucky Larson: Born to be a Star talking about how they’ve had a profound artistic experience, and if they do it’s a safe bet that they’re joking or should never be allowed to watch another movie ever again.
Invariably, however, there’s at least one movie that claims to be good and is not: a bad movie that masquerades as a good movie, or, as we’ll call it for the purposes of this essay, a ‘Faker.’ These films for some reason strike a chord with audiences – and sometimes critics – despite being terrible. It’s not a matter of being overrated, in the sense that people will talk about a movie, director, or actor being overrated: for something to be rated too highly, it usually has to start out with some kind of merit. Nor is it a matter of popcorn movies – people may enjoy watching Transformers, but no one is under any illusions that it’s a cinematic masterpiece. It’s about movies with little or no narrative merit being viewed as genuine triumphs – movies that seduce their audience into judging them wrongly. Crash, 300, Avatar, and (it pains me to say) Requiem for a Dream are all good examples; I’m worried that Drive (which I loved) might be one, too.
To me, the Faker par excellence – or at least the one, having been recently re-watched, most immediately on my mind – is 2006’s Matrix-lite V for Vendetta, which mixes facile political pronunciamentos with stylized special effects to create a particularly noxious concoction. Vendetta, along with movies like Children of Men and dystopian literary adaptations of 1984 and Brave New World, fits into a peculiar dramatic subgenre of British apocalypticism, where the rest of the world has somehow fallen to pieces while Britain trudges forward as a lone bastion of (debased) civilization. America, we soon learn, has been engulfed in some sort of civil war, while a plague in Britain has led to the rise of a fascist government that rules through fear. A mysterious masked man known only as V (Hugo Weaving) wants to start a revolution. A young woman named Evey (Natalie Portman) is dragged, by chance, into his campaign at its outset. The movie is both about her personal journey and about V’s campaign to bring about the death of the film’s reclusive antagonist, the villainous Chancellor Adam Sutler.
Not exactly a mega-hit on release, V for Vendetta still resonated with audiences, earning over $130 million against a $54 million budget. It also attracted the type of enthusiastic following that has turned Donnie Darko and, most notably, The Rocky Horror Picture Show into cult classics. Generally, audiences have embraced its strange brand of anarchism wholeheartedly: the movie currently carries an 8.2/10 rating on IMDb, putting it among the site’s top 500 rated films.
Yet there is almost nothing to like about Vendetta. Leaving aside the film’s troubled, troubling politics – we will get to those in a moment – it is a movie characterized by ludicrous plotting, unbelievable characters, and clumsy exposition. Unlike Children of Men, which was released in the same year and which wisely delivers as little backstory as it can get away with, Vendetta is saddled with a high concept and too much plot to get through. The only way it can find to explain why things are happening the way they are is to present a series of overdramatized montages, each more groan-inducing than the last. Worst of all, we are forced to support the masked V by default, because the totalitarian government depicted by the movie is so plainly horrible, yet there is no clear reason why our protagonist is any better than the people he seeks to bring down. Indeed, this is the greatest sin in a litany of unforgivable ones: V for Vendetta wants to replace personal sympathy with political ideology as a reason to care about its characters.
So, if V for Vendetta and other Fakers are so terrible, why do audiences like them so much? Looking at Vendetta and its relation to other such movies, I think there are two essential components: high production values – in particular, striking production design – and an illusion of intelligence. For all its dramatic atrocities, V for Vendetta is, technically speaking, a well-made movie, with strong editing, a definite ‘look,’ and production design that creates a believably off-kilter, fascist Britain. Its action scenes are charged, spectacular, and satisfyingly brutal, and it’s hard to deny that V is, if nothing else, a total badass. It’s the opposite of a Capra film: where It Happened One Night is a good movie despite a lack of technical polish, Vendetta and other Fakers have to get by on their looks.
That’s not enough, though. To go back to the example I used at the beginning of this essay, Transformers – and, really, every other Michael Bay movie – has great production design and very high production values. (Where else do you think that $150 million budget went? Paying actors?) Lots of people will pay to go see Transformers; very few of them will ever say that it’s a good movie. Entertaining? Sure – but only entertaining. Why? Because it’s so obviously silly. It’s meant to be good summer fun and nothing more, and it succeeds totally. (At least, I assume. Somehow I’ve never actually watched the movie.)
V for Vendetta and other Fakers, though, make a claim to be far more profound – to have something worthwhile to say. In the case of Vendetta, that comes in the form of the muddled political message that it espouses, a sort of populist anarchism that we are supposed to believe originates from a deep compassion for other people. The political angle of Vendetta is both troubled, in the sense that it’s incoherent, and troubling, in that it essentially amounts to an uncomplicated endorsement of terrorism. A good film would have striven to bring out the moral ambiguity of its protagonist, creating a sort of dystopian sci-fi Battle of Algiers in the process. Vendetta, though, is content to paint V as a superhuman folk hero, fighting against a regime that must be dismantled at all costs. It is, in other words, narratively lazy; its revelations are deliberately unsubtle, fit for sound bites but with no interest in anything that is actually true. Lines like “People shouldn’t be afraid of their governments, governments should be afraid of their people” are seductive when spoken so seriously and backed up with such spectacular pyrotechnics, but they don’t seem to have any meaning beyond being an excuse for a masked Hugo Weaving to blow shit up.
Other Fakers are equally facile. Requiem for a Dream is a virtuosically directed, fantastically depressing movie, but its final conclusion – which is, basically, that drugs are bad for you – offers nothing of substance about any of its characters or, really, about what the real consequences of drug use are; Trainspotting is a far more effective film on the same topic. Avatar, as previously discussed at some length, makes no effort to truly explore what it means to leave everything that you are behind. It’s gorgeous, but hollow. Crash wants us to believe that it has something weighty to say about race, but in the end all it manages to come up with is, more or less, that we’re all racist. That’s probably true, but we didn’t need Ryan Phillippe and Matt Dillon to make a point we all absorbed in fourth-grade Social Studies classes.
It’s a commonplace that the simplest explanation is almost always the truest one, and that is the main thrust of the Faker: it offers a simple solution to a complex problem. Sadly, the commonplace is rarely true. Even when an apparently simple solution proves to be correct, it frequently takes a lot of sophisticated analysis to understand why, or even what that ostensibly simple solution means. That’s why we always need to be wary of the Faker. It plays to our desire that movies tell us something real, without actually having anything new to say.
Or, The AFI List Project, #24 – E.T.: The Extra-Terrestrial
This is long overdue and will, unfortunately, be extremely short, because frankly I don’t have a whole lot to say about E.T. It’s one of the movies on the list that I had already watched, but so long ago that I had almost no memories of it.
A few shots had stayed in my head: somehow not the iconic scenes, of bicycles flying and flowers reviving, but the shots in the dark at the beginning and end, with the alien spaceship. And, of course, the first sight of the alien’s heart beating again through the glass (if that is indeed what it was).
Watching it again, I have to admit that I was unmoved – and it may be precisely because I was watching it for academic reasons more than anything else that I felt that way. It may also simply be that I didn’t love the movie.
Is it okay not to be in love with a work of art that, for whatever unknown reason, is dear to the hearts of many others? The individualist in me wants to say yes, and I’ve been musing on reasons why. A central question of aesthetic philosophy has long been what constitutes taste, and whether a person is somehow lacking if they are unmoved by a work of art that others love. The argument has a certain logic to it: if I like something and you do not, then it would seem that you lack some undefined capacity to appreciate that work of art (and probably other works as well). Or, to put the question more broadly: if something is beautiful, shouldn’t it be unambiguously and universally beautiful? How can an aesthete accept a suggestion of aesthetic relativism?
It’s a question I’ve struggled with for some time, but my nearest answer right now is this. We know that the brain works much like a muscle: you use some areas of it more than others, and those areas become more developed. For instance, as you practice a foreign language, the areas of the brain responsible for language learning and retention become more developed, and speaking the language comes increasingly naturally.
Shouldn’t aesthetic taste work exactly the same way? You live your life and have experiences that are entirely unique to you. It seems to me that this should mean you develop different areas of your brain differently from others; you become more aware of some things than others, and so you are able to appreciate certain things more than other people can (just as they are better able to appreciate other things). The result, it seems, should be widely differing tastes.
This is the briefest sketch, and I need to think more about it because I can see where there might be some obvious rejoinders from aesthetic absolutists. Nonetheless, I’m going to offer it up as something to consider when you’re perplexed by a movie that your best friend loves – or when you love something that someone else thinks is drivel.
I have to start this post with some embarrassing facts about me. To summarize: I watch the following TV shows: The Office, Glee, Entourage, True Blood, Community, Californication, Mad Men, and Hung. Before they concluded, I also watched Battlestar Galactica, The Tudors, and Rome.
Almost all of these shows were, at one time or another, quality programming (only Hung was never any good). [EDIT: on further thought, Californication has never been all that great either.] Yet almost all of them sooner or later deteriorated into shows that were at best mediocre and at worst downright preposterous. The only exceptions are Rome, which was saved by lasting only two seasons, and Community and True Blood, which haven’t really had enough time to go bad.
What I want to ask, therefore, is this: Is there something inherent in the format of television that dooms TV programming to eventual mediocrity? Or is this more a problem of how viewers interact with programming?
As, it seems, with everything that I write about on this blog, the answer appears to me to be both. I’ll be more interested today in what I see as the problems of the television format, but at least some of the problem almost certainly lies with the level of investment that we, the audience, make in these characters. That we do invest, of course, is demonstrated by the fact that we continue to watch shows like Entourage or The Office that stopped being funny years ago: we feel we have some stake in what happens between Jim and Pam or in Vince’s now-great, now-floundering career. When we build up that level of investment, we develop some chimerical belief in our right to have some say over what happens to the characters – hence our dissatisfaction when something happens that we didn’t want to happen.
Be that as it may, the format of TV seems to me to present a set of unique challenges that so far no show I’ve watched has succeeded in working around.
First of all, there’s a basic problem in the scope of stories that are developed for television. TV shows, if they’re successful, will run for years, meaning that there are several years’ worth of people’s lives that need to be developed and explored. At the same time, though, television settings are relatively limited, with a circumscribed cast that can’t accommodate extensive use of new characters for long. This leads to a level of incestuous plotting that renders shows preposterous. Why doesn’t a single one of the kids on Glee have a significant other who isn’t another one of them? Why does almost every one of the regulars on True Blood have some sort of dark secret in their past? Because they’re the people the producers have to work with, and the show has to be kept interesting, that’s why.
Beyond the plot structure of television series, however, there’s also the problem of the way that television series are produced. Where in film production the producers and director usually (though not always) work from a finished script towards the construction of a story with a pre-determined ending, television shows usually have no such clear endpoint. When a show gets a pilot made, the producers are hoping to get the studio to order enough episodes for a half or full season; then, if all goes well, they’re hoping that it gets renewed for further seasons. Often, shows aren’t renewed until after the last episode of the previous season has already aired.
What this means is that, even if producers have a general idea of where they want a show to go, their focus isn’t on constructing an overarching product so much as on making the immediate future of the show entertaining enough that it’ll keep getting renewed. And, indeed, the very idea of shows being able to be indefinitely renewed is inimical to the development of long-lasting storylines: what do you do once you’ve reached the end of the story you want to tell but you still have an audience? Similarly, why map out a five-season plan when you might get cancelled after only three?
Let’s look at Glee as an example of this. Beyond the club’s competitive dimension and the running rivalry with Sue Sylvester, the first season had three fairly involved plotlines: Quinn’s teen pregnancy, its mirror in Terri Schuester’s faked pregnancy, and Will’s ongoing non-romance with Emma. There was, in other words, some real serious shit going on, all of which got resolved, more or less satisfactorily, by the end of the season. In the second season, by contrast, there’s been – what? Kurt’s problems with the football-player thug? Sure, but even that was little more than a brief story arc. And, in the absence of any such thematic content to complement the more light-hearted aspects of the show, Glee has become little more than a series of loosely narrative public service announcements. Once it resolved the heavy plot issues of the first season, it had effectively spent itself; it had nowhere new to go. There had been no forethought about what would come after that first season.
Finally, television programming faces a challenge inherent in any narrative endeavor predicated on installments – television series, film series, or book series; more abstractly, one might also think of ongoing photographic or artistic projects. Such endeavors must find a way to balance what makes them effective and entertaining with innovation and evolution. With any artistic endeavor – indeed, with any long-term endeavor whatsoever – there comes a time when, no matter how good the product has been, one begins to want to stretch beyond it and achieve something more.
There’s strong reasoning behind this. How many shows have we seen that started off great but before long became stale? Think, for instance, of The Office. Initially, the mockumentary format and loose, situation-based humor made it fresh and charming and funny. Once that style became familiar, however, the show found that it needed new ways to amuse, so it began to lean on increasingly tired plot-driven stories to keep its audience invested. This strategy made perfect sense. The mine of humor in Jim and Pam’s disguised pining for one another, for instance, could only run so deep.
At the same time, the main reason that we were drawn to the show in the first place was that it was funny, and it was funny precisely because of those things that the producers were compelled to move away from in trying to keep the product fresh. Thus we come to the other side of the problem: in demanding artistic and stylistic evolution, the need to keep the product fresh often requires (or is understood to require) a move away from, perhaps even the abandonment of, principles that were from the outset fundamental to that product. In other words, keeping a show good seems to mean moving away from all the things that made it good in the first place. And there is a term for this, coming, appropriately, from an event in a television series: ‘jumping the shark.’
I don’t think this is a necessary fate for all television programming, but it is an extremely likely one. Without a set idea of how long something is going to last, how it’s going to end, and how it’s going to get there, innovation is both necessary and doomed. It’s the only way to keep people interested, but it’s also like throwing darts at a dartboard with a blindfold on. You might score a bulls-eye, but you’re much more likely to end up pinning your buddy who’s standing by with the beers.
So how can you avoid this? The answer is simple: don’t start producing a TV show until you know how it begins and how it ends, and until you have a rough road map of how – and in how much time – you’re going to get there. Then have faith in the version of you that made that plan, and carry it out. Alternately, know when to quit.
Unfortunately, this is all much easier said than done; indeed, this sort of approach is both impossible under the current system and financially impractical for the people who are putting up the money. Like movies, as discussed in my post on comic book adaptations, television series are as much commercial investments as they are artistic projects. And, realistically, it’s the viewers, not the money men, who necessitate this system. I still watch The Office. I still watch Glee. What reason have I given the producers of these shows to walk away and start a new project that could be as good as these shows used to be? What reasons have I given the studios to rethink the production process?
Exactly. None. On which note, it’s time to go back to slapping my head in frustration every time Hank Moody has another absurdly unlikely sexual conquest.
Gentleman of the Day:
I just began a six-month program as an assistant teacher in a high school here in Novara, so for the past week and a half I’ve been traipsing around to different teachers’ classes to meet the students I’ll be working with and to see how Italian teachers operate. That’s involved a lot of introductions, learning students’ names, and letting them ask me questions; one such occasion produced the following exchange:
STUDENT: “What did you study at university?”
JENTLEMAN: “I studied history. Are you interested in history?”
STUDENT: “No, I don’t like it.”
JENTLEMAN: “Why not?”
STUDENT: “Because it doesn’t have anything to do with us. It’s all…” And he waved his hand over his shoulder to suggest that the past was the past. Why worry about it?
I was unprepared for this challenge. I’ve thought a little over the years about the relative merits of studying history, but never rigorously, and the reason I chose history instead of anything else was simply that I was interested in it. Of course, I’ve always had the intellectual’s belief that the humanities are worth studying in themselves: that studying history or literature or the classics can in some way improve a person. That line of thinking, though, has always been a little vague, and so I didn’t have any concrete rebuttal to this student and his dismissal of my concentration. And so I’ve been thinking about it ever since: Why study history?
The obvious and clichéd response is George Santayana’s famous truism that ‘those who cannot remember the past are condemned to repeat it.’ But using that as an apology for history seems like the easy way out to me. So instead I present the basics of a ‘theory of history,’ to try to justify why the past is worth knowing something about.
First of all, I must emphatically state that history is a humanity, not a social science. The fundamental project of history is to relate truly the events of the past. The application of social scientific methods to the study of history has, without question, broadened the scope and capability of the historian’s project: in adding new sources and modes of thought to the monolith of political action, the so-called ‘new history’ has allowed historians to come closer to the chimerical truth of the past. The social sciences, however, are fundamentally geared towards uncovering and exploring social ‘laws’ governing the ways in which humans interact. There is no doubt, I think, that understanding the cause-and-effect processes of the past can illuminate specific problems in the present. That said, historical narrative is too specific, and too dependent on the attitudes and actions of particular men and women in particular times, to be a source of general laws.
As I think about it, there are four central reasons to study history.
The first reason is the most obvious: history teaches us where we come from. This is true from the micro- to the macro-level. It’s true of families and genealogies – the sorts of things that are interesting only to ourselves and our relatives (the Jentleman refers the reader to his Family Portraits photography project). Expanding outward from there, history teaches us about our town, our state, our country; about the trials overcome by our ancestors; about their accomplishments, their failures, and their motivations. That in turn can illuminate our own characters, both individually and collectively. It’s easy and tempting to believe that the world exists today quite apart from the struggles and parameters of the past. Yet is any other assertion so clearly foolish? We live in a world filled with institutions that are older by far than we are, and however much these institutions may change over time, they have still grown from what they were when they were born – and they themselves grew out of yet older institutions. No, you’re not going to get thrown out of Boston for challenging the authority of the local Puritan elite, as happened to Anne Hutchinson in 1637 – but Boston retains a distinctly ‘blue’ character.
As such, history illuminates the problems of the present. I studied early modern history, focusing on England between 1500 and 1700. That’s a long time ago. Yet tensions that existed or originated then still affect the world today. Henry VIII effectively created the Church of England with his split from Rome in 1534. The struggle over the identity of Anglicanism has shaped history from then until now, leading to the burning of heretics, the settlement of British North America by the Pilgrims, a bloody civil war in England, and the development of the ideas of British exceptionalism that would help motivate the spread of the Empire, among other things; it can still be felt today in the conflict within the 80-million-strong Anglican Communion over homosexuality. Similarly, to use a more famous example, the conflicts between Christians, Jews, and Muslims in the Middle East (I caught myself as I was about to write ‘Holy Land’) can be dated back to at least AD 600. These are problems that echo through history. To understand them is to understand history.
From a more utilitarian perspective, history, as a sort of ‘text’ that can be ‘read,’ presents an alienated framework through which human behavior can be analyzed and understood; in this it is similar to literature and art. The fundamental value of art, as I have argued before and probably will again, lies in its ability to represent the world truly and, therefore, to allow us to examine the world objectively – that is, we can examine the lives and actions of fictional characters with an objectivity that must necessarily be absent from examinations of our own lives. Historical narrative is no different. Yes, the circumstances of the lives of men and women living hundreds and thousands of years ago were entirely different from ours, but do we have any reason to believe that the fundamental motivations of their lives were different too? ‘A little flesh, a little breath, and a reason to rule all: that is myself,’ wrote Marcus Aurelius. That was in the second century – and times have not changed so much, it seems. Food, shelter, and companionship; happiness and wealth: these preoccupations are not unique to our time.
That understanding is itself one of the most important lessons to be learned from history. Beyond it, though, history offers a non-speculative prism through which to examine how people go about reaching for their particular goals, how certain situations engender certain values, and how personalities and ideas influence outcomes as much as any broad theory of history does. (This is, incidentally, quite separate from the aims of social science, which looks to uncover universal laws governing historical progression. I’m talking here about understanding at once both the simplicity and the complexity of individual human action.)
Perhaps most of all, the process of learning how to study history – what is scholastically termed the ‘historical method’ – is a way of thinking that, done well, can be abstracted to any task requiring the analysis and presentation of information. I’ve touched on this idea before, by putting the writing of history into the same category as editing – see my ‘thought for the day for 7/29,’ where I claimed that ‘historians are just editors, not really writers: we aggregate information and figure out the most compelling and intelligible way to present it.’ The historical method is a process of gathering information (usually printed, though any number of kinds of material can serve as a historical source), analyzing each piece of information for what it suggests, and then placing those pieces in relation to each other to present most coherently what the data points to as the approximate truth. (And, indeed, it is almost always approximate.) There were times when I was writing my thesis – the times when I was really in the zone – when I would sit at my computer for four hours at a stretch, hopping from quotation to historical datum to another quotation and then filling in the analysis. It felt like all I was doing was arranging information and explaining why it needed to be arranged that way.
And guess what? That’s exactly what I was doing. Fundamentally, that’s all that writing history is. Once you realize that, the abilities that allow a person to write history well can be abstracted and applied to almost anything. The ability to look at seemingly unrelated or irreconcilable pieces of information and see what those pieces are in fact trying to tell you – that’s the gift that studying history gives you. History isn’t all about dates and names. It’s about solving problems. Understanding how Charles I was written about and thought of at the Restoration will probably never be useful to me again, and I doubt that it will be useful to anyone else. The relative utility of historical facts aside, though, the skills that enabled me to reach that understanding will, I think (and hope), serve me throughout my life.
Of course, there’s one final reason to study history that in a way is more important than any of the preceding three. That is to say: it’s fucking interesting.
Gentleman of the Day: