On the face of it, there is something alarmingly clichéd about the new Star Trek movie, which transplants a bunch of now-familiar Hollywood tropes into the comforting confines of the Starship Enterprise. From a storytelling standpoint, what I found most troubling was Hollywood’s ongoing slavery to the concept of the ‘arc.’ Anyone who has read anything about the way screenplays are structured will be familiar with this concept, which holds, essentially, that characters must start in one place and end in another; there must be a personal change as well as a physical one.
Everywhere I look, it seems, pundits reviewing 2012 in cinema are nodding their heads in approval and talking about what a “great year” it was for the movies. Compared to what was, by any metric, a dismal 2011, they’re justified in doing so: at least this year the Best Picture Oscar won’t go to a creampuff French silent film about a Hollywood that never existed. Still, as I survey the year, I can’t help but feel that those pundits are letting their relief that things were better in 2012 cloud their understanding of what the year really represents.
Now that we’re fully into the swing of awards season, with the Oscar nominations and the Golden Globe winners already announced, it’s clearly past time for the most important and definitive account of the past year in movies: the Jentleman’s analysis of what went wrong, what went right, and what to take away from the year in cinema. As with last year, I’m kicking off with my ‘Worst Of’ list, mostly because it’s reliably my favorite essay of the year to write. Here at JFJ, the focus is usually on thoughtful, constructive analysis of cinema, meaning there’s little room for vitriol and bloviating (though who knows how my readers construe the Journal in general…). Still, if there’s anything more pleasurable than the thoroughgoing love of a great film, it’s the experience of pure, unadulterated contempt for the depths of the cinematically inane.
Or, The AFI List Project #15: 2001: A Space Odyssey
For a movie so championed as a chilling parable of the final and necessary opposition between man and his mechanical creations, 2001: A Space Odyssey devotes remarkably little time to fleshing out the conflict between the computer and the astronauts it is trying to kill. “Open the pod bay doors, HAL,” has entered the lexicon as the most memorable line from cinema’s most celebrated piece of science fiction, but the movie is fixated on far more cosmic themes than Dr Bowman’s derring-do in dismantling his ship’s microchip brain. The origins of human behavior; the insignificance of man in the infinite scale of the heavens; birth, death, and resurrection – it is The Tree of Life, but better, in spaceships, and shot more than four decades earlier.
At work, we’re deep in post on our latest project, and the last thing to polish off before it’s more or less in the can is the title sequence. That means that we’ve been knee-deep in archival footage, font choices, and crawl edits, all towards figuring out what our sequence is going to say about the movie. For me, it’s also been a rare opportunity to reflect on an element of the movie narrative that stands outside of its normal rules but that can be used to great effect in helping the audience understand what they’re seeing.
If you’re not aware, there’s a significant body of work devoted to examining the motivations and processes of individual title sequences; in particular, I highly recommend taking a look at the work published at The Art of the Title, which first gave me the inkling that there might be more going on in these sequences than a simple announcement of who the movie’s Executive Producers were. My goal in this space is more general than anything there: to use the next thousand or so words to sketch out a couple of different ways that the title sequence can be used to enhance the movie we’re watching.
WARNING: DO NOT READ IF YOU HAVE NOT SEEN THE DARK KNIGHT RISES.
Given the open ending and overwhelming critical and financial success of its predecessor, there may not be a movie in the history of cinema that was more certain to be made than The Dark Knight Rises. And, short, perhaps, of George Lucas’s second Star Wars trilogy, no previous movie may ever have been the subject of such high expectations from its producers and its audience alike. On July 19th, a day before the movie opened, both a Best Picture nomination and the title of highest-grossing film of all time were legitimately in play. And why not? Batman Begins, released in 2005, was by itself one of the best superhero movies we’d seen to that point. The Dark Knight, three years later, redefined the model of what a superhero movie could be, and even led directly to a change in the structure of the Academy Awards. Meanwhile, director Christopher Nolan, in his breaks between Batman movies, had directed a well-received Victorian magician drama in The Prestige and a Best Picture-nominated blockbuster in Inception. Reasons for optimism, in other words, were everywhere.
I just read Damon Winter’s post at the New York Times photography blog concerning his recent receipt of a third-place prize for feature picture story from Pictures of the Year International, meaning, essentially, that the organization deemed his project “A Grunt’s Life” the third-best photojournalism piece of 2010. In the post, published way back in February (yes, I am a little late to the party, as usual), Winter addresses a controversy within the photojournalism community concerning his methods and whether he deserved the prize given his approach to this particular project.
So what is this controversy? Let’s begin instead with the project: to produce “A Grunt’s Life,” Winter spent several months in Afghanistan, sharing the lives of the soldiers. In doing so, he created a series of photographs aimed at giving a sense of what day-to-day life is like for soldiers in a modern war zone: fraught and tense, to be sure, but mixed with moments of genuine levity (see, for example, the photo of one soldier adapting an old bedspring for use as a trampoline) and, frequently, intense boredom.
So far, so good, right? One would assume – except that Winter shot the entire thing on his iPhone, a decision which has put many other photojournalists in an uproar. In general, they’re not objecting to the fact that the project was shot on a cell phone so much as to Winter’s use of the Hipstamatic iPhone application. Hipstamatic is an app that lets users choose certain options – film type, lens type, etc. – and, when the user takes a picture, applies filters to generate a square-format image with the particular color and contrast it would have had if it had actually been shot on that kind of film. It’s sort of like the photographic equivalent of throwing all of yesterday’s leftovers into a stew: you know what all the ingredients are, but how it’s going to taste is anybody’s guess.
Even if at first blush the uproar over the use of such an application may seem strange, the argument is, I think, at least understandable. For casual users, the appeal of Hipstamatic lies in its inherent gimmickiness – in the randomness and particularity of the image it creates. More importantly, the photographer can claim little responsibility for how the final image ‘feels.’ Beyond composition and settings, he makes no choice about how the photograph will look: that is determined by a computer’s arbitrary application of preselected filters. How much of the work, then, is genuinely the photographer’s, especially given the lack of the touch-up and refinement that usually go into a finished photo long after the shutter has actually closed?
Under ordinary circumstances, given the way I think about art, I would have expected to be quite opposed to this approach to Serious Photography. Yet, in reading Winter’s treatment of the subject, I was surprised to find myself nodding in agreement most of the time. Beyond any question of gimmickry or authenticity is one far more fundamental to the pursuit of art: that of autonomy of method.
In any medium, method is important, and every artist in every medium has his own approach. Stephen King famously tries to write 2,000 words each and every day, while Hemingway supposedly stopped working while the writing was still going well so that he would have an easy place to start the next day. I would argue, however, that method is especially important in photography, because the recording of the image is such a tiny and, in a way, trivial event. Unlike in poetry, music, novel writing, or painting, the artifact itself takes no time at all to produce: less than a thirtieth of a second, most of the time. Getting to that image is the hard part, the result of each photographer’s individual process, and is at the heart of why photographers achieve vastly different and interesting results despite the apparent triviality of the medium.
My own experience with photography, though admittedly limited, may nonetheless be useful for explaining just what I mean when I say that process is essential to the result. When I first set out on my Family Portraits photography project, I was shooting on the more-than-serviceable Canon Rebel XS, an entry-level digital SLR which, in the right hands, is capable of producing wonderful results. Yet my first efforts were frustratingly boring: family members standing around, perhaps, or awkwardly caught in mid-sentence. Whatever I was doing wasn’t working. I needed to rethink my process.
Seeing this, my photography professor at the time suggested a change: instead of using the Rebel, why not see what happened on large-format film? At his suggestion, I swapped out my digital SLR for a 4x5 film camera. The 4x5 is a pain to use – I had to carry a tripod with me everywhere, it took forever to set up and execute a photograph, and getting the film processed and developed was an operation unto itself. It was exactly this level of complication, though, that allowed me to arrive at the highly formal, composed style that is, to me, one of the best things about the finished project. (For a more credible example of what I mean, check out Thomas Struth’s large-format portraits of families.) The 4x5 camera, both as an intimidating physical object and as an involved process of production, introduced enough of a barrier between myself and the subject that it was possible to achieve a high level of formality: they really felt like they were having their picture taken, and I really felt like I was conducting business. There was a clear divide between when we were involved in making the portraits and when we were interacting socially, which made it easier for me to fall into the role of the photographer when it came time to make the actual image.
For what I was trying to do, then, the large-format approach was the right one – and, in general, the formality of that approach, the way it forces you to think and compose and put yourself into the role of the Photographer, is a good one for me. Some of my friends, however, would go crazy trying to adapt to the constraints you have to deal with in large-format photography, opting instead for the greater freedom provided by the digital camera. That is emphatically not to say that their work is less interesting or well executed. Rather, the spontaneity of the digital format, so difficult for me to work with, becomes a strength of the images they produce.
All of which brings us back to Damon Winter and his use of Hipstamatic. As Winter himself points out in his article, the unobtrusiveness of the iPhone was one reason that he was able to produce the images that he ended up with. I used the large-format camera because it demanded a certain level of formality and seriousness; conversely, for Winter, the iPhone allowed the soldiers to relax and not feel as if they were on camera, which was essential to the project he was trying to execute. I sympathize with concerns about the loss of artistic autonomy when it comes to apps like Hipstamatic. In the end, however, as Winter notes, the computer cannot create the content of the picture. It helps to think of the different adjustment options as film stocks: as with film, you don’t know for sure how the picture will look until it comes out, but it’s possible to choose materials that will come as close as possible to the look you’re trying to achieve. Above all, it seems to me, the important thing is to trust that photographers like Winter understand the equipment they are working with and know what they’re doing. The image is arrived at through a much more significant process than pushing the button and seeing what happens. Those objecting to the use of programs like Hipstamatic have, I think, reduced the photographic process to the final result. That is understandable insofar as the artifact itself can give the illusion of being nothing more than a lens pointed at a certain arrangement that has then been colored in a certain way. Not so, I say: the making of the photograph is but the final and, in a way, least significant action in a much longer and more sophisticated process.
What about the objection that heavily colorized and stylized images can’t be photojournalism, because they may not objectively represent their subject? This is a trickier argument, and one that may deserve its own debate. For the moment, though, I would give the brief response that it is a limiting conception of truth in the image to say that the world can only be truly, objectively represented when color is adjusted to exactly match what our eyes see (an impossible demand anyway, especially with night photography – and does that argument mean we should never use black and white?). Still, this question goes to a more fundamental one about what photography can and should do, and will demand a more complete treatment of its own.
Gentleman of the Day:
The other day, I watched Titanic for what was, incredibly, only the first time – I was a little too young for it when it came out in theaters, and I guess I’ve avoided it since then because I was convinced that it couldn’t possibly be all that good. However, following my negative feelings about Avatar, and being sick and tired of constantly being told that I just had to see Titanic, I thought it was time to give it a shot.
Surprisingly, I didn’t hate it, though I have a feeling that I could pretty easily talk myself into hating it if I spent a couple of solid hours thinking about it. More interesting than any review of the film, though (because, really, what is there to say about it that hasn’t already been said?) is how it reflects, and is reflected by, Avatar, which shares fundamentally the same preoccupations. That in turn reflects the interests and efforts of writer / director Cameron, and – maybe – can tell us something about what it is about these fundamentally mediocre efforts that so connected with audiences.
On the face of it, saying that Avatar and Titanic bear the same fundamental structure will probably seem like lunacy: the former is a space opera about exploitation and traditionalism, while the latter is a period drama about forbidden love on a really big ship. Ostensibly, the only shared elements are huge budgets, even huger grosses, and spectacular special effects.
If we move beyond this, though, I think a more essential connection between the two movies is in what becomes the central theme of both: the transformative power of love. Corollary to that is the way that Cameron develops that theme. In some way, both Avatar and Titanic are about encounters with other worlds and resultant loves that are forbidden. And, ultimately, the thing that interests Cameron as a filmmaker the most is not the love story but the vessel which allows the love story to take place. The Titanic itself – the physical ship – is what allows class structures to be condensed and, therefore, causes Jack and Rose to collide, like two molecules which, suddenly placed into a constrained space, may bump into each other quite unexpectedly. In the same way, the need to explore and understand Pandora in Avatar is the only reason that Jake ever encounters Neytiri. The developing romances, in turn, allow Cameron to explore these literal and figurative worlds in greater detail.
Maybe this, then, is the central problem: Cameron’s stories serve his worlds, not the other way around, and his interest is primarily in inhuman operations rather than human relationships. It’s a credit to his talent as a director that he’s able to make the movies work as well as he does: certainly, during Titanic, I was never bored, which is no small feat for a three-hour movie. Still, in both cases, his interest in creating the world causes the truth of his stories to take a back seat: they are condensed down to their simplest forms, optimized to produce the maximum possible emotional effect.
What do I mean by this? The simple fact is that, in both movies, Cameron wants to make love into a singularly transformative force – one so transformative that a person can be figuratively (and, in the case of Avatar, literally) reborn. In order for this to be possible, he turns conflicting parties into monolithic objects, immutable and unmixable, where choice is binary: you can be one or the other, and if you leave one behind, it’s so different that you neither want nor need to look back.
In a certain way, the problem is worse in Avatar, because the plot is structured such that there’s no pain whatsoever in Jake’s decision to go native. Still, I was more frustrated by Cameron’s monolithic portrayal of class in Titanic than I was by anything in Avatar, perhaps because, as a period piece, it carried some promise of a realism that, in the end, never materialized.
It isn’t that I objected to the portrayal of Rose’s upper-class upbringing as an entirely different world from Jack’s live-as-he-will poverty: that is a dichotomy that has been suggested, with success, in other films and other stories. No, the problem is Cameron’s clear-cut favoring of one of these worlds over the other – his populist portrayal of life below decks as vibrant and warm, and of the life of the wealthy as sterile, contemptuous, and cruel. When Jack rescues Rose from jumping over the side of the ship, he also rescues her from the life of the wealthy: a life, it is suggested, that is no life at all. And, indeed, other than Rose, the only one of the wealthy we encounter who appears to have any shred of humanity is Molly Brown, who we are told at the beginning is ‘new money.’
And so, Rose is rescued from her wealth and learns to spit like a man; her love for Jack is a transformative power, just as Jake’s love for Neytiri is transformative (albeit in a much more literal way). It is so transformative, in fact, that she never even considers looking back at her own life; as far as we are aware, for instance, she never sees her mother again. But the transformation is so extreme – as it must be, given how Cameron paints his picture of class and lifestyle – that we cannot help but see it as false, just as Jake’s transformation is false, because there was never really a choice in the first place. The former life was hollow and cruel, the new one bold and beautiful and adventurous. Which would you take?
This may be precisely what so endears Cameron’s films to audiences, however. Film is, after all, a visual medium, and there’s no denying that Cameron creates visually awe-inspiring movies. At the same time, there’s something seductive in the desire his films indulge: the desire to transcend that which we are. This, perhaps, is where Titanic succeeds and Avatar fails: in Titanic, at least, that transcendence is paid for; the ship sinks, Jack drowns, the band plays until the end. All of us, I think, have had moments when we long to slough off all that we are and rise newly made. Cameron’s films allow us to believe that it could actually happen. There may be something beautiful in that.
None of that, however, changes the fact that Cameron’s movies are fundamentally inhuman, and their portrayal of transcendence false. Strive as we might, there is always a cost, and we can never leave everything fully behind; as we go forward, the desire grows in proportion to the things holding us back. In Brief Encounter, Laura indulges this fantasy, but in the end she realizes that it is just a fantasy, and that is what makes that film so sad at the same time that it is so true. Titanic and Avatar are purer fantasies, ones that try to eat their cake and have it, too. That may make them the ultimate form of the popcorn movie – but it’s still just popcorn.
On March 17th, the New York Times published a letter to its readers announcing that, starting from March 28th (and earlier in Canada), it would be implementing a digital subscription system – the idea being, essentially, that beyond a set monthly limit, people would have to pay for content that they consumed on the Times website. When I read the letter, I was a little bemused and a little disappointed, but I wasn’t surprised. For all that I’d enjoyed reading the Times online for free all through my years in college, I’d read too many articles gloomily reporting on the crisis that the Internet had created for the newspaper industry to think that such an effort was anything but inevitable. Too bad for me, I thought, but it was the logical thing to do. And I assumed that most people of my age and circumstances would share that point of view.
I was rudely awakened from this amicable dream a few days later, when I discovered that a friend of mine from college had written a blog post for the Canadian brand company Kaldor to express her dissatisfaction with the change. I won’t go into exhaustive detail, but her basic argument is that the Times decision is regressive, will drive readers away to free content sources, and will ultimately prove to be bad for business. That the first and second points are true is indisputable in the short run; that they lead to the third is, I think, open to debate, as is the question of whether the Times really had a choice.
For the purposes of this entry, however, the practical merit of the Times paywall isn’t particularly of interest: prognostications aside, we’ll find out sooner or later whether it works out for them. More important – indeed, more troublesome – is my friend’s quite earnest statement that she is a member of “a generation that doesn’t expect to pay for digitally accessible content.” In other words: it doesn’t make sense for me to pay for high-quality content that isn’t categorically different from content I can get for free elsewhere. The implication isn’t merely that being asked to pay for content comes as a surprise; it’s that content should be free.
Should it, though? Content is, after all, something that we consume; how does that make content creation anything but a service? And if content creation is a service, how is it not a service that should be paid for like any other?
Just so, a free-content apologist would reply: of course content – especially high-quality content – has value; that it does is evident from the fact that so many people consume it. And, indeed, we do pay for it, though by buying Coca-Cola and pharmaceuticals and signing up for E-Trade rather than by paying for the content directly. If the product isn’t categorically different, though, why should I pay for something when I can get something almost as good elsewhere for free?
Economically, of course, this perspective is unimpeachable. There are, however, two major problems with it in the context of this argument. First, very often content being free today means not being paid for by anyone: think of movies and mp3s downloaded illegally, or episodes of The Office watched on Megavideo, where there is no advertising and nothing that anyone pays for. Second, the bargain that the free-content apologist proposes simply doesn’t function in the digital age. Before the Internet, newspaper publishers could count on advertising revenue to cover most of their costs because the capital investment it took to start up and run a newspaper was prohibitive: in a local market, everyone was choosing between no more than four or five options, and if they wanted to read one of those options, they had to buy, borrow, or steal a physical copy. Now, I can go online and have access to any newspaper in the world in a matter of seconds – not to mention content on blogs and social media. The Internet has made more content accessible to more people than ever before; that means, as far as I can tell, the devaluation of any individual content source for advertising purposes. Why should Ford advertise at a premium rate on the New York Times site when it can advertise for significantly less on an array of less-trafficked sites that are just happy to have the business?
Maybe, then, the question isn’t whether content should be free but whether it can be free. As I touched on in a previous post, the Internet is a disruptive technology that has changed the way that we interact with information. As people discover that advertising revenue can no longer support content-based endeavors by itself, what can happen but for people to begin looking for further ways to make content pay? And in what way does it make sense to argue that that’s not the thing they should be doing?
Unusually short post this week. I’ll try to bring something more full-bodied when we resume…
Gentleman of the Day:
I have to start this post with some embarrassing facts about me. To summarize: I watch the following TV shows: The Office, Glee, Entourage, True Blood, Community, Californication, Mad Men, and Hung. Before they concluded, I also watched Battlestar Galactica, The Tudors, and Rome.
Almost all of these shows were, at one time or another, quality programming (only Hung was never any good). [EDIT: on further thought, Californication has never been all that great either.] Yet almost all of them sooner or later deteriorated into shows that were at best mediocre and at worst downright preposterous. The only exceptions are Rome, which was saved by the fact that it lasted just two seasons, and Community and True Blood, which haven’t really had enough time to go bad.
What I want to ask, therefore, is this: Is there something inherent in the format of television that dooms TV programming to eventual mediocrity? Or is this more a problem of how viewers interact with programming?
As with seemingly everything I write about on this blog, the answer appears to be both. I’ll be more interested today in what I see as the problems of the television format, but at least some of the problem almost certainly lies with the level of investment that we, the audience, make in these characters. That we do invest, of course, is demonstrated by the fact that we continue to watch shows like Entourage or The Office that stopped being funny years ago: we feel we have some stake in what happens between Jim and Pam, or in Vince’s now-great, now-floundering career. When we build up that level of investment, we develop some chimerical belief in our right to have a say over what happens to the characters – hence our dissatisfaction when something happens that we didn’t want to happen.
Be that as it may, the format of TV seems to me to present a set of unique challenges that so far no show I’ve watched has succeeded in working around.
First of all, there’s a basic problem in the scope of stories that are developed for television. TV shows, if they’re successful, will run for years, meaning that there are several years’ worth of people’s lives that need to be developed and explored. At the same time, though, television settings are relatively limited, with a circumscribed cast that can’t accommodate extensive use of new characters over a long period of time. This leads to a level of incestuous plotting that renders shows preposterous. Why doesn’t a single one of the kids on Glee have a significant other who isn’t another one of them? Why does almost every one of the regulars on True Blood have some sort of dark secret in their past? Because they’re the people the producers have to work with, and the show has to be kept interesting, that’s why.
Beyond the plot structure of television series, however, there’s also the problem of the way that television series are produced. Where in film production the producers and director usually (though not always) work from a finished script towards the construction of a story with a pre-determined ending, television shows usually have no such clear endpoint. When a show gets a pilot made, the producers are hoping to get the studio to order enough episodes for a half or full season; then, if all goes well, they’re hoping that it gets renewed for further seasons. Often, shows aren’t renewed until after the last episode of the previous season has already aired.
What this means is that, even if producers have a general idea of where they want a show to go, their focus isn’t on constructing an overarching product so much as on making the immediate future of the show entertaining enough that it’ll keep getting renewed. And, indeed, the very idea of shows being able to be indefinitely renewed is inimical to the development of long-lasting storylines: what do you do once you’ve reached the end of the story you want to tell but you still have an audience? Similarly, why map out a five-season plan when you might get cancelled after only three?
Let’s look at Glee as an example of this. Beyond the club’s competitive dimension and the running rivalry with Sue Sylvester, the first season had three fairly involved plotlines: Quinn’s teen pregnancy, its mirror in Terri Schuester’s faked pregnancy, and Will’s ongoing non-romance with Emma. There was, in other words, some real serious shit going on, all of which got resolved, more or less satisfactorily, by the end of the season. In the second season, by contrast, there’s been – what? Kurt’s problems with the football player thug? Sure, but even that was little more than a brief story arc. And, in the absence of any such thematic content to complement the more light-hearted aspects of the show, Glee has become little more than a series of loosely narrative public service announcements. Once it resolved the heavy plot issues of the first season, it had effectively spent itself; it had nowhere new to go. There had been no forethought about what would come after that first season.
Finally, television programming faces a challenge inherent in any narrative endeavor predicated on installments – television series, film series, or book series; more abstractly, one might also think of ongoing photographic or artistic projects. Such endeavors must find a way to balance what makes them effective and entertaining against the need for innovation and evolution. With any artistic endeavor – indeed, with any long-term endeavor whatsoever – there comes a time when, no matter how good the product has been, one begins to want to stretch beyond it and achieve something more.
There’s strong reasoning behind this impulse. How many shows have we seen that started off great but before long became stale? Think, for instance, of The Office. Initially, the mockumentary format and loose, situation-based humor made it fresh and charming and funny. Once that style became familiar, however, the show found that it needed new ways to amuse, so it began to lean on increasingly tired plot-driven stories to keep its audience invested. This strategy made perfect sense. The mine of humor in Jim and Pam’s disguised pining for one another, for instance, could only run so deep.
At the same time, the main reason that we were drawn to the show in the first place was that it was funny, and it was funny precisely because of those things the producers were compelled to move away from in trying to keep the product fresh. Thus we come to the other side of the problem: the need to keep the product fresh often demands (or is understood to demand) a move away from, perhaps even the abandonment of, principles that were from the outset fundamental to that product. In other words, keeping a show good seems to mean moving away from all the things that made it good in the first place. And there is a term for this, coming, appropriately, from an event in a television series: ‘jumping the shark.’
I don’t think this is a necessary fate for all television programming, but it is an extremely likely one. Without a set idea of how long something is going to last, how it’s going to end, and how it’s going to get there, innovation is both necessary and doomed. It’s the only way to keep people interested, but it’s also like throwing darts at a dartboard with a blindfold on. You might score a bull’s-eye, but you’re much more likely to end up pinning your buddy who’s standing by with the beers.
So how can you avoid this? The answer is simple: don’t start producing a TV show until you know how it begins and how it ends, and until you have a rough road map of how you’re going to get there and in what time. Then, have faith in the version of you that made that plan, and carry it out as planned. Alternately, know when to quit.
Unfortunately, this is all much easier said than done; indeed, this sort of planning is both impossible under the current system and financially impractical for the people who are putting up the money. Like movies, as discussed in my post on comic book adaptations, television series are as much commercial investments as they are artistic projects. And, realistically, it’s the viewers, not the money men, who necessitate this system. I still watch The Office. I still watch Glee. What reason have I given the producers of these shows to walk away and start a new project that might be as good as these shows used to be? What reason have I given studios to rethink the production process?
Exactly. None. On which note, it’s time to go back to slapping my head in frustration every time Hank Moody has another absurdly unlikely sexual conquest.
Gentleman of the Day: