Of Pale Kings and Paris

 

I saw pale kings and princes too …
—John Keats.
“La Belle Dame Sans Merci” (1819).

… and the pale King glanced across the field
Of battle, but no man was moving there …
—Alfred Tennyson.
Idylls of the King: The Passing of Arthur (1871).

 

“It’s difficult,” the lady from Boston was saying a few days after the attacks in Paris, “to play other courses when your handicap is established at an easy course like this one.” She was referring to the golf course to which I have repaired following an excellent autumn season at Medinah Country Club: the Chechessee Creek Club, just south of Beaufort, South Carolina—a course that might indeed, to some, appear easy. Chechessee measures barely more than 6,000 yards from the member tees and, like all courses in the Lowcountry, it is virtually tabletop flat—but appearances are deceptive. For starters, the course is short on the card because it has five par-three holes, not the usual four, and the often-humid and wet conditions of the seacoast mean that golf shots don’t travel as they do in drier and more elevated locations. So in one sense the lady was right—in precisely the same sense, though I suspect she was unaware of it, that Martin Heidegger, writing at an earlier moment of terror and the movements of peoples, was right.

Golf course architecture might, of course, seem as remote from the preoccupations of Continental theory as the greens of the Myopia Hunt Club, the lady’s home golf course, are from, say, the finish line of the Boston Marathon. Yet, just as Martin Heidegger is known as an exceptionally, even historically, difficult writer, Myopia Hunt Club is justly known to the elect as an exceptionally, even historically, difficult golf course. At the seventh U.S. Open in 1901—the only Open in which no competitor managed to break 80—the course established the record for the highest winning score of a U.S. Open: a 331 shot by both Willie Anderson (who died tragically young) and Alex Smith, a tie resolved by the first playoff in the Open’s history. (Anderson’s 85 just edged Smith’s 86.) So the club earned its reputation for difficulty.

Those difficulties are, in fact, the very ones that admirers of the Chechessee Creek Club trumpet: the deeper mysteries of angles, of trompe l’oeil, the various artifices by which the architects of golf’s Golden Age created the golf courses still revered today—an art that Coore and Crenshaw, Chechessee’s designers, have devoted their careers to recapturing. Like Chechessee, Myopia Hunt isn’t, and never was, especially long: for most of its history it has played around 6,500 yards, which even at the beginning of the twentieth century wasn’t remarkable. Myopia Hunt is difficult for reasons entirely different from those that make courses like Medinah or Butler National difficult: its dangers are not easily apparent.

Take, for example, the 390-yard fourth: the contemporary golf architect Tom Doak once wrote that it “might be the best hole of its length in the free world.” A dogleg around a wetland, the fourth is, it seems, the only dogleg on a course of straight holes—in other words, slightly but not extraordinarily different from the other holes. The hole’s green, however, is so pitched that a golfer in one of the course’s Opens (there have been four; the last in 1908) actually putted off the green—and into the wetland, where he lost the ball. (This might qualify as the most embarrassing thing that has ever happened to a U.S. Open player.) The dangers at Myopia are not those of a Medinah or a Butler National—tight tee shots to far-distant greens, mainly—but instead seem minor while being potentially far more catastrophic.

At the seventh hole, according to a review at Golf Club Atlas, the “members know full well to land the ball some twenty yards short of the putting surface and allow for it to bumble on”—presumably, players who opt differently will suffer an apocalyptic fate. In the words of one reviewer, “one of the charms of the course” is that “understanding how best to play Myopia Hunt is not immediately revealed.” Whereas the hazards of a Butler or Medinah are readily known, those at Myopia Hunt are, it seems, only revealed when it is too late.

It’s for that reason, the reviewer goes on to say, that the club had such an impact on American golf course design: the famed Donald Ross arrived in America the same year Myopia Hunt held its first Open, in 1898, and spent many years designing nearby courses while drawing inspiration from visits to the four-time Open site. Other famous Golden Age architects also drew upon Myopia Hunt for their own work. As the reviewer notes, George Thomas and A.W. Tillinghast—builders of some of the greatest American courses—“were influenced by the abundant placement and penal nature of the hazards” (like the wetland next to the fourth’s green) at Myopia Hunt. Some of America’s greatest golf courses, in other words, were built by architects with first-hand knowledge of the design style pioneered and given definition by Myopia Hunt.

Coore and Crenshaw—the pale kings of American golf architecture—like to advertise themselves as champions of this kind of design: a difficulty derived from the subtle and the non-obvious, rather than from simply requiring the golfer to hit the ball really far and straight. “Theirs,” says the Coore and Crenshaw website, “is an architectural firm based upon the shared philosophy that traditional, strategic golf is the most rewarding.” Chechessee, in turn, is meant to be a triumph of their view: according to their statement on Chechessee’s website, Coore and Crenshaw’s goal when constructing it “was to create a golf course of traditional character that would reward thoughtful, imaginative, and precise play,” and above all to build a course—like a book?—whose “nuances … will reveal themselves over time.” In other words, to build a contemporary Myopia Hunt.

Yet in the view of this Myopia Hunt member, Coore and Crenshaw failed: Chechessee is, for this lady, far easier than her nineteenth-century home course. Why is that? My speculation, without having seen Myopia Hunt, is that whereas Coore and Crenshaw design in a world that has seemingly passed by the virtues of the past, the Massachusetts course was designed on its own terms. That is, Coore and Crenshaw work within an industry where much of their audience has internalized standards developed by golf architects who were themselves reacting against Golden Age architects like Tillinghast or Ross. Whereas Myopia Hunt Club can have a hole—the ninth—whose green is only nine yards wide and forty yards deep, the following generation of architects (and golfers) rejected such designs as “unfair,” and worked to make golf courses less “odd” or “unique.” So when Coore and Crenshaw come to design, they must work against expectations that the designer of Myopia Hunt Club did not face.

Thus, the Golden Age designers were in the same position that, according to the German philosopher Martin Heidegger, the Pre-Socratic philosophers were: a “brief period of authentic openness to being,” as the Wikipedia article about Heidegger puts it. That is, according to Heidegger the Pre-Socratics (the Greek philosophers, like Anaximander and Heraclitus and Parmenides, who predated Socrates) had a relationship to the world, and to philosophizing about it, that was unavailable to those who came afterwards: they were able, Heidegger insinuates, to confront the world itself directly—after all, the latecomers unavoidably had to encounter the works of those very philosophers first.

Unlike his teacher Edmund Husserl—who “argued that all that philosophy could and should be is a description of experience”—Heidegger thought that the Pre-Socratic moment was impossible to return to: “experience,” he claimed, “is always already situated in a world and in ways of being.” So while such a direct confrontation with the world as Husserl demands may have been possible for the Pre-Socratics, Heidegger is seemingly willing to allow, he also argues that history has long since closed off that possibility, forbidding the kind of direct experience of the world Husserl thought of as philosophy’s object. In the same way, whereas the Golden Age architects confronted golf architecture in a raw state, no such head-on confrontation is now possible.

What’s interesting about Heidegger’s view, as people like Penn State professor Michael Bérubé have pointed out, is that it has had consequences for such things as our understanding of, say, astronomical objects. As Bérubé says in an essay entitled “The Return of Realism,” at the end of Heidegger’s massive Being and Time—the kind of encyclopedic book that really emphasizes the “German” in “German philosophy”—Heidegger’s argument that we are “always already” implicated within previous thoughts implies, for instance, that “the discovery of Neptune in 1846 could plausibly be described, from a strictly human vantage point, as the ‘invention’ of Neptune.” Or, to put it as Heidegger does: “Once entities have been uncovered, they show themselves precisely as entities which beforehand already were.” Before Myopia Hunt Club and other courses like it were built, there were no “rules” of golf architecture—afterwards, however, sayings like “No blind shots” came to have the weight of edicts from the Almighty.

For academic leftists like Bérubé, Heidegger’s insight has proven useful, in a perhaps-paradoxical way. Although the historical Heidegger was himself a member of the Nazi Party, according to Bérubé his work has furthered the project of arguing “the proposition that although humans may not be infinitely malleable, human variety and human plasticity can in principle and in practice exceed any specific form of human social organization.” Heidegger’s work, in other words, aims to demonstrate just how contingent much of what we think of as necessary really is—which is to say that his work can help us to re-view what we have taken for granted, and perhaps see it with a glimpse of what the Pre-Socratics, or the Golden Age golf architects, saw. Even if Heidegger himself would deny that such a thing is possible for us, here and now.

Yet, as the example of the lady from Myopia Hunt demonstrates, such a view also has its downside: having seen the original newness, she denies the possibility that the new could return. To her, golf architecture ended sometime around 1930: just as Heidegger thought that, sometime around the age of Socrates, philosophy became not just philosophy but also the history of philosophy, so too does this lady think that golf architecture has become the history of golf architecture.

Among the “literary people” of his own day, the novelist and journalist Tom Wolfe once complained, could be found a similar snobbishness: “it is one of the unconscious assumptions of modern criticism,” Wolfe wrote, “that the raw material is simply ‘there,’” and from such minds the only worthy question is “Given such-and-such a body of material, what has the artist done with it?” What mattered to these critics, in other words, wasn’t the investigatory reporting done by such artists as Balzac or Dickens, Tolstoy or Gogol, but rather the techniques each artist applied to that material. The human misery each of those writers witnessed and reported, this view holds, is irrelevant to their work; rather, what matters is how artfully that misery is arranged.

It’s a conflict familiar both to literary people and to the people who invented golf. The English poets who invented the figure of the Pale King, like Keats and Tennyson, were literary folk who feared the cost of seeing anew; they were presumably drawing upon a verse well known to King James’s translators. The relevant verse, plausibly the source for both Keats and Tennyson, is from the King James translation of the Book of Revelation (chapter 6, verse 8):

And I looked, and behold a pale horse:
and his name that sat on him was Death,
and Hell followed with him.

But opponents of the Auld Enemy saw the new differently; as the novelist John Updike once reported, according to “the old Scots adage,”

We should be conscious of no more grass …
than will cover our own graves.

To the English, both heirs to and inventors of a literary tradition, the Pale King was a terrible symbol of the New, the Young, and the Unknown. But to their ancient opponents, the Scots, the true fear was to be overly aware of the past, at the expense of welcoming in the coming age. Or, as a line often attributed to another Celt from across the sea, W. B. Yeats, puts the same point:

Be not inhospitable to strangers,
lest they be angels in disguise.

In the aftermath of the shootings and bombings that Friday evening, Parisians put the same point on Twitter with the hashtag “#PorteOuverte”—a slogan by which, in the midst of the horror, thousands of Parisians offered shelter to strangers from whatever was still lurking in the darkness. To Parisians, like the Scots before them, what matters is not whether the Pale King arrives, but our reaction when he does.


Talk That Talk

Talk that talk.
—John Lee Hooker.
“Boom Boom” (1961).

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in the historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, there are a lot of things that exist yet have little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for and solidarity with society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which these days often depict “culture” as on the side of the oppressed, history suggests the answer is the latter: in reality, “culture” has usually acted hand-in-hand with the powerful—as it must, given that it depends upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it has depended on patronage of one sort or another. Hence a potential intellectual weakness of basing a “left” around the idea of culture: the actual structure of the world of culture simply is the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth,” after all, was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000 apiece, the reasoning goes, then it ought to be better still for one person to have the whole million. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: having money doesn’t make somebody wise, or even especially intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least if one considers the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to the Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emphasis in original]; and also that “small size equals safe choices.” That is, the smaller the collections he studied were, the more homogenous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, just is the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie’s theory this is all to the good—on the theory that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever is a contest’s winner gets everything—and what that means is, as Taleb says about the art world, “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. Someone who possessed more overall merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, it would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their positions by sheer happenstance rather than merit.
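Taleb’s luck-amplification claim is easy to check with a toy model. The sketch below is my own illustration, not a model from Taleb’s books: it treats observed success as merit plus a noise term standing in for luck, rewards only the top observed performer, and asks how often that winner is actually the most talented entrant. The cohort size, noise levels, and trial count are arbitrary assumptions.

```python
# A minimal sketch of a winner-take-all contest, assuming (arbitrarily)
# that observed success = merit + luck, both normally distributed.
# Not Taleb's own model; just an illustration of the argument above.
import random

def share_won_by_best(cohort=200, luck_weight=1.0, trials=1000, seed=42):
    rng = random.Random(seed)
    best_also_wins = 0
    for _ in range(trials):
        merits = [rng.gauss(0, 1) for _ in range(cohort)]
        # Observed success is merit plus a luck term of chosen weight.
        observed = [m + rng.gauss(0, luck_weight) for m in merits]
        # Winner-take-all: only the top observed performer is rewarded.
        winner = max(range(cohort), key=observed.__getitem__)
        best = max(range(cohort), key=merits.__getitem__)
        best_also_wins += (winner == best)
    return best_also_wins / trials

for luck in (0.0, 0.5, 1.0, 2.0):
    print(f"luck weight {luck}: the most talented wins "
          f"{share_won_by_best(luck_weight=luck):.0%} of contests")
```

With no luck term the most talented entrant always wins; as the luck term grows, the winner is less and less likely to be the best—even though the winner still takes everything. That is all the argument above requires.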

A society, in other words, that was attempting to maximize the talent available to it—and it seems hard to dispute that such is the goal—should not be trying to bury potential talent, but instead to expose as much of it as possible: to get it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually emerge only after years and years of training and experience, in “culture” such performers many times either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, then, it’s hard to think of a worse model for a leftish kind of politics than culture—which perhaps explains why, despite universities bulging with professors of art and literature proclaiming “power to the people,” the United States is as unequal a place today as it has been since the 1920s. For one thing, such a model stands in the way of critiques of American institutions built according to the opposite, “Carnegian,” theory—and many American institutions are built according to that theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer people doing the deciding, the greater the variability in outcome—and hence the greater the potential role for chance. It’s worse when one considers that the court is an institution that gains new members only irregularly: appointing a new Supreme Court justice depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
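The variability claim can be made concrete the same way. In the sketch below (again my own toy model, with invented numbers, not a claim about how actual courts deliberate), each decision is the average of n independent judgments; the spread of possible outcomes shrinks roughly with the square root of n, so nine deciders leave far more room for chance than nine hundred would.

```python
# A minimal sketch, assuming each decision is the mean of n independent,
# normally distributed judgments. Invented numbers, for illustration only.
import random
import statistics

def outcome_spread(n_deciders, trials=2000, seed=7):
    rng = random.Random(seed)
    outcomes = [
        statistics.fmean(rng.gauss(0, 1) for _ in range(n_deciders))
        for _ in range(trials)
    ]
    # Standard deviation across trials: how much the decision could have
    # varied purely by the chance of who happened to be deciding.
    return statistics.stdev(outcomes)

for n in (1, 9, 101, 901):
    print(f"{n:>4} deciders: outcome spread {outcome_spread(n):.3f}")
```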

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of a nation’s most important decisions potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, depends on a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.

 

 

 

Plaintive Anthems

22 January Joel Paterson and the Modern Sounds, Eddie Clendening, Ruby Ann
23 January Hoyle Brothers

“Adieu! adieu! thy plaintive anthem fades,” as the poet rued, not knowing he would be remembered for not being James Dean, and Eddie Clendening and Ruby Ann played their final show in Chicago Friday night—at least for a while—at the always-urbane Ventrella’s Cafe on the North Side. The show itself was terrific—Eddie’s voice is more than remarkable, while the Modern Sounds are perhaps the best dance band in the city—but the occasion was split between two moods: joy for Eddie, who’s moving on as part of Million Dollar Quartet’s expedition to Mt. Broadway, but also a sense of, if not melancholy, at least diminishment. Not only for Eddie, but also for Ventrella’s, which for a while in 2009 was maybe the best venue in the city: BYOB, only a nominal cover, ridiculously talented musicians, a dance-friendly crowd, and a passable dance floor. We are in the season of endings, and not yet of beginnings, and so maybe it was inevitable that the evening ended forcibly with a martial display by Chicago’s Finest.

The CPD, however, walked away with nothing more than a citation for an unauthorized jukebox, leaving the evening’s final note in the key of slapstick. My friend the connoisseur of obscure emotional states argued it was the perfect Chicago night: screaming joy mixed with sorrow, ending in farce blended with terror of authority and the historical memory of the Depression. No one who heard the theory disagreed, though it may not have been heard by many, as we did the quickstep out the back door while the badges flashed in front.

The humor at the end of Friday was mirrored by the weather late Saturday: the temperature rose enough to raise hopes of another season with it, hopes not dashed by the brief rainstorm that accompanied it. Saturday night was Friday’s younger brother: the mixtures of humor and pathos were reversed. The occasion was the Hoyles’ final show before their departure to points south—I am reliably informed that the front of Hoyles’ headquarters has a sign, “Gone to Texas,” ready for placement at the proper time. The Hoyles are no strangers to these notes, and they’ve also been the subject of glowing reviews in both the Reader and the Trib lately, so I’m not going to discuss much about their last Chicago show before their expedition to Mt. Austin. But the brief rainstorm last night, towards the end of the last set, brought with it whispers of the next season.

Saturday’s show had other hints and allegations surrounding it, along both directions of time’s arrow. Three younger dancers turned up—it was odd to find out I’m now something of a veteran, after a bit over a year—curious and questioning. And at the same time another friend, a veteran in more or less the same sense I am, i.e. not very, asked what I knew of the past of this dance stuff, a history that more or less by chance I know something about. As I watched those younger kids struggle with 6-count turns, it occurred to me that writing that knowledge down, however poor it is, might be valuable to someone, sometime.

To tell the story of where we’ve been, after all, Lincoln remarks in a speech somewhere, is to tell the story of where we are going. I don’t particularly know where things are going, but I’ll tell what I know in a short capsule history anyway. (It would be great if people could add what they know—people, places, bands.) The genre of the story is itself kind of fascinating. As it’s been told to me by more than one source, the story of swing dance in Chicago, at least since the revival at the end of the last century, has a kind of post-apocalyptic, “and I alone am escaped to tell thee” flavor. It’s also reminiscent of And the Band Played On, in that it is an epidemiological story, with a “Patient Zero.” There’s a history as well as a mythology about it.

The patient was Howard B., who brought swing back to Chicago for the first time in decades sometime during the mid-90s, probably at least by 1996 and not earlier than 1993. Howard was a doctor who’d transferred from Los Angeles, where he’d been part of the Pasadena Ballroom Dance Association along with Erin Stevens and Steven Mitchell, two dance partners who, after viewing some old films (Hellzapoppin’ and A Day at the Races), were led down an investigatory trail that eventually wound up at Frankie Manning, a New York City postal worker who’d long since left his dance career behind. Arriving in Chicago, Howard danced all over town, including at the country bar Whiskey River (which later became the celebrated Liquid), where eventually someone asked him just what he was doing. In time Howard began teaching a small group of students—some of the people now noted as Chicago’s best instructors.

Yet after some period described to me as anywhere from a few months to as much as a year, the group remained small. They practiced together by dancing to old records, apparently in each other’s apartments, but nowhere else. That, however, changed with the first show by the Chicago band the Mighty Blue Kings, at MadBar on Damen in the space now known as Cans. At this point mythology begins to turn to history—the Blue Kings formed sometime in 1994, and put out their first record in 1996. For several years thereafter, Howard rented space in a dance studio on Lincoln Avenue in the city, teaching swing once a week, advertising by word of mouth and a small ad placed in the Chicago Reader that read, simply, “Learn to Lindy.” Around this time, a few of the pioneers made the trip to Catalina Island in California, home to a dance camp hosted by Erin Stevens and Steven Mitchell. And then, in the spring of 1998, the clothing company Gap released a television commercial—the famous “Khakis Swing” spot.

What happened next is fairly well-known to many, and is a good example of what the writer Malcolm Gladwell, borrowing from epidemiological studies, has called “the tipping point”: a break-out moment when what was underground becomes mainstream. Whiskey River became Liquid, and classes that had once had only a few students were now being taught to dozens, even hundreds at a time. The stories of those who were there at the time all have certain common threads: many were completely absorbed, to the exclusion of everything else. Some others, particularly those who were there at the beginning, were making money, sometimes substantial amounts.

In the history of the Nazi U-boat campaign later called the Battle of the Atlantic, the year or so directly after the entry of the United States into the war was known to German submariners as the “Second Happy Time,” because it was so easy to torpedo U.S. ships: the United States did not do even elementary things like grouping merchant ships into convoys for safety in numbers, or bothering to black out the lights on the coast. Analogously, the time from the summer of 1998 through, as near as I can make out, sometime in the fall of 2001, was a “happy time” for swing—bands played out everywhere, everyone knew or wanted to know how to dance, and money was, at least for a fortunate few, almost falling from the sky.

It’s difficult to know just why that time ended. Perhaps the “swing fad,” like other such fads before it—disco, anyone?—had simply run its course, like a disease that has just run out of potential victims. Eventually anyone left not already brought down by the illness has developed some resistance to it. Dance is something for young single people, after all: eventually most dancers get steady jobs, marry, have kids—the kinds of things that don’t allow for late nights chasing bands and dance partners. Some people, though, have speculated that there might be some correlation with those jetliners in New York City in the early fall of 2001. It isn’t hard to see some historical rhyme like that: swing dance’s big revival happened during that time of “irrational exuberance” called by some the Roaring ’90s. Maybe in that way the end of swing’s “happy time” merely foreshadowed the economic crash that we are seeing now, the crash held back from its natural arrival by the war and unrelenting Republican military Keynesianism. But all songs, as Keats knew before he could know he was James Dean, come to an end, anthems or not.