High Anxiety

Now for our mountain sport …

—William Shakespeare
    Cymbeline
Act III, Scene 3

[Images: the Wade Hampton Golf Club sign, and the entrances to Wade Hampton Golf Club and High Hampton Inn and Country Club, North Carolina]

Walt Whitman once said, as anyone who saw Bull Durham knows, that baseball would draw America together after the Civil War: the game, the poet said, would “repair our losses and be a blessing to us.” Many Americans have not lost this belief in the redemptive power of sports: as recently as 2011 John Boehner, then-Speaker of the House of Representatives, played a much-ballyhooed round of golf with President Barack Obama, and Golf Digest, along with many other outlets, presented the event as presaging a new era of American unity. The “pair can’t possibly spend four hours keeping score, conceding putts, complimenting drives, filling divots, retrieving pond balls, foraging for Pro V1s and springing for Kit Kats off the snack cart,” argued the magazine, “without finding greater common ground.” Golf would thus be the antidote to what the late Columbia University history professor Richard Hofstadter, in 1964, called the “paranoid style”: the “heated exaggeration, suspiciousness, and conspiratorial fantasy” that Hofstadter found to be a common theme in American politics then, and whose significance has seemingly only grown since. The surface approval of the “golf summit” seemed warranted because golf is, after all, a game that cannot really be played without trust in your opponents—it’s only on the assumption that everyone is honest that the game can work at all. Yet, as everyone knows by now, the summit failed: Boehner was, more or less, forced out of office this summer by those members of his party who, Boehner said, got “bent out of shape” over his golf with the president. While golf might, in other words, furnish a theoretical model for harmonious bipartisanship, in practice it has proved largely useless for preventing political polarization—a result that anyone who has traveled Highway 107 in western North Carolina might have realized. Up there, among the Great Smoky Mountains, there sits a counterexample to the dream of political consensus: the Wade Hampton Golf Club.

Admittedly, the idea that a single golf club could furnish evidence enough to smack down the flights of fancy of a Columbia University professor like Hofstadter—and a Columbia University alumnus like Barack Obama—might appear a bit much: there’s a seeming disconnect between the weightiness of the subject matter and the evidential value of an individual golf club. What could the existence of the Wade Hampton Golf Club add to (or subtract from) Hofstadter’s assertions about the dominance of this “paranoid style,” examples of which range from the anti-Communist speeches of Senator Joseph McCarthy in the 1950s to the anti-Catholic, “nativist” movements of the 1830s and 1840s to the Populist denunciations of Wall Street during the 1890s? Yet the existence of the Wade Hampton Golf Club does constitute strong evidence against one of the pieces of evidence Hofstadter adduces for his argument—and in doing so unravels not only the rest of Hofstadter’s spell, as a kitten does a ball of string, but also the fantasy of “bipartisanship.”

One of the examples of “paranoia” Hofstadter cited was the belief held by “certain spokesmen of abolitionism who regarded the United States as being in the grip of a slaveholders’ conspiracy”—a view that, Hofstadter implied, was not much different from the contemporary belief that fluoridation was a Soviet plot. But a growing number of historians now believe that Hofstadter was wrong about those abolitionists: according to historian Leonard Richards of the University of Massachusetts, for instance, there’s a great deal of evidence for “the notion that a slaveholding oligarchy ran the country—and ran it for their own advantage” in the years prior to the Civil War. The point is more than an academic one: if it’s all just a matter of belief, then the idea of bipartisanship makes a certain kind of sense; all that matters is whether those we elect can “get along.” But if not, then what matters is building the correct institutions, rather than electing the right people.

Again, that seems a rather larger question than the existence of a golf club in North Carolina is capable of answering. The existence of the Wade Hampton Golf Club, however, tends to reinforce Richards’ view on the strength of its name alone: the very biography of the man the golf club was named for, Wade Hampton III, lends credence to Richards’ notion about the real existence of a slave-owning, oligarchical conspiracy, because Hampton was, after all, not only a Confederate general during the Civil War but also the possessor (according to the website of the Civil War Trust, which attempts to preserve Civil War battlefields) of “one of the largest collections of slaves in the South.” Hampton’s career, in other words, demonstrates just how entwined slaveowners were with the “cause” of the South—and if secession was largely the result of a slave-owning conspiracy during the winter of 1860, it becomes a great deal easier to think that said conspiracy did not spring up, fully grown, only then.

Descended from an obscenely wealthy family whose properties stretched from near Charleston in South Carolina’s Lowcountry to Millwood Plantation near the state capital of Columbia and all the way to the family’s summer resort of “High Hampton” in the Smokies—upon the site of which the golf club is now built—Wade Hampton was intimately involved with the Southern cause: not only was he one of the richest men in the South, but at the beginning of the war he organized and financed a military unit (“Hampton’s Legion”) that would, among other exploits, help win the first big battle of the war, near the stream of Bull Run. By the end of the war Hampton had become, along with Nathan Bedford Forrest, one of only two men without prior military experience to achieve the rank of lieutenant general. In that sense, Hampton was exceptional—only eighteen other Confederate officers achieved that rank—but in another he was representative: as recent historical work shows, much of the Confederate army had direct links to slavery.

As historian Joseph T. Glatthaar has put the point in his General Lee’s Army: From Victory to Collapse, “more than one in every four volunteers” for the Confederate army in the first year of the war “lived with parents who were slaveholders”—as compared with the general population of the South, in which merely one in every twenty white persons owned slaves. If non-family members are included, or if economic connections—such as those between soldiers and the slaveholders to whom they rented land or sold crops before the war—are counted, then “the vast majority of the volunteers of 1861 had a direct connection to slavery.” And if the slaveowners could create an army capable of holding off the power of the United States for four years, it seems plausible they might have joined together prior to outright hostilities—which is to say that Hofstadter’s insinuations about the relative sanity of “certain” abolitionists (among them, Abraham Lincoln) don’t have the value they may once have had.

After all, historians have determined that the abolitionists were certainly right to suspect the motives of the slaveowners. “By itself,” wrote Roger Ransom of the University of California not long ago, “the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war … [in] the fall of 1860.” “On the eve of the war,” as another historian noted in the New York Times, “cotton comprised almost 60 percent of America’s exports,” and the slaves themselves, as yet another historian—quoted by Ta-Nehisi Coates in The Atlantic—has observed, were “the largest single financial asset in the entire U.S. economy, worth more than all manufacturing and railroads combined.” Collectively, American slaves were worth 3.5 billion dollars—at a time when the entire budget for the federal government was less than eighty million dollars. In other words, American slaveowners could have bought the entire U.S. government roughly forty-three times over.
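
The multiple follows from simple division of the two figures just quoted (taking the federal budget at the full eighty million):

\[
\frac{\$3{,}500{,}000{,}000}{\$80{,}000{,}000} \approx 43.75
\]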

Slaveowners thus had, in the words of a prosecutor, both means and motive to revolt against the American government; what’s really odd about the matter, however, is that Americans have ever questioned it. The slaveowners themselves fully admitted the point at the time: in South Carolina’s “Declaration of the Immediate Causes which Induce and Justify the Secession of South Carolina from the Federal Union,” for instance, the state openly lamented the election of a president “whose opinions and purposes are hostile to slavery.” And not just South Carolina: “Seven Southern states had seceded in 1861,” as the dean of American Civil War historians, James McPherson, has observed, “because they feared the incoming Lincoln administration’s designs on slavery.” When those states first met together at Montgomery, Alabama, in February of 1861, it took them only four days to promulgate what the New York Times called “a provisional constitution that explicitly recognized racial slavery”; in a March 1861 speech Alexander Stephens, who would become the vice president of the Confederate States of America, argued that slavery was the “cornerstone” of the new government. Slavery was, as virtually anyone who has seriously studied the matter has concluded, the cause motivating the Southern armies.

If so—if, that is, the slaveowners created an army powerful enough to hold off the United States for four years simply in order to protect their financial interests in slave-owning—it seems plausible they might have joined together before the beginning of outright hostilities. Further, if there was a “conspiracy” to begin the Civil War, then the claim that there was one in the years and decades before the war becomes that much more believable. And if that possibility is tenable, then so is the claim by Richards and other historians—themselves merely following a notion that Abraham Lincoln himself endorsed in the 1850s—that the American constitution formed “a structural impediment to the full expression of Northern voting power” (as one reviewer has put it), and thus that the answer to political problems is not “bipartisanship,” or in other words the election of friendlier politicians, but rather structural reform.

Such, at least, might be the lesson anyone might draw from the career of Wade Hampton III, Confederate general—in light of which it’s suggestive that the Wade Hampton Golf Club is not some relic of the nineteenth century. Planning for the club began, according to the club’s website, in 1982; the golf course was not completed until 1987, when it was named “Best New Private Course” by Golf Digest. More suggestive still is the fact that under the original bylaws, “in order to be a member of the club, you [had] to own property or a house bordering the club”—rules that resulted, as one golfer has noted, in a club of “120 charter and founding members, all from below the Mason-Dixon Line: seven from Augusta, Georgia, and the remainder from Florida, Alabama, and North Carolina.” “Such folks,” as Bradley Klein once wrote in Golfweek, “would have learned in elementary school that Wade Hampton III, 1818-1902, who owned the land on which the club now sits, was a prominent Confederate general.” That is, in order to become a member of the Wade Hampton Golf Club you probably knew a great deal about the history of Wade Hampton III—and you were pretty okay with that.

The existence of the Wade Hampton Golf Club does not, to be sure, demonstrate a continuity between the slaveowners of the Old South and the present membership of the club that bears Hampton’s name. But if it is true, as many Civil War historians now say, that prior to 1860 there was a conspiracy to maintain an oligarchic form of government, then what are we to make of a present in which—as former Secretary of Labor Robert Reich recently observed—“the richest one-hundredth of one percent of Americans now hold over 11 percent of the nation’s total wealth,” a proportion greater than at any time since 1929 and the start of the Great Depression? Surely, one can only surmise, the answer is easier to find than a mountain hideaway far above the Appalachian clouds, and requires no poetic vision to see.


Instruments of Darkness

 

And oftentimes, to win us to our harm,
The instruments of darkness tell us truths …
—William Shakespeare
    The Tragedy of Macbeth
Act I, Scene 3, 132-33 (1606)

 

This year’s Masters demonstrated, once again, the truism that nobody watches golf without Tiger Woods: last year’s Masters, played without Tiger, had the lowest ratings since 1957, while the ratings for the Saturday round of this year’s tournament (featuring a charging Woods) were up nearly half again as much. So much is unsurprising; what was surprising, perhaps, was the reappearance of a journalistic fixture from the days of Tiger’s past: the “pre-Masters Tiger hype story.” It’s a recurrence that suggests Tiger may be taking cues from another ratings monster: the television series Game of Thrones. But if so—with a nod to Ramsay Snow’s famous line in the show—it suggests that Tiger himself doesn’t think his tale will have a happy ending.

The prototype of the “pre-Masters” story was produced in 1997, the year of Tiger’s first Masters win: before that “win for the ages,” it was widely reported that the young phenom had shot a 59 during a practice round at Isleworth Country Club. At the time the story seemed innocuous, but in retrospect there are reasons to interrogate it more deeply—not to say it didn’t happen, exactly, but to question whether its release was part of a larger design. After all, Tiger’s father Earl—still alive then—would have known just what to do with such a story.

Earl, as all golf fans know, created and disseminated the myth of the invincible Tiger to anyone who would listen in the late 1990s: “Tiger will do more than any other man in history to change the course of humanity,” Gary Smith quoted him saying in the Sports Illustrated story (“The Chosen One”) that, more than any other, sold the Gospel of Woods. There is plenty of reason to suspect that the senior Woods deliberately created this myth as part of a larger campaign: Earl, as a former member of the U.S. Army’s Green Berets, knew the importance of psychological warfare.

“As a Green Beret,” writes John Lamothe in an academic essay on both Woods men, elder and junior, Earl “would have known the effect … psychological warfare could have on both the soldier and the enemy.” As Tiger himself said in a 1996 interview for Orange Coast magazine—before the golfer put up a barrier between himself and the press—“Green Berets know a lot about psychological torture and things like that.” Earl, for his part, remarked that while raising Tiger he “pulled every dirty, nasty trick I could remember from psychological warfare I learned as a Green Beret.” Both men described this training as a matter of rattling keys or ripping Velcro at inopportune moments—but it’s difficult not to wonder whether it went deeper.

At the moment of their origin in 1952, after all, the Green Berets, or Special Forces, were a subsection of the Psychological Warfare Staff at the Pentagon: psychological warfare, in other words, was part of their founding mission. And as Lamothe observes, part of the goal of psychological warfare is to create “confidence” in your allies “and doubt in the competitors.” As early as 2000, the sports columnist Thomas Boswell was describing how Tiger “tries to imprint on the mind of every opponent that resistance is useless”—a tactic that Boswell claimed the “military calls … ‘overwhelming force,’” and a tactic that is far older than the game of golf. Consider, for instance, a story from golf’s homeland of Scotland: the tale of the “Douglas Larder.”

It happened at a time of year not unfamiliar to viewers of the Masters: Palm Sunday, in April of 1308. The story goes that Sir James Douglas—an ally of Robert the Bruce, who was in rebellion against the English king Edward I—returned that day to his family’s home, Douglas Castle, which had been seized by the English. Taking advantage of the holiday, Douglas and his men—essentially, a band of guerrillas—slaughtered the English garrison within the church where they worshipped, then beheaded them, ate the Easter feast the Englishmen had no more use for, and subsequently poisoned the castle’s wells and destroyed its supplies (the “Larder” part of the story’s title). Lastly, Douglas set the English soldiers’ bodies afire.

To viewers of the television series Game of Thrones, or readers of the series of books it is based upon (A Song of Ice and Fire), the story might sound vaguely familiar: the “Douglas Larder” is, as popular historian William Rosen has pointed out, one source of the event known from the television series as the “Red Wedding.” Although the television event also borrows from the medieval Scottish “Black Dinner” (which is perhaps closer in terms of setting) and the later incident known as the Massacre at Glencoe, the “Red Wedding” still reproduces the most salient details of the “Douglas Larder.” In both, the attackers take advantage of their prey’s reliance on piety; in both, the bodies of the dead are mutilated in order to heighten the monstrous effect.

To a modern reader, such a story reads simply as a record of barbarism—a reading that forgets that medieval people, though far less educated, were just as intelligent as anyone alive today. Douglas’ actions were not meant for horror’s sake, but to send a message: the raid on the castle “was meant to leave a lasting impression … not least upon the men who came to replace their dead colleagues.” Acts like the attack on his own castle demonstrate how the “Black Douglas”—“mair fell than wes ony devill in hell,” according to a contemporary account—was “an early practitioner of psychological warfare”: he knew how “fear alone could do much of the work of a successful commander.” It seems hardly credible to think Earl Woods—a man who’d been in combat in the guerrilla war of Vietnam—did not know the same lesson. Nor is it credible to think that Earl didn’t tell Tiger about it.

Certainly, Tiger himself has been a kind of Douglas: he won his first Masters by 12 shots, and in the annus mirabilis of 2000 he won the U.S. Open at Pebble Beach by 15. Displays like that, many have thought, functioned similarly to Douglas’ attacks, if less macabrely. The effect has even been documented academically: in 2008’s “Dominance, Intimidation, and ‘Choking’ on the PGA Tour,” professors Robert Connolly and Richard Rendleman found that being paired with Tiger cost other tour pros nearly half a shot per round from 1998 to 2001. The “intimidation factor,” that is, has been quantified—so it seems jejune at best to think that nobody connected to Tiger would have called his attention to the research, even if he had not been aware of the effect already.

Releasing a story prior to the Masters, then, can easily be seen as part of an attempt to revive Tiger’s heyday. But what’s interesting about this particular story is its difference from the 1997 version: then, Tiger just threw out a raw score; now, the story comes dressed in a peculiarly complicated costume. As retailed by Golf Digest’s Tim Rosaforte on the Tuesday before the tournament, the story goes like this: Tiger had “recently shot a worst-ball 66 at his home course, Medalist Golf Club.” Alex Myers, also in Golf Digest, in turn explained that “a worst-ball 66 … is not to be confused with a best-ball 66 or even a normal 66 for that matter,” because what “worst-ball” means is that “Woods played two balls on each hole, but only played the worst shot each time.” Why not just say, as in 1997, that Tiger shot some ridiculously low number?
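
As an aside, a toy simulation can suggest just how much harder worst-ball scoring is. The sketch below is my own construction, not anything from the reporting, and it simplifies the format (it takes the worse of two scores per hole rather than the worse of two attempts per shot, which if anything understates the penalty), with per-hole odds that are pure invention:

```python
import random

def hole_score(rng):
    # hypothetical pro on a par 4: birdie 20%, par 60%, bogey 20% (invented odds)
    return rng.choices([3, 4, 5], weights=[0.2, 0.6, 0.2])[0]

rng = random.Random(1)
rounds = 10_000
# normal play: one score per hole
normal = sum(sum(hole_score(rng) for _ in range(18))
             for _ in range(rounds)) / rounds
# "worst-ball" (per-hole simplification): keep the worse of two scores
worst = sum(sum(max(hole_score(rng), hole_score(rng)) for _ in range(18))
            for _ in range(rounds)) / rounds
print(f"normal average: {normal:.1f}  worst-ball average: {worst:.1f}")
```

Even this softened version opens a gap of five or six shots a round with these invented odds, which is why a worst-ball 66 reads, to golfers, as even more outlandish than the 59 of 1997.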

The answer, I think, can be understood by way of the “Red Wedding”: just as George R.R. Martin, in order to write the A Song of Ice and Fire books, has revisited and revised many episodes of medieval history, so too is Tiger attempting to revisit his own past—a conclusion that would be glib were it not for the very make-up of this year’s version of the pre-Masters story itself. After all, to play a “worst-ball” is to time-travel: on every shot the golfer returns to the same spot and replays the moment, which is, in effect, to revise—or rewrite—the past. Not only that, but—and in this it is very much like both Scottish history and Game of Thrones—because only the worse result counts, it is also to guarantee a “downer ending.” Maybe Tiger, then, is suggesting to his fans that they ought to pay more attention.

Thought Crimes

 

How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?
—Sherlock Holmes
    The Sign of Four (1890)

 

Whence heavy persecution shall arise
On all, who in the worship persevere
Of spirit and truth; the rest, far greater part,
Will deem in outward rites and specious forms
Religion satisfied; Truth shall retire
Bestuck with slanderous darts, and works of faith
Rarely be found: So shall the world go on …
—John Milton
   Paradise Lost
   Book XII, 531-37

 

When Tiger Woods, just after four o’clock Eastern time, hit a horrific duck-hook tee shot on Augusta National’s 13th hole during the third round of the Masters tournament Saturday, the golfer sent one of George Carlin’s “seven dirty words” after it, live on air. About an hour later, around a quarter after five, the announcer Ian Baker-Finch caught himself before uttering a taboo phrase: although he began by saying “back,” the Australian quickly corrected himself by saying “second nine.” To the novice Masters viewer the two incidents might appear quite different (Baker-Finch’s near-slip, that is, being far less offensive), but longtime viewers are aware that, had Baker-Finch not saved himself, his error would have been the more serious incident—to the extent, in fact, that he might have lost his job. Just why that is so is difficult to explain to outsiders unfamiliar with Augusta National’s particular vision of decorum; it may, however, be explained by one of the broadcast’s few commercials: an advert whose tagline connects a golf commentator’s innocent near-mistake to an argument about censorship conducted at the beginning of this year—in Paris, at the business end of a Kalashnikov.

France is a long way from Georgia, however, so let’s begin with why what Ian Baker-Finch almost said would have been far worse than Tiger’s f-bomb. In the first place, that is because, as veterans of watching the Masters know, the announcing team is held to very strict standards largely unique to this sporting event. Golf is, in general, far more concerned with “decorum” and etiquette than other sports—it is, as its enthusiasts often remark, the only one in which competitors regularly call penalties on themselves—but the Masters tournament examines the language of its broadcasters to an extent unknown even at other golf tournaments.

In 1966, for example, broadcaster Jack Whitaker—as described in the textbook Sports Media: Planning, Production, and Reporting—“was canned for referring to Masters patrons as a ‘mob,’” while in 1994 Gary McCord joked (as told by Alex Myers in Golf Digest) “that ‘bikini wax’ is used to make Augusta National’s greens so slick”—and was unceremoniously dumped. Announcers at the Masters, in short, are well aware that they walk a fine line.

Hence, while Baker-Finch’s near-miss was by no means comparable to McCord’s attempts at humor, it was serious because it would have broken one of the known “Augusta Rules,” as John Feinstein called them in Moment of Glory: The Year Underdogs Ruled Golf. “There are no front nine and back nine at Augusta but, rather, a first nine and a second nine,” Feinstein wrote, a rule that, it’s said, developed because the tournament’s founders, the golfer Bobby Jones and the club chairman Clifford Roberts, felt “back nine” sounded too close to “back side.” The Lords of Augusta, as the club’s members are sometimes called, will not stand for “vulgarity” from their announcing team—even if the golfers they are watching are sometimes much worse.

Woods, for example (as the Washington Post reported), “followed up a bad miss left off the 13th tee with a curse word that was picked up by an on-course microphone, prompting the CBS announcers to intone, ‘If you heard something offensive at 13, we apologize.’” Yet even had Baker-Finch uttered the unutterable, he would only have suggested what Woods baldly verbalized—and still it’s unimaginable that Woods could suffer the fate a CBS announcer would, or be penalized in any way. The uproar that would follow if, for instance, the Lords decided to ban Tiger from further tournaments would make all previous golf scandals appear tame.

The difference in treatment could, of course, be justified by the fact that Woods is a competitor in (and four-time winner of) the tournament, while announcers are ancillary to it. In philosophic terms, players are essential while announcers are contingent: players just are the tournament because without them, no golf. That isn’t as possible to say about any particular broadcaster (though, when it comes to Jim Nantz, a fixture of the broadcast since 1986, it might be close). From that perspective it might make sense that Tiger’s heat-of-the-moment f-bomb is not as significant as a slip of the tongue by an announcer trained to speak in public would be.

Such, at least, might be a rationale for the differing treatment accorded golfers and announcers: so far as I am aware, neither the golf club nor CBS has come forward with an explanation of the difference. It was while I was turning this over in my mind that one of the tournament broadcast’s few commercials came on—and I realized just why the difference between Tiger’s words and, say, Gary McCord’s in 1994 had stuck in my brain.

The ad in question consisted of different people reciting, over and over again, a line once spoken by IBM pioneer Thomas Watson in 1915: “All of the problems of the world could be settled easily if men were only willing to think.” Something about this phrase—repeated so often it became quite literally like a mantra, defined as a “sacred utterance, numinous sound” by Wikipedia—rattled something in my head, which prompted a little Internet investigation. It seems that, for IBM, that last word—think—became a catchword after 1915: it was plastered on company ephemera, gave the company magazine its name, and even, in recent times, became the basis for product names such as the ThinkPad. The sentence, it could be said, is the official philosophy of the company.

As philosophies go, it seems inarguable that this is rather a better one than, for instance, one that might demand “silence your enemies wherever possible.” It is, one might say, a hopeful sentence: if only people were willing to use their rationality, the difficult and the intractable could be vanquished. “Think,” in that sense, is a sentiment that seems quite at odds with the notion of censorship: without airing what someone is thinking, it appears impossible to believe that anything could be settled. In order to get people to think, it seems to follow, they must first be allowed to talk.

Such, at least, is one of the strongest pillars of the concept of “free speech,” as the English and law professor Stanley Fish has pointed out. Fish quotes, as an example of the argument, the Chairman of the National Endowment for the Humanities, James A. Leach, who gave a speech in 2009 claiming that “the cornerstone of democracy is access to knowledge.” In other words, in order to achieve the goal outlined by Watson (solving the world’s problems), it’s necessary to put everyone’s views in the open in order that they might be debated—a notion usually conceptualized, in relation to American law, as the “marketplace of ideas.”

That metaphor traces back to American Supreme Court justice Oliver Wendell Holmes, Jr.’s famous dissent in a case called Abrams v. United States, decided in 1919. “The ultimate good desired,” as Holmes wrote in that case (interestingly, in light of his theory, against the majority opinion), “is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” That notion, in turn, can (as Fish observes) be followed back to the English philosopher John Stuart Mill, and even beyond.

“We can never be sure that the opinion we are endeavoring to stifle is a false opinion,” Mill wrote in his On Liberty, “and if we were sure, stifling it would be an evil still.” Yet further back, the thought connects to John Milton’s Areopagitica, where the poet wrote: “Let [Truth] and Falsehood grapple; who ever knew Truth put to the worse in a free and open encounter?” That is, so long as opinions can be freely shared, any problem could in principle be solved—more or less Thomas Watson’s point in 1915.

Let’s be clear, however, about what is and what is not being said. The words “in principle” above are important: I do not think that Watson or Mill or Milton or Holmes would deny that there are many practical reasons why it might be impossible to solve problems with a meeting or a series of meetings. No one believes, for instance, that the threat of ISIS could be contained by a summit meeting between ISIS and other parties—the claim that Holmes & Watson (smirk) et al. would make is just that said threat could be solved if only that organization’s leaders would agree to a meeting. Merely objecting that such conceivable meetings are often impractical isn’t, in that sense, a strong objection to the idea of the “idea market”—which asserts that in conditions of what could be called “perfect communication,” disagreement is (eventually) impossible.

That however is precisely why Fish’s argument against the “market” metaphor is such a strong one: it is Fish’s opinion that the “marketplace” metaphor is just that—a metaphor, not a bedrock description of reality. In an essay entitled “Don’t Blame Relativism,” in fact, Fish apparently denies “the possibility of describing, and thereby evaluating” everything “in a language that all reasonable observers would accept.” That is, he denies the possibility that is imagined by Thomas Watson’s assertion regarding “[a]ll of the problems of the world”: the idea that, were only everyone reasonable, all problems could be solved.

To make the point clearer: while in Watson’s metaphor (which is also Milton’s and Mill’s and Holmes’), in theory everything can be sorted out if only everyone came to the bargaining table, to Fish such a possibility is not only practically impossible but theoretically impossible. Fish’s objection to the “market” idea isn’t just that it is difficult, for instance, to find the right translators to speak to different sides of a debate in their own language, but that even were all conditions for perfect communication met, that would not guarantee the end of disagreement.

It’s important to note at this point that this is a claim Fish needs to make in order to make his argument stick, because if all he does is advance historically based arguments to the effect that at no point in human history has the situation described by Watson et al. ever existed, their partisans can counterclaim that just because no one has yet seen perfect communication, that’s no reason to think it might not someday be possible. Such partisans might, for example, quote Alice Calaprice’s The Quotable Einstein, which asserts that Einstein once remarked that “No amount of experimentation can prove me right; a single experiment can prove me wrong.” Or, as the writer Nassim Nicholas Taleb has put the same point while asserting that it ultimately traces back through John Stuart Mill to David Hume: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” In other words, Fish could be right that no such perfect communication has ever existed, but it would be logically inconsistent to claim that such evidence implies it could never be possible.

To engage his opponents, then, Fish must take to the field of “theory,” not just adduce historical examples. That is why it is not enough for Fish to claim that regimes professing the creed of Watson and Holmes and so on have never actually followed that creed in reality—though he does make that argument. He points out, for instance, that even in the Areopagitica, otherwise a passionate defense of “free speech,” Milton allowed that while “free speech” is all well and good for most people most of the time, he does not mean to imply “tolerated popery” (i.e., Catholicism), because as that religion (according to Milton) “extirpates all religious and civil supremacies, so itself should be extirpate.”

In other words, Milton explains that anything that threatens the idea of “free speech” itself—as Catholicism, in Milton’s day arguably in the throes of the Inquisition, did so threaten—should not be included in the realm of protected speech, since that “which is impious or evil absolutely against faith or manners no law can possibly permit that intends not to unlaw itself.” And while it might be counterclaimed that in Milton’s time “free speech” was imperfectly realized, Fish also demonstrates that while Catholicism no longer constitutes a threat to modern “free speech” regimes, there are still exceptions to what can be said publicly.

As another American Supreme Court justice, Robert Jackson, would put the point centuries later, “the constitutional Bill of Rights”—including, one presumes, the free-speech-protecting First Amendment—is not “a suicide pact.” Or, as Fish himself put the same point, even today the most tolerant governments still ask themselves, regarding speech, “would this form of speech or advocacy, if permitted to flourish, tend to undermine the very purposes for which our society is constituted?” No government, in other words, can allow the kind of speech that threatens to end the practice of free speech itself.

Still, that is not enough to disrupt the “free speech” argument, because even if perfect communication has not yet been exemplified on this earth, that does not mean it could not someday be. To make his point, Fish has to go further, which he does in an essay called “There’s No Such Thing As Free Speech, And It’s A Good Thing Too.”

There, Fish says that he is not merely claiming that “saying something … is a realm whose integrity is sometimes compromised by certain restrictions”—that would be the above argument, in which historical evidence is advanced—but rather “that restriction, in the form of an underlying articulation of the world that necessarily (if silently) negates alternatively possible articulations, is constitutive of expression.” The claim Fish wants to make, in short—and it is important to see that it is the only argument that can confront the claims of the “marketplace of ideas” thesis—is that restrictions, such as Milton’s against Catholicism, aren’t the sad concessions we must make to an imperfect world, but are in fact what makes communication possible at all.

To those who take what’s known as a “free speech absolutism” position, such a notion might sound deeply subversive, if not heretical: the answer to pernicious opinions, in the view of the free speech absolutist, is not to outlaw them, but to produce more opinions—as Oliver Wendell Holmes, Mill, and Milton all advise. The headline of an editorial in Toronto’s Globe and Mail puts the point elegantly: “The lesson of Charlie Hebdo? We need more free speech, not less.” But what Fish is saying can be viewed in light of the story the writer Nassim Nicholas Taleb tells about how he derived his saying regarding “black swans” under the influence of John Stuart Mill and David Hume.

Taleb says that “Hume had been irked by the fact that science in his day … had experienced a swing from scholasticism, entirely based on deductive reasoning,” to “an overreaction into naive and unstructured empiricism.” The difficulty, as Hume recognized, “is that, without a proper method”—or, as Fish might say, a proper set of constraints—“empirical observations can lead you astray.” It’s possible, in other words, that amping up the production of truths will not—indeed, perhaps cannot—produce Truth.

In fact, Taleb argues (in a piece entitled “The Roots of Unfairness: the Black Swan in Arts and Literature”) that in reality, contrary to the fantasies of free speech absolutists, the production of very many “truths” may tend to reward a very few examples at the expense of the majority—and that thus “a large share of the success” of those examples may simply be due to “luck.” The specific market Taleb examines in this essay is the artistic and literary world, but like many other spheres—such as “economics, sociology, linguistics, networks, the stock market”—that world is subject to “the Winner-Take-All effect.” (Taleb reports that Robert H. Frank defined the effect, in his article “Talent and the Winner-Take-All Society,” as describing “markets in which a handful of top performers walk away with the lion’s share of total rewards.”) The “free speech absolutist” position would define the few survivors of the “truth market” as being, ipso facto, “the Truth”—but Taleb is suggesting that such a position takes a more sanguine view of the market than may be warranted.

The results of Taleb’s investigations imply that such may be the case. “Consider,” he observes, “that, in publishing, less than 1 in 800 books represent half of the total unit sales”—a phenomenon similar to that found by Art De Vany at the cinema in his Hollywood Economics. And while those results might be dismissed as the product of crass commercial motives, in fact the “academic citation system, itself supposedly free of commercialism, represents an even greater concentration” than that found in commercial publishing—and, perhaps even more alarmingly, there is “no meaningful difference between physics and comparative literature”: both display an equal amount of concentration. In all these fields, a very few objects are hugely successful, while the great mass sink like stones into the sea of anonymity.

Nor are such results confined to artistic or scientific production; they apply to subjects as diverse as the measurement of the coast of England and the error rates in telephone calls. George Zipf, for example, found that the rule applied to the “distribution of words in the vocabulary,” while Vilfredo Pareto found it applied to the distribution of income in any given society.

“Now,” asks Taleb, “think of waves of one meter tall in relation to waves of 2 meters tall”—there will inevitably be many more one meter waves than two meter waves, and by some magic the ratio between the two will be invariant, just as, according to what linguists call “Zipf’s Law,” “the most frequent word [in a given language] will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word,” and so on. As the Wikipedia entry for Zipf’s Law (from which the foregoing definition is taken) observes, the “same relationship occurs in many other rankings unrelated to language, such as the population ranks of cities in various countries, corporation sizes, income rankings, ranks of number of people watching the same TV channel, and so on.” All of these subjects are determined by what have come to be known as power laws—and according to some researchers, they even apply to subjects as seemingly immune to them as music.
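
Zipf’s prediction is easy to check on any text you have lying around. The sketch below is a minimal illustration of the law as just defined, not anything from Taleb or Zipf; the filename is hypothetical, and real corpora only follow the law approximately:

```python
# Count word frequencies and compare each rank against Zipf's prediction
# that the r-th most frequent word occurs about 1/r as often as the first.
from collections import Counter

with open("sample.txt") as f:            # hypothetical corpus file
    words = f.read().lower().split()

counts = Counter(words).most_common(10)
top = counts[0][1]                       # frequency of the most common word
for rank, (word, n) in enumerate(counts, start=1):
    print(f"{rank:2d}. {word:<12} actual={n:6d}  zipf~{top / rank:8.1f}")
```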

Zipf himself proposed that the distribution he discovered among words could be explained by a kind of physical process, rather than by discernment on the part of language-users: “people aim at minimizing effort in retrieving words; they are lazy and remember words that they have used in the past, so that the more a word is used, the more likely it is going to be used in the future, causing a snowball effect.” The explanation has an intuitive appeal: it is difficult to argue that “the” (the most common English word) communicates twice as much information as “be” (the second-most common English word). Still less does such an argument explain why word distributions should mirror the populations of American cities, say, or the height of the waves on Hawaii’s North Shore, or the metabolic rates of various mammals. The widespread appearance of such distributions suggests that, rather than being determined by forces “intrinsic” to each case, they are driven by a natural law that cares nothing for specifics.
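
That “snowball” mechanism can itself be put in code. The toy below is my own sketch of a preferential-attachment (Simon-style) process, not anything Zipf published: each new token is either a brand-new word, with small probability alpha, or a repeat of a token drawn from everything said so far, so already-common words are proportionally likelier to recur. The result is a heavy-tailed, roughly Zipf-like rank-frequency curve:

```python
import random
from collections import Counter

def simulate(n_tokens=100_000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    tokens = [0]                  # word ids; start with a single word
    next_id = 1
    for _ in range(n_tokens):
        if rng.random() < alpha:  # coin a brand-new word
            tokens.append(next_id)
            next_id += 1
        else:                     # repeat a past token: the rich get richer
            tokens.append(rng.choice(tokens))
    return Counter(tokens)

freqs = sorted(simulate().values(), reverse=True)
top = freqs[0]
for rank in (1, 2, 3, 5, 10):
    print(f"rank {rank:2d}: count={freqs[rank - 1]:7d}  zipf~{top / rank:9.1f}")
```

Nothing in the simulation knows anything about meaning, quality, or truth; the concentration emerges anyway.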

So far, it seems, “we have no clue about the underlying process,” as Taleb says. “Nothing can explain why the success of a novelist … bears similarity to the bubbles and informational cascades seen in the financial markets,” much less why both should “resemble the behavior of electricity power grids.” What we can know is that, while according to the “free speech absolutist” position “one would think that a larger size of the population of producers would cause a democratization,” in fact “it does not.” “If anything,” Taleb notes, “it causes even more clustering.” The “free speech absolutist” position predicts that the production of more speech results in a closer approximation of the Truth; the empirical results, however, suggest that more production merely makes a smaller number of products more successful, for reasons that may have nothing to do with their intrinsic merits.

These results suggest that perhaps Stanley Fish has it right about “free speech,” and thus that the Lords of Augusta—like their spiritual brethren who shot up the offices of Charlie Hebdo in early January this year—are right about the tight rein they hold over the announcers who work their golf tournament: Truth could be the result of, not the enemy of, regulation. The irony, of course, is that such a conclusion also suggests the necessity of regulation in areas aside from commentary about golf and golfers—a result that, one suspects, not only would not be favored by the Lords of the Masters, but also puts them in uncomfortable company. Allahu akbar, presumably, sounds peculiar with a Southern accent.

Bend Sinister

The rebs say that I am a traitor to my country. Why tis this[?] [B]ecause I am for a majority ruling, and for keeping the power in the people[?]
—Jesse Dobbins
Yadkin County, North Carolina
Federal pension application
Adjutant General’s Office
United States Department of War
3 July 1883.

Golf and (the theory of) capitalism were born in the same small country (Scotland) at the same historical moment, but while golf is entwined with the corporate world these days, there’s actually a profound difference between the two: for capitalism everything is relative, but the value of a golf shot is absolute. Every shot is strictly as valuable as every other. The difference can be found in the concept of arbitrage—which conventional dictionaries define as taking advantage of a price difference between two markets. Arbitrage is at the heart of the financial kind of capitalism we live with these days—it’s why everything is relative under the regime of capitalism—but it’s completely antithetical to golf: you can’t trade golf shots. Still, the concept of arbitrage does explain one thing about golf: how a golf club in South Carolina, in the Lowcountry—the angry furnace of the Confederacy—could come to be composed of Northern financial types and be named “Secession,” in a manner suggesting its members believed, if only half-jokingly, that the firebrands of 1860 might not have been all wrong.
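
For readers who want the dictionary definition made concrete, here is a minimal sketch, with all prices and quantities invented for illustration, of what an arbitrageur does:

```python
def arbitrage_profit(price_a: float, price_b: float, quantity: int) -> float:
    """Buy in the cheaper market, sell simultaneously in the dearer one."""
    buy, sell = min(price_a, price_b), max(price_a, price_b)
    return (sell - buy) * quantity

# e.g., the same asset quoted at 100.00 in one market and 100.75 in another:
print(arbitrage_profit(100.00, 100.75, 1_000))  # 750.0, before transaction costs
```

The trade is riskless in principle because nothing is really held; the value lies entirely in the difference between the two prices, which is the sense in which, under capitalism, “everything is relative.”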

That, however, gets ahead of the story, which begins with another golf tournament started on the tenth tee. Historically, as some readers may remember, I haven’t done well starting on the tenth hole. To recap: twice I’ve started loops for professional golfers in tournaments on the tenth tee, and each time my pro has blown the first shot of the day out of bounds. So when I saw where we were starting at Oldfield Country Club just outside of Hilton Head in South Carolina, site of an eGolf tournament, my stomach dropped as if I were driving over one of the arched bridges across the housing development’s canals.

Both of those tenth holes were also, coincidentally or not, dogleg rights: holes that begin at the tee, or upper left so to speak, and move toward the green in a more-or-less curved arc that ends, figuratively, on the lower right. In heraldry, a stripe in such a fashion is called a “bend sinister”: as Vladimir Nabokov put it in explaining the title of his novel by that name, “a bar drawn from the upper left to the lower right on a coat of arms.” My player was, naturally, assigned to start at the tenth tee. My history with such starts went unmentioned.

Superstitious nonsense aside, however, there are likely reasons why my pros have had a hard time with dogleg rights. Very often on a dogleg right, trees close off the right side quickly: there’s no room on the right to start the ball out and draw it back onto the fairway, which is to say that golfers who draw the ball are at a disadvantage. Since a draw is the typical ball flight of the better player—even if the very longest hitters often play a “power fade”—it’s perhaps not accidental that marginal players (the only type I, as an unproven commodity, might hope to work for) tend to be drawers of the ball.

Had I known what I found out later, I might have been more anxious: my golfer had “scrapped … Operation Left to Right”—a project designed to enable him to hit a fade on command—all the way back in 2011, as detailed in a series of Golf Channel articles about him and his struggles in golf’s minor leagues (“The Minors,” golfchannel.com). His favorite ball shape was a draw, a right-to-left shot, which is just about the worst kind of shot you can have on a dogleg-right hole. The tenth at Oldfield had, of course, just that kind of shape.

Already, the sky was threatening, and the air had a chill to it: the kind of chill that can cause the muscles in your hand to be less supple, which can make it just that much harder to “release” the clubhead—which can cause a slice, a left-to-right movement of the ball. Later on my player actually would lose several tee shots to the right, all of them push-fades, including a tough-to-take water ball on the twelfth (our third) hole, a drivable par four. Eventually the rain would become so bad that the final round was canceled the next day, which left me at loose ends.

Up past Beaufort there’s a golf club called Secession—a reference to South Carolina’s pride of place in the events leading up to the Civil War: it was the first state to secede, in December of 1860, and it actually helped persuade the other Southern states to secede with it by sending encouraging emissaries to those states. Yet while the name might appear deeply Southern, the membership is probably anything but. Secession, the golf club, is an extremely private course that has become what Augusta began as: a club for the financial guys of New York and Chicago to visit and gamble large sums on golf. Or, to put it another way, a club for the spiritual descendants of the guys who financed Abraham Lincoln’s war.

You might think, of course, that such a place would have been somewhat affected by the events of the past five years or so; in fact, no: on the day I stopped in, every tee box seemed filled with foursomes, quite a few of them trailed by loopers carrying doubles. Perhaps I should have known better, since as Chris Lehmann at The Baffler has noted, the “top 1 percent of income earners have taken in fully 93 percent of the economic gains since the Great Recession.” In any case, my errand was unsuccessful: I found out, essentially, that I would need some kind of clout. So, rather than finding my way back directly, I spent a pleasant afternoon in Beaufort. While there, I learned the story of one Robert Smalls, namesake of a number of the town’s landmarks.

“I thought the Planter,” said Robert Smalls when he reached the deck of the USS Onward outside of Charleston Harbor in the late spring of 1862, “might be of some use to Uncle Abe.” Smalls, the pilot, had, along with his crew, stolen the Confederate ship Planter right out from under the Confederate guns by mimicking the Planter’s captain—Smalls knew the usual signals for leaving the harbor, and by the half-light of dawn he looked sufficiently like that officer to secure permission from the sentries at Sumter. (He also knew enough to avoid the minefields, since he’d helped to lay them.) Upon reaching the Union blockade ships on the open Atlantic, Smalls surrendered his vessel to the United States officer in command.

After the war—and a number of rather exciting exploits—Smalls came back to Beaufort, where he bought the house of his former master—a man named McKee—with the bounty money he got for stealing the Planter, and got elected to both the South Carolina House of Representatives and the South Carolina Senate, founding the Republican Party in South Carolina along the way. In office he wrote legislation that provided South Carolina with the first statewide public school system in the history of the United States, and then he was elected to the United States House of Representatives, where he became the last Republican congressman from his district until 2010.

Historical tourism in Beaufort thus means confronting the fact that the whole of the Lowcountry, as it’s called down here, was the center of secessionism. That’s partly why, in a lot of South Carolina, the war ended much earlier than in most of the South: the Union invaded by sea in late 1861—80 years before Normandy, in a fleet whose size would not be rivaled until after Pearl Harbor. That’s also why, as the British owner of a bar in the town I’m staying in, Bluffton, notes, the first thing the Yankees did when they arrived in Bluffton was burn it down. The point was to make a statement similar to the larger one Sherman would later make during his celebrated visit to Atlanta.

The reason for such vindictiveness was that the slaveowners of the Lowcountry stood at what their longtime senator, John Calhoun, had long before called the “furthest outpost” of slavery’s empire. They not only wanted to continue slavery, they wanted to expand its reach—that’s the moral, in fact, of the curious tale of the yacht Wanderer, funded by a South Carolinian. It’s one of those incidents that happened just before the war, whose meaning would only become clear after the passage of time—and Sherman.

The Wanderer was built in 1857 on Long Island, New York, as a pleasure yacht. Her first owner, Col. John Johnson, sailed her down the Atlantic coast to New Orleans, then sailed her back to New York, where one William Corrie, of Charleston, South Carolina, bought her. Corrie made some odd alterations to the ship—adding, for instance, a 15,000-gallon water tank. The work attracted the attention of federal officers aboard the steam revenue cutter USS Harriet Lane, who seized the ship as a suspected slaver when she attempted to leave New York harbor on 9 June 1858. But there was no evidence of her owner’s intentions beyond the alterations themselves, and so the Wanderer was released. She arrived in Charleston on 25 June, completed her fitting out as a slave ship and, after a stop in Port of Spain, Trinidad, sailed for the Congo on 27 July. The Wanderer returned to the United States on 28 November, at Jekyll Island in Georgia, still in the Lowcountry.

The ship bore a human cargo.

Why, though, would William Corrie—and his partners, including the prominent Savannah businessman Charles Lamar, a member of a family that “included the second president of the Republic of Texas, a U.S. Supreme Court justice, and U.S. Secretary of the Treasury Howell Cobb”—have taken so desperate a measure as to attempt to smuggle slaves into the United States? The slave trade had been banned in the United States since 1808, as provided for in the United States Constitution, which is to say that importing human beings for the purpose of slavery was a federal crime. The punishment was death by hanging.

Ultimately, Corrie and his partners evaded conviction—there were three trials, all held in Savannah, all of which ended with a Savannah jury refusing to convict their local grandees. Oncoming events would, to be sure, soon make the whole episode beside the point. Still, Corrie and Lamar could not have known that, and on the whole the desperate crime seems rather a long chance to take. But the syndicate, led by Lamar, had two motives: one economic, and the other ideological.

The first motive was grasped by Thomas Jefferson, of all people, as early as 1792. Jefferson memorialized his thought, according to the Smithsonian magazine, “in a barely legible, scribbled note in the middle of a page, enclosed in brackets.” The earth-shaking, terrible thought was this: “he was making a 4 percent profit every year on the birth of black children.” In other words, like the land which his slaves worked, every year brought an increase to the value of Jefferson’s human capital. The value of slaves would, with time, become almost incredible: “In 1860,” historian David Brion Davis has noted, “the value of Southern slaves was about three times the amount invested in manufacturing or railroads nationwide.” And that value was only increased by the ban on the slave trade.

First, then, the voyage of the Wanderer was an act of economic arbitrage: it sought to exploit the price difference between slaves in Africa and slaves in the United States. But it was also an act of provocation—much like John Brown’s raid on Harper’s Ferry less than a year after the Wanderer landed in Georgia. Like that more celebrated case, the sailing of the Wanderer was meant to demonstrate that slave smuggling could be done—it was meant to inspire further acts of resistance to the federal ban on the importation of slaves.

Lamar was, after all, a Southern “firebrand”—a type common in the Lowcountry and represented in print by the Charleston Mercury. The firebrands advocated resuming the African slave trade: essentially, the members of this group believed that government shouldn’t interfere with the “natural” process of the market. Southerners like Lamar and Corrie, thus, were the ancestors of those who today believe that, in the words of Italian sociologist Marco d’Eramo, “things would surely improve if only we left them to the free play of market forces.”

The voyage of the Wanderer was, in that sense, meant to demonstrate the thesis that, as Thomas Frank has observed of these men’s ideological descendants, “it is the nature of government enterprises to fail.” The mission of the slave ship, that is, could be viewed as on a par with what Frank calls conservative cautions “against bringing top-notch talent into government service,” or with piling up “an Everest of debt in order to force the government into crisis.” That the yacht’s demonstration was wholly contrived must, of course, have been lost on the Wanderer’s sponsors.

Surely, then, it isn’t difficult to see the affinity between a certain kind of South Carolinian thought and that of some wealthy people today. What’s interesting about the whole episode, at least from today’s standpoint, is how it was ultimately defeated: by what, at least from one perspective, appears to be another case of arbitrage. In this case the arbitrageur was named Abraham Lincoln, and he laid out what he was going to arbitrage long before the voyage of the Wanderer: in a speech at Peoria in the autumn of 1854, the speech that marked Lincoln’s return to politics after his eclipse in the late 1840s following his opposition to the Mexican War. In that speech, Lincoln laid the groundwork for the defeat of slavery by describing how slavery had artificially interfered with a market—the one whose currency is votes.

The crucial passage of the Peoria speech comes when Lincoln compares two states: South Carolina being one, likely not so coincidentally, and Maine being the other. Both states, Lincoln observes, are equally represented in Congress: “South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine.” “Thus in the control of the government,” Lincoln concludes, “the two States are equals precisely.” But, Lincoln goes on to note, observe the numbers of their free people: “Maine has 581,813—while South Carolina has 274,567.” Somehow, then, the Southern voter “is more than double of any one of us in this crowd” in terms of control of the federal government: “it is an absolute truth, without an exception,” Lincoln said, “that there is no voter in any slave State, but who has more legal power in the government than any voter in any free State.” There was, in sum, a discrepancy in value—or what economists might call an “inefficiency.”
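
Using Lincoln’s own census figures, the premium on a Southern vote can be made precise:

\[
\frac{581{,}813 \ \text{(Maine)}}{274{,}567 \ \text{(South Carolina)}} \approx 2.12
\]

Each South Carolina voter, that is, exercised a little more than twice the federal power of his counterpart in Maine, just as Lincoln said.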

The reason for that discrepancy was, as Lincoln also observed, “in the Constitution”—by which he referred to what’s become known as the “Three-Fifths Compromise,” Article I, Section 2, Clause 3: “Representatives and direct Taxes shall be apportioned among the several States … according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons … [and] three fifths of all other Persons.” By this means, Southern states received representation in the federal government in excess of the number of their free inhabitants: in addition to the increase in wealth obtained through the reproduction of their slaves, then, slaveowners also benefitted politically.
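Stated as a formula (and setting aside the clause’s further adjustments for indentured servants and “Indians not taxed”), a state’s apportionment basis B was computed from its free population F and its enslaved population S:

\[ B = F + \tfrac{3}{5}\,S, \]

so that every hundred enslaved persons, none of whom could vote, nonetheless added sixty “persons” to the count on which House seats and presidential electors were allotted.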

In an article for the New York Times’ series Disunion (“The Census of Doom”), which followed the Civil War as it unfolded, Adam Goodheart observes that over the decade between the 1850 United States Census and the 1860 edition of the same, the population of the North had exploded by 41 percent, while that of the South had grown by only 27 percent. (By comparison, Goodheart points out, between 2000 and 2010 the United States population grew by just 9.7 percent.) To take one state as an example: in less than 25 years, one Northern state—Wisconsin—had grown by nearly 6400 (sic) percent. Wisconsin would, of course, go heavily for Lincoln in the presidential election—Lincoln would be the first president ever elected without the support of a single Southern state. (He wasn’t even on the ballot in most.) One Northern newspaper editor, Goodheart notes, smugly observed that “The difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” Lincoln’s election confirmed that the political power the Southern states had held since the nation’s founding, with the help of an electoral concession, had been broken by a wave of new Northern voters.

If read in that light, then, the Thirteenth and Fourteenth Amendments to the Constitution, which ended slavery and the Three-Fifths Clause respectively, could be understood as a kind of price correction: the two amendments effectively ended the premium that the Constitution had until then placed on Southern votes. Lincoln becomes a version of Brad Pitt’s character in the movie of Michael Lewis’ most famous book—Billy Beane in Moneyball. Just as Billy Beane saw—or was persuaded to see—that batting average was overvalued and on-base percentage undervalued, thus creating an arbitrage opportunity in players who walked a lot, Lincoln saw that Southern votes were overvalued and Northern ones undervalued, and that (sooner or later) the two had to converge toward what economists would call “fundamental value.”

That concept is something golf teaches well. In golf, there are no differences in value to exploit: every shot has the same fundamental value. On our first tee that day, which was the tenth hole at Oldfield Country Club, my golfer didn’t blow his first shot out of bounds, though I had fully expected him to. He came close, though: the ball flew directly into the trees, a slicing, left-to-right block. I took off after everyone had teed off: clearly the old guy marshaling the hole wasn’t going to be much help. But I found the ball easily enough, and my player pitched out and made a great par save. The punch-out from the trees counted just the same as an approach shot would have, or a second putt.

Understanding that notion of fundamental value, which golf (among other human pursuits) teaches, allows a further understanding: the “price correction” undertaken by Lincoln wasn’t simply a one-time act, because the value of an American vote still varies across the nation today. According to the organization FairVote, as of 2003 a vote in Wyoming was more than three times as valuable as, say, my vote as a resident of the state of Illinois. Even today—as the Senate’s own website notes—“senators from the twenty-six smallest states, who (according to the 2000 census) represent 17.8% of the nation’s population, constitute a majority of the Senate.” It’s a fact the men of the Secession Golf Club might prefer everyone ignored—because it just may be why 93 percent of the income gains since the Great Recession have gone to the wealthy.
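One way to reconstruct a figure like FairVote’s (my own rough calculation from the 2000 census and the Electoral College counts that followed it, namely 21 electors for Illinois and 3 for Wyoming, not FairVote’s published method): Wyoming’s roughly 494,000 residents worked out to about 165,000 people per elector, while Illinois’ roughly 12.4 million worked out to about 591,000 per elector, and

\[ \frac{591{,}000}{165{,}000} \approx 3.6, \]

which is to say a Wyoming ballot weighed more than three times mine.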

To take a small example of how the two points might be connected, a recent New Yorker piece has pointed out that “in the fifth year of his Presidency, Obama has failed to place even a single judge on the D.C. Circuit, considered the second most important court in the nation” because the Senate has refused to confirm any of his nominees. This despite the fact that there are now four vacancies out of eleven seats. Why? Because the Senate’s rules allow a minority of Senators—or even just one, in the case of what’s known as the “hold”—to interfere with the will of the majority: an advantage Republican senators have not hesitated to seize.

Nearly twenty years after the publication of Bend Sinister, Nabokov chose to write an introduction in which he endeavored to explain the novel’s name. “This choice of title,” he wrote, “was an attempt to suggest an outline broken by refraction, a distortion in the mirror of being, a wrong turn taken by life, a sinistral and sinister world.” If there are wrong turns, of course, that would suggest that there are right ones; if there are “distortions,” then there are clarities: that is, there is an order to which events will (eventually, sooner or later) return. It’s a suggestion that is not fashionable these days: Nabokov himself isn’t read much today for his own beliefs so much as for the confirmation his novels can provide for one or another thesis. But if he is right—if golf’s belief in “fundamental value” is right—then there must necessarily come some correction to this ongoing problem of the value of a vote.

The location of the new Fort Sumter, however, remains unknown.

Golf Architecture as Narrative Art

You think you can leave the past behind,
You must be out of your mind.
If you think you can simply press rewind,
You must be out of your mind, son
You must be out of your mind.
—The Magnetic Fields, “You Must Be Out of Your Mind,” Realism

I sometimes get asked what the biggest difference between the amateur and the professional games is, and about as often I want to say, “Amateurs always start on the first tee.” This is a smart-alecky remark, but it isn’t just smart-alecky. For over a century the United States Open sent every player off from the first tee during the first two days of the tournament, a tradition that ended in 2002 at Bethpage in New York. Now only the Masters and the Open Championship in the U.K. still start everyone on the first tee every day. Mostly nobody notices, in part because televised golf encourages a kind of schizoid viewing habit: we skip from hole to hole, shot to shot, seemingly at random, without order.

“Here’s Ernie at 11,” the announcer will say, never mind that the last thing we saw was the leaders hitting their approach shots into 7, and right before that Player X finishing up at 18. All of this treats the golf course like a deck of cards to be dealt at random: which is precisely the opposite of how the amateur player sees a golf course, one hole at a time.

Pro golf, both on television and as the players themselves experience it, is different. A golf course, like a book, is designed to be played in a certain order, which makes golf architecture different from other kinds of architecture, and from spatial arts like painting or sculpture, however much the brochures and the television announcers gush about this week’s “breathtaking beauty.” Golf architecture has just as much in common with temporal arts like music or narrative: what’s important isn’t just what’s happening now but what’s happened before.

Did the architect create the illusion that those bunkers weren’t a problem on the last hole, tempting you to play safe on this one—or vice versa? Maybe two greens with similar-looking slopes play differently because the grain runs differently on each. There are a lot of games architects can play that take advantage of what we’ve learned—or thought we learned—on previous holes.

Mostly, though, the obvious tricks are easily discovered, or only work once. Courses like that are like murder mysteries spoiled once somebody tells you just how Mr. Green bought it from a rutabaga poisoned by the maid, who turns out to be employed by… and who the hell cares. A course worth playing is one that continues to bewilder even after you know its secret. Nobody gives a damn that “Rosebud” was the sled—Citizen Kane is still good. Good architecture, I would submit, tells a story.

Maybe the best example of what I mean is Riviera Country Club in Los Angeles, where the tour plays the L.A. Open every year. Widely acclaimed as an architect’s dream of a course, Riviera is also remarkably fun to play while remaining one of the toughest tracks the professionals face all year. The first tee sits, quite literally, a few steps from the clubhouse, on a patch of grass high above the rest of the course. The tee shot drops out of the sky just as you do from the heights—Icarus (or Lucifer) plummeting, as Milton says, “toward the coast of Earth beneath,/Down from th’ Ecliptic, sped with hop’d success.” The first is the easiest hole on the course, a par five with not only an elevated tee but a wide landing zone to receive the shot. The green is wide, and in general it’s a lullaby of a hole.

The second, however, turns the tables quickly. It’s a long dogleg par four with out-of-bounds (the driving range) left and trees right: the tee shot is either to a narrow strip of fairway or, riskier, over the neck of the dogleg and the trees on the right. Either way, the approach is to a very narrow green with deep bunkers left and a hillside of very tall rough on the right. The professionals hold a four here as dear as a five is cheap on the first hole. The second usually plays as the toughest hole on the course.

Whereas the first hole rewards the bomber, the second favors the straight-shooter. In other words, what worked on the first hole is exactly what’s penalized on the next, and vice versa. Riviera continues like this all the way around, giving and taking away options throughout and always mixing it up: what worked on the last hole won’t necessarily work on the next; in fact, sticking to the same strategy or style of play is exactly what leads to big numbers.

What’s really astonishing about Riviera is that it doesn’t matter whether you know what’s coming: knowing that the first hole is easy, and why, and that the second is hard, and why, doesn’t change things. There isn’t any shortcut—such as is often found in the videogame Golden Tee, for instance—that, once discovered, dissolves the problem the next hole presents. That ability to confound is rare in a golf course. Most courses reward a particular style: Jack Nicklaus’ courses are notorious for rewarding the high fade, the shot Nicklaus liked to hit in his prime.

The great courses, though, not only mix up styles, but also tell a story. As Rob says in High Fidelity, “You gotta kick it off with a killer to grab attention. Then you gotta take it up a notch. But you don’t want to blow your wad. So then you gotta cool it off a notch. There are a lot of rules.” Rob’s point owes something, perhaps, to Stanley Fish, the Miltonist, who argued in Surprised by Sin that the way Paradise Lost works is to ensnare the reader constantly, setting up one expectation after another and dashing each in turn.

At Riviera, for instance, the first two holes raise hopes and then dash them—or conceivably raise them to a higher pitch, should you somehow make a miraculous birdie on the second. The rest of the course continues to toy with a player’s mind. Two years ago Geoff Ogilvy, the Australian pro I’ve written about before, talked with Geoff Shackelford of Golf Digest about the short 10th hole and how important that hole’s place in the routing is:

“The eighth and ninth holes are very hard, but you know that the 10th and 11th [a reachable par 5] offer a couple of birdie or even eagle chances. So [the 10th hole] sits in the round at the perfect time,” says Ogilvy. “It’s definitely a much better hole than it [would be] if you teed off there to start your round when the dynamics just aren’t nearly the same.”

Sequence matters, in other words, even though, as at Riviera, players are guaranteed to start at least one round on the 10th hole, since the first two days use split tees.

Medinah, where I usually work, takes a lot of crap from the big-name golf writers on just that point: Bradley Klein, for instance, who is not only the architecture critic for Golfweek but also a former PGA Tour caddie and a professor of political science, doesn’t think much of the course. In 1999 he called it “stunningly mediocre.” Klein doesn’t convince me. Maybe it’s because I am—perhaps more than anyone on the planet—familiar with the course, but it might also be that Klein either isn’t aware of the role of narrative in golf architecture, or isn’t familiar enough with Medinah to understand its narrative.

There’s a stretch of holes, for instance, that I think illustrates what I’ll call the High Fidelity (or Paradise Lost) principle pretty well: the ninth through the eleventh. The first and last holes of this stretch are both dogleg-left four pars, sandwiching a long five par that runs directly into the prevailing wind. To the unwary, the ninth and the eleventh look similar: both ask you to choose between trying to carry the dogleg with a driver off the tee and laying up with some other club. But the tee shot on nine is into the prevailing wind and uphill, while the eleventh plays downwind and downhill. What worked on the first won’t work on the other. In addition, the tenth is so long, and so dead into the wind, that the player usually concludes more club is needed on the eleventh tee—and that’s usually exactly the wrong choice.

Medinah underwent a renovation just last year—again—so I will see how the changes came out and report back here. What I wanted to do first, though, is describe how I’m going to judge those changes: by evaluating the golf course through the story it tells. Playing the course as the architect meant it to be played is one advantage the amateur has over the professional: the PGA Tour isn’t far removed from the shotgun starts that are a feature of your typical pro-am, where it doesn’t matter what hole you start on. But enjoying the structure, the internal logic, of a course’s design is not only one of the game’s pleasures; it’s also, I think, a means of improving your own golf, since understanding what the architect wants is a big step toward lowering your score. “But to convince the proud what signs avail?” Milton asks in Paradise Lost, “Or wonders move the obdurate to relent?” Reading the signs in order is, I think, the amateur’s one advantage over the professional—a pleasure not unlike the bite of a noted apple.