His Dark Materials

But all these in their pregnant causes mixed
Confusedly, and which thus must ever fight.
Unless the Almighty Maker them ordain
His dark materials to create more worlds
—Paradise Lost II, 913-16

One of the theses of what’s known as the “academic Left” in America is that “nothing is natural,” or, as the literary critic (and “tenured radical”) Stanley Fish more properly puts it, “the thesis that the things we see and the categories we place them in … have their source in culture rather than nature.” It’s a thesis, however, that seems to be obviously wrong in the case of professional golf. Without taking the time to do a full study of the PGA Tour’s website, which does list place of birth, it seems indubitable that most of today’s American tour players originate south of the Mason-Dixon line: either in the former Confederacy or in other Sun Belt states. Thus it seems difficult to argue that there’s something about “Southern culture” that gives Southerners a leg up toward the professional ranks, rather than just the opportunity to play golf more days of the year.

Let’s just look, in order to keep things manageable, at the current top ten: Jordan Spieth, this year’s Masters winner, is from Texas, while Jimmy Walker, in second place, is from just up the road in Oklahoma. Rory McIlroy doesn’t count (though he is from Northern Ireland, for what that’s worth), while J.B. Holmes is from Kentucky. Patrick Reed is also from Texas, and Bubba Watson is from Florida. Dustin Johnson is from South Carolina, while Charley Hoffman is from southern California. Hideki Matsuyama is from Ehime, Japan, which is located on Shikoku, one of the southern islands of the archipelago, while Robert Streb rounds out the top ten and keeps the score even between Texas and Oklahoma.

Not until we reach Ryan Moore, in the fifteenth spot, do we find a golfer from an indisputably Northern state: Moore is from Tacoma, Washington. Washington, however, was not admitted to the Union until 1889; not until the seventeenth spot do we find a golfer from a Civil War-era Union state besides California. Gary Woodland, as it happens one of the longest drivers on tour, is from Kansas.

This geographic division has largely been stable in the history of American golf. It’s true of course that many great American golfers were Northerners, particularly at the beginnings of the game (like Francis Ouimet, “Chick” Evans, or Walter Hagen—from Massachusetts, Illinois, and New York respectively), and arguably the greatest of all time was from Ohio: Jack Nicklaus. But Byron Nelson and Ben Hogan were Texans, and of course Bobby Jones, one of the top three golfers ever, was a Georgian.

Yet while it might be true that nearly all of the great players are Southern, the division of labor in American golf is that nearly all of the great courses are Northern. In the latest Golf Digest ranking for instance, out of the top twenty courses only three—Augusta National, which is #1, Seminole in Florida, and Kiawah in South Carolina—are in the South. New York (home to Winged Foot and Shinnecock, among others) and Pennsylvania (home to Merion and Oakmont) had the most courses in the top twenty; other Northern states included Michigan, Illinois, and Ohio. If it were access to great courses that made great golfers, in other words—a thesis that would appear to have a greater affinity with the notion that “culture,” rather than “nature,” was what produced great golfers—then we’d expect the PGA Tour to be dominated by Northerners.

That of course is not so, which perhaps makes it all the stranger that, if looked at by region, it is usually “the South” that champions “culture” and “the North” that champions “nature”—at least if you consider, as a proxy, how evolutionary biology is taught. Consider for instance a 2002 map generated by Lawrence S. Lerner of California State University at Long Beach:

[Map: where and how evolution is taught in the United States, by state]

(Link here: http://bigthink.com/strange-maps/97-nil-where-and-how-evolution-is-taught-in-the-us). I realize that the map may be dated now, but still—although with some exceptions—it generally shows that evolutionary biology is at least a controversial idea in the states of the former Confederacy, while Union states like Connecticut, New Jersey, and Pennsylvania are ranked by Professor Lerner as “Very good/excellent” in the matter of teaching Darwinian biology. In other words, it might be said that the states that are producing the best golfers are both the ones with the best weather and a belief that nature has little to do with anything.

Yet, as Professor Fish’s remarks above demonstrate, it’s the “radical” humanities professors of the nation’s top universities who are the foremost proponents of the notion that “culture” trumps “nature”—a fact that the cleverest creationists have not let slide. An article entitled “The Postmodern Sin of Intelligent Design Creationism” in a 2010 issue of Science and Education, for instance, lays out how “Intelligent Design Creationists” “try to advance their premodern view by adopting (if only tactically) a radical postmodern perspective.” In Darwinism and the Divine: Evolutionary Thought and Natural Theology, Alister McGrath not only argues “that it cannot be maintained that Darwin’s theory caused the ‘abandonment of natural theology,’” but also approvingly cites Fish: “Stanley Fish has rightly argued that the notion of ‘evidence’ is often tautologically determined by … interpretive assumptions.” So there really is a sense in which the deepest part of the Bible Belt fully agrees with the most radical scholars at Berkeley and other top schools.

In Surprised By Sin: The Reader in Paradise Lost, Stanley Fish’s most famous work of scholarship, Fish argues that Satan is evil because he is “the poem’s true materialist”—and while Fish might say that he is merely reporting John Milton’s view, not revealing his own, still it’s difficult not to take away the conclusion that there’s something inherently wrong with the philosophical doctrine of materialism. (Not to be confused with the vulgar notion that life consists merely in piling up stuff, the philosophic version says that all existence is composed only of matter.) Or with the related doctrine of empiricism: “always an experimental scientist,” Fish has said more recently in the Preface to Surprised By Sin’s Second Edition, Satan busies himself “by mining the trails and entrails of empirical evidence.” Fish of course would be careful to distance himself from more vulgar thinkers regarding these matters—a distance that is there, sure—but it’s easy to see why creationists mine him for their own views.

Now, one way to explain that might be that both Fish and his creationist “frenemies” are drinking from the Pure Light of the Well of Truth. But there’s a possible materialistic candidate to explain just why humanities professors might end up with views similar to those of the most fundamentalist Christians: a similar mode of production. The political scientist Anne Norton remarks, in a book about the conservative scholar Leo Strauss, that the pedagogical technique pursued by Strauss—reading “a passage in a text” and asking questions about it—is also one pursued in “the shul and the madrasa, in seminaries and in Bible study groups.” At the time of Strauss’ arrival in the United States as a refugee from a 1930s Europe about to be engulfed in war, “this way of reading had fallen out of favor in the universities,” but as a result of Strauss’ career at the University of Chicago, along with those of the philosopher Mortimer Adler (who co-founded the Great Books program) and Robert Hutchins, it’s become at least a not-untypical pedagogical method in the humanities since.

At the least, that mode of humanistic study would explain what the philosopher Richard Rorty meant when he repeated Irving Howe’s “much-quoted jibe—‘These people don’t want to take over the government; they just want to take over the English Department.’” It explains, in other words, just how the American left might have “become an object of contempt,” as Rorty says—because it is a left that no longer believes that “the vast inequalities within American society could be corrected by using the institutions of a constitutional democracy.” How could it, after all, given a commitment against empiricism or materialism? Taking a practical perspective on the American political machinery would require taking on just the beliefs that are suicidal if your goal is to achieve tenure in the humanities at Stanford or Yale.

If you happen to think that most things aren’t due to the meddling of supernatural creatures, and you’ve given up on thoughts of tenure because you dislike both creationist nut-jobs and that “largely academic crowd cynical about America, disengaged from practice, and producing ever-more-abstract, jargon-ridden interpretations of cultural phenomena,” while at the same time you think that putting something in the place of God called “the free market”—which is what, exactly?—isn’t the answer either, why, then, the answer is perfectly natural.

You are writing about golf.


Instruments of Darkness

 

And oftentimes, to win us to our harm,
The instruments of darkness tell us truths …
—William Shakespeare
    The Tragedy of Macbeth
Act I, scene 3, 132-33 (1606)

 

This year’s Masters demonstrated, once again, the truism that nobody watches golf without Tiger Woods: last year’s Masters, played without Tiger, had the lowest ratings since 1957, while the ratings for this year’s Saturday round (featuring a charging Woods) were up by nearly half. So much is unsurprising; what was surprising, perhaps, was the reappearance of a journalistic fixture from the days of Tiger’s past: the “pre-Masters Tiger hype story.” It’s a recurrence that suggests Tiger may be taking cues from another ratings monster: the television series Game of Thrones. But if so—with a nod to Ramsay Snow’s famous line in the show—it suggests that Tiger himself doesn’t think his tale will have a happy ending.

The prototype of the “pre-Masters” story was produced in 1997, the year of Tiger’s first Masters win: before that “win for the ages,” it was widely reported how the young phenom had shot a 59 during a practice round at Isleworth Country Club. At the time the story seemed innocuous, but in retrospect there are reasons to interrogate it more deeply—not to say it didn’t happen, exactly, but to question whether it was released as part of a larger design. After all, Tiger’s father Earl—still alive then—would have known just what to do with the story.

Earl, as all golf fans know, created and disseminated the myth of the invincible Tiger to anyone who would listen in the late 1990s: “Tiger will do more than any other man in history to change the course of humanity,” Gary Smith quoted him saying in the Sports Illustrated story (“The Chosen One”) that, more than any other, sold the Gospel of Woods. There is plenty of reason to suspect that the senior Woods deliberately created this myth as part of a larger campaign: because Earl, as a former member of the U.S. Army’s Green Berets, knew the importance of psychological warfare.

“As a Green Beret,” writes John Lamothe in an academic essay on both Woods, elder and junior, Earl “would have known the effect … psychological warfare could have on both the soldier and the enemy.” As Tiger himself said in a 1996 interview for Orange Coast magazine—before the golfer put up a barrier between himself and the press—“Green Berets know a lot about psychological torture and things like that.” Earl for his part remarked that, while raising Tiger, he “pulled every dirty, nasty trick I could remember from psychological warfare I learned as a Green Beret.” Both Woods described this training as a matter of rattling keys or ripping Velcro at inopportune moments—but it’s difficult not to wonder whether it went deeper.

At the moment of their origin in 1952, after all, the Green Berets, or Special Forces, were a subsection of the Psychological Warfare Staff at the Pentagon: psychological warfare, in other words, was part of their founding mission. And as Lamothe observes, part of the goal of psychological warfare is to create “confidence” in your allies “and doubt in the competitors.” As early as 2000, the sports columnist Thomas Boswell was describing how Tiger “tries to imprint on the mind of every opponent that resistance is useless,” a tactic that Boswell claimed the “military calls … ‘overwhelming force’”—and a tactic that is far older than the game of golf. Consider, for instance, a story from golf’s homeland of Scotland: the tale of the “Douglas Larder.”

It happened at a time of year not unfamiliar to viewers of the Masters: Palm Sunday, in April of 1308. The story goes that Sir James Douglas—an ally of Robert the Bruce, then in rebellion against the English crown—returned that day to his family’s home, Douglas Castle, which had been seized by the English. Taking advantage of the holiday, Douglas and his men—essentially, a band of guerrillas—slaughtered the English garrison within the church where they worshipped, then beheaded them, ate the Easter feast the Englishmen had no more use for, and subsequently poisoned the castle’s wells and destroyed its supplies (the “Larder” part of the story’s title). Lastly, Douglas set the English soldiers’ bodies afire.

To viewers of the television series Game of Thrones, or readers of the series of books it is based upon (A Song of Ice and Fire), the story might sound vaguely familiar: the “Douglas Larder” is, as popular historian William Rosen has pointed out, one source of the event known from the television series as the “Red Wedding.” Although the television event also borrows from the medieval Scottish “Black Dinner” (which is perhaps closer in terms of setting) and the later incident known as the Massacre of Glencoe, still the “Red Wedding” reproduces the most salient details of the “Douglas Larder.” In both, the attackers take advantage of their prey’s reliance on piety; in both, the bodies of the dead are mutilated in order to increase the monstrous effect.

To a modern reader, such a story is simply a record of barbarism—but that forgets that medieval people, though far less educated, were just as intelligent as nearly anyone alive today. Douglas’ actions were not meant for horror’s sake, but to send a message: the raid on the castle “was meant to leave a lasting impression … not least upon the men who came to replace their dead colleagues.” Acts like his attack on his own castle demonstrate how the “Black Douglas”—“mair fell than wes ony devill in hell” according to a contemporary account—was “an early practitioner of psychological warfare”: he knew how “fear alone could do much of the work of a successful commander.” It seems hardly credible to think Earl Woods—a man who’d been in combat in the guerrilla war of Vietnam—did not know the same lesson. Nor is it credible to think that Earl didn’t tell Tiger about it.

Certainly, Tiger himself has been a kind of Douglas: he won his first Masters by 12 shots, and in the annus mirabilis of 2000 he won the U.S. Open at Pebble Beach by 15. Displays like that, many have thought, functioned similarly, if less macabrely, to Douglas’ attacks. The effect has even been documented academically: in 2008’s “Dominance, Intimidation, and ‘Choking’ on the PGA Tour,” professors Robert Connolly and Richard Rendleman found that being paired with Tiger cost other tour pros nearly half a shot per round from 1998 to 2001. The “intimidation factor,” that is, has been quantified—so it seems jejune at best to think that nobody connected to Tiger would have called his attention to the research, even if he had not already been aware of the effect.

Releasing a story prior to the Masters, then, can easily be seen as part of an attempt to revive Tiger’s heyday. But what’s interesting about this particular story is its difference from the 1997 version: then, Tiger just threw out a raw score; now, it’s being dressed in a peculiarly complicated costume. As retailed by Golf Digest’s Tim Rosaforte on the Tuesday before the tournament, the story goes like this: Tiger had “recently shot a worst-ball 66 at his home course, Medalist Golf Club.” In Golf Digest, Alex Myers in turn explained that “a worst-ball 66 … is not to be confused with a best-ball 66 or even a normal 66 for that matter,” because what “worst-ball” means is that “Woods played two balls on each hole, but only played the worst shot each time.” Why not just say, as in 1997, Tiger shot some ridiculously low number?

The answer, I think, can be understood by way of the “Red Wedding”: just as George R.R. Martin, in order to write the A Song of Ice and Fire books, has revisited and revised many episodes of medieval history, so too is Tiger attempting to revisit his own past—a conclusion that would be glib were it not for the very make-up of this year’s version of the pre-Masters story itself. After all, to play a “worst-ball” is to time-travel: it is, in effect, to revise—or rewrite—the past. Not only that, but—and in this it is very much like both Scottish history and Game of Thrones—it is also to guarantee a “downer ending.” Maybe Tiger, then, is suggesting to his fans that they ought to pay more attention.

Thought Crimes

 

How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?
—Sherlock Holmes
    The Sign of Four (1890)

 

Whence heavy persecution shall arise
On all, who in the worship persevere
Of spirit and truth; the rest, far greater part,
Will deem in outward rites and specious forms
Religion satisfied; Truth shall retire
Bestuck with slanderous darts, and works of faith
Rarely be found: So shall the world go on …
—John Milton
   Paradise Lost
   Book XII, 531-37

 

When Tiger Woods, just after four o’clock Eastern time, hit a horrific duck-hook tee shot on Augusta National’s 13th hole during the third round of the Masters tournament Saturday, the golfer sent one of George Carlin’s “seven dirty words” after it, live on air. About an hour later, around a quarter after five, the announcer Ian Baker-Finch caught himself from uttering a taboo phrase: although he began by saying “back,” the Australian quickly corrected himself by saying “second nine.” To the novice Masters viewer the two misuses of language might appear quite different (Baker-Finch’s slip, that is, being far less offensive), but longtime viewers are aware that, had Baker-Finch not saved himself, his error would have been the more serious incident—to the extent, in fact, that he might have lost his job. Just why that is so is difficult to explain to outsiders unfamiliar with Augusta National’s particular vision of decorum; it may, however, be explained by one of the broadcast’s few commercials: an advert whose tagline connects a golf commentator’s innocent near-mistake to an argument about censorship conducted at the beginning of this year—in Paris, at the business end of a Kalashnikov.

France is a long way from Georgia, however, so let’s begin with how what Ian Baker-Finch almost said would have been far worse than Tiger’s f-bombs. In the first place that is because, as veterans of watching the Masters know, the announcing team is held to very strict standards largely unique to this sporting event. Golf is, in general, far more concerned with “decorum” and etiquette than other sports—it is, as its enthusiasts often remark, the only one where competitors regularly call penalties on themselves—but the Masters tournament examines the language of its broadcasters to an extent unknown even at other golf tournaments.

In 1966, for example, broadcaster Jack Whitaker—as described in the textbook Sports Media: Planning, Production, and Reporting—“was canned for referring to Masters patrons as a ‘mob,’” while in 1994 Gary McCord joked (as told by Alex Myers in Golf Digest) “that ‘bikini wax’ is used to make Augusta National’s greens so slick”—and was unceremoniously dumped. Announcers at the Masters, in short, are well aware they walk a fine line.

Hence, while Baker-Finch’s near-miss was by no means comparable to McCord’s attempts at humor, it was serious because it would have broken one of the known “Augusta Rules,” as John Feinstein called them in Moment of Glory: The Year Underdogs Ruled Golf. “There are no front nine and back nine at Augusta but, rather, a first nine and a second nine,” Feinstein wrote; a rule that, it’s said, developed because the tournament’s founders, the golfer Bobby Jones and the club chairman Clifford Roberts, felt “back nine” sounded too close to “back side.” The Lords of Augusta, as the club’s members are sometimes referred to, will not stand for “vulgarity” from their announcing team—even if the golfers they are watching are sometimes much worse.

Woods, for example (as the Washington Post reported), “followed up a bad miss left off the 13th tee with a curse word that was picked up by an on-course microphone, prompting the CBS announcers to intone, ‘If you heard something offensive at 13, we apologize.’” Yet even had Baker-Finch uttered the unutterable, he would only have suggested what Woods baldly verbalized; all the same, it’s unimaginable that Woods could suffer the fate a CBS announcer would, or be penalized in any way. The uproar that would follow if, for instance, the Lords decided to ban Tiger from further tournaments would make all previous golf scandals appear tame.

The difference in treatment could conceivably be justified by the fact that Woods is a competitor (and four-time winner) in the tournament while announcers are ancillary to it. In philosophic terms, players are essential while announcers are contingent: players just are the tournament because without them, no golf. That isn’t as possible to say about any particular broadcaster (though, when it comes to Jim Nantz, a fixture of the broadcast since 1986, it might be close). From that perspective then it might make sense that Tiger’s “heat-of-the-moment” f-bombs are not as significant as a slip of the tongue by an announcer trained to speak in public could be.

Such, at least, might be a rationale for the differing treatment accorded golfers and announcers: so far as I am aware, neither the golf club nor CBS has come forward with an explanation regarding the difference. It was while I was turning this over in my mind that one of the tournament broadcast’s few commercials came on—and I realized just why the difference between Tiger’s words and, say, Gary McCord’s in 1994 caught in my brain.

The ad in question consisted of different people reciting, over and over again, a line once spoken by IBM pioneer Thomas Watson in 1915: “All of the problems of the world could be settled easily if men were only willing to think.” Something about this phrase—repeated so often it became quite literally like a mantra, defined as a “sacred utterance, numinous sound” by Wikipedia—rattled something in my head, which ignited a slight Internet investigation: it seems that, for IBM, that last word—think—became a catchword after 1915; it was plastered on company ephemera, gave its name to the company magazine, and even, in recent times, became the basis for product names such as the ThinkPad. The sentence, it could be said, is the official philosophy of the company.

As philosophies go it seems inarguable that this is rather a better one than, for instance, one that might demand “silence your enemies wherever possible.” It is, one might say, a hopeful sentence—if only people were willing to use their rationality, the difficult and the intractable could be vanquished. “Think,” in that sense, is a sentiment that seems quite at odds with the notion of censorship: without airing what someone is thinking, it appears impossible to believe that anything could be settled. In order to get people to think, it seems clear that they must be allowed to talk.

Such, at least, is one of the strongest pillars of the concept of “free speech,” as the English and law professor Stanley Fish has pointed out. Fish quotes, as an example of the argument, the Chairman of the National Endowment for the Humanities, James A. Leach, who gave a speech in 2009 claiming that “the cornerstone of democracy is access to knowledge.” In other words, in order to achieve the goal outlined by Watson (solving the world’s problems), it’s necessary to put everyone’s views in the open in order that they might be debated—a notion usually conceptualized, in relation to American law, as the “marketplace of ideas.”

That metaphor traces back to American Supreme Court justice Oliver Wendell Holmes, Jr.’s famous dissent in a case called Abrams v. United States, decided in 1919. “The ultimate good desired,” as Holmes wrote in that case (interestingly, in the light of his theory, against the majority opinion), “is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” That notion, in turn, can (as Fish observes) be followed back to English philosopher John Stuart Mill, and even beyond.

“We can never be sure that the opinion we are endeavoring to stifle is a false opinion,” Mill wrote in his On Liberty, “and if we were sure, stifling it would be an evil still.” Yet further back, the thought connects to John Milton’s Areopagitica, where the poet wrote “Let [Truth] and Falsehood grapple; who ever knew Truth put to the worse in a free and open encounter?” That is, so long as opinions can be freely shared, any problem could in principle be solved—more or less Thomas Watson’s point in 1915.

Let’s be clear, however, what is and what is not being said. That is, the words “in principle” above are important because I do not think that Watson or Mill or Milton or Holmes would deny that there are many practical reasons why it might be impossible to solve problems with a meeting or a series of meetings. No one believes, for instance, that the threat of ISIS could be contained by a summit meeting between ISIS and other parties—the claim that Holmes & Watson (smirk) et al. would make is just that that threat could be solved if only the organization’s leaders would agree to a meeting. Merely objecting that such conceivable meetings are often not practical isn’t, in that sense, a strong objection to the idea of the “idea market”—which asserts that under conditions of what could be called “perfect communication” disagreement is (eventually) impossible.

That however is precisely why Fish’s argument against the “market” metaphor is such a strong one: it is Fish’s opinion that the “marketplace” metaphor is just that—a metaphor, not a bedrock description of reality. In an essay entitled “Don’t Blame Relativism,” in fact, Fish apparently denies “the possibility of describing, and thereby evaluating” everything “in a language that all reasonable observers would accept.” That is, he denies the possibility that is imagined by Thomas Watson’s assertion regarding “[a]ll of the problems of the world”: the idea that, were only everyone reasonable, all problems could be solved.

To make the point clearer, while in Watson’s metaphor (which is also Milton’s and Mill’s and Holmes’), in theory everything can be sorted out if only everyone came to the bargaining table, to Fish such a possibility is not only practically impossible, but also theoretically impossible. Fish’s objection to the “market” idea isn’t just that it is difficult, for instance, to find the right translators to speak to different sides of a debate in their own language, but that even were all conditions for perfect communication met, that would not guarantee the end of disagreement.

It’s important to note at this point that this is a claim Fish needs to make in order to make his argument stick, because if all he does is advance historically based arguments to the effect that at no point in human history has the situation described by Watson et al. ever existed, their partisans can counterclaim that just because no one has yet seen perfect communication, that’s no reason to think it might not someday be possible. Such partisans might, for example, quote Alice Calaprice’s The Quotable Einstein, which asserts that Einstein once remarked that “No amount of experimentation can prove me right; a single experiment can prove me wrong.” Or, as the writer Nassim Nicholas Taleb has put the same point while asserting that it ultimately traces back through John Stuart Mill to David Hume: “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.” In other words, Fish could be right that no such perfect communication has ever existed, but it would be logically inconsistent to try to claim that such evidence implies that it could never be possible.

To engage his opponents, then, Fish must take to the field of “theory,” not just adduce historical examples. That is why Fish cannot just claim that, historically, even regimes that claim to follow the creed of Watson and Holmes and so on in theory do not actually follow that creed in reality, though he does make that argument. He points out, for instance, that even in the Areopagitica, otherwise a passionate defense of “free speech,” Milton allowed that while “free speech” is all well and good for most people most of the time, he does not mean to imply “tolerated popery” (i.e., Catholics), because as that religion (according to Milton) “extirpates all religious and civil supremacies, so itself should be extirpate.”

In other words, Milton explains that anything that threatens the idea of “free speech” itself—as Catholicism, in Milton’s day arguably in the throes of the Inquisition, did so threaten—should not be included in the realm of protected speech, since that “which is impious or evil absolutely against faith or manners no law can possibly permit that intends not to unlaw itself.” And while it might be counterclaimed that in Milton’s time “free speech” was imperfectly realized, Fish also demonstrates that while Catholicism no longer constitutes a threat to modern “free speech” regimes, there are still exceptions to what can be said publicly.

As another American Supreme Court justice, Robert Jackson, would put the point centuries later, “the constitutional Bill of Rights”—including, one presumes, the free-speech-protecting First Amendment—is not “a suicide pact.” Or, as Fish himself put the same point, even today the most tolerant governments still ask themselves, regarding speech, “would this form of speech or advocacy, if permitted to flourish, tend to undermine the very purposes for which our society is constituted?” No government, in other words, can allow the kind of speech that threatens to end the practice of free speech itself.

Still, that is not enough to disrupt the “free speech” argument, because even if perfect communication has not yet been exemplified on this earth, that does not mean that it could not be someday. To make his point, Fish has to go further, which he does in an essay called “There’s No Such Thing As Free Speech, And It’s A Good Thing Too.”

There, Fish says that he is not merely claiming that “saying something … is a realm whose integrity is sometimes compromised by certain restrictions”—that would be the above argument, where historical evidence is advanced—but rather “that restriction, in the form of an underlying articulation of the world that necessarily (if silently) negates alternatively possible articulations, is constitutive of expression.” The claim Fish wants to make in short—and it is important to see that it is the only argument that can confront the claims of the “marketplace of ideas” thesis—is that restrictions, such as Milton’s against Catholicism, aren’t the sad concessions we must make to an imperfect world, but are in fact what makes communication possible at all.

To those who take what’s known as a “free speech absolutism” position, such a notion might sound deeply subversive, if not heretical: the answer to pernicious opinions, in the view of the free speech absolutist, is not to outlaw them but to produce more opinions—as Oliver Wendell Holmes, Mill, and Milton all advise. The headline of an editorial in Toronto’s Globe and Mail puts the point elegantly: “The lesson of Charlie Hebdo? We need more free speech, not less.” But what Fish is saying can be viewed in light of the story the writer Nassim Nicholas Taleb tells about how, under the influence of John Stuart Mill and David Hume, he arrived at his notion of “black swans.”

Taleb says that “Hume had been irked by the fact that science in his day … had experienced a swing from scholasticism, entirely based on deductive reasoning” to “an overreaction into naive and unstructured empiricism.” The difficulty, as Hume recognized, “is that, without a proper method”—or, as Fish might say, a proper set of constraints—“empirical observations can lead you astray.” It’s possible, in other words, that amping up the production of truths will not—indeed, perhaps cannot—produce Truth.

In fact, Taleb argues (in a piece entitled “The Roots of Unfairness: the Black Swan in Arts and Literature”) that in reality, rather than in the fantasies of free speech absolutists, the production of very many “truths” may tend to reward a very few examples at the expense of the majority—and that thus “a large share of the success” of those examples may simply be due to “luck.” The specific market Taleb is examining in this essay is the artistic and literary world, but like many other spheres—such as “economics, sociology, linguistics, networks, the stock market”—that world is subject to “the Winner-Take-All effect.” (Taleb reports that Robert H. Frank defined that effect in his article, “Talent and the Winner-Take-All Society,” as “markets in which a handful of top performers walk away with the lion’s share of total rewards.”) The “free speech absolutist” position would define the few survivors of the “truth market” as being, ipso facto, “the Truth”—but Taleb is suggesting that such a position takes a more sanguine view of the market than may be warranted.

The results of Taleb’s investigations imply that such may be the case. “Consider,” he observes, “that, in publishing, less than 1 in 800 books represent half of the total unit sales”—a phenomenon similar to that found by Art De Vany at the cinema in his Hollywood Economics. And while those results might be dismissed as the product of crassly commercial markets, in fact the “academic citation system, itself supposedly free of commercialism, represents an even greater concentration” than that found in commercial publishing, and—perhaps more alarmingly still—there is “no meaningful difference between physics and comparative literature”: both display an equal amount of concentration. In all these fields, a very few objects are hugely successful, while the great mass sinks like a stone into the sea of anonymity.

These results are not confined to artistic or scientific production; they apply to subjects as diverse as the measurement of the coast of England and the error rates in telephone calls. George Zipf, for example, found that the rule applied to the “distribution of words in the vocabulary,” while Vilfredo Pareto found it applied to the distribution of income in any given society.

“Now,” asks Taleb, “think of waves of one meter tall in relation to waves of 2 meters tall”—there will inevitably be many more one-meter waves than two-meter waves, and by some magic the ratio between the two will be invariant, just as, according to what linguists call “Zipf’s Law,” “the most frequent word [in a given language] will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word,” and so on. As the Wikipedia entry for Zipf’s Law (from which the foregoing definition is taken) observes, the “same relationship occurs in many other rankings unrelated to language, such as the population ranks of cities in various countries, corporation sizes, income rankings, ranks of number of people watching the same TV channel, and so on.” All of these subjects are governed by what have come to be known as power laws—and according to some researchers, they even apply to subjects as seemingly immune to them as music.
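Zipf’s relationship can be checked with a short simulation: if the probability of a word is proportional to 1/rank, the top-ranked word should turn up roughly twice as often as the second-ranked word. The sketch below, in Python, is a toy illustration only—the ten-“word” vocabulary and sample size are invented assumptions, not data from any real corpus.

```python
# A toy check of Zipf's Law: draw "words" whose probabilities are
# proportional to 1 / rank, then compare the observed frequencies.
# The vocabulary size and sample size here are arbitrary assumptions.
import random

RANKS = 10
weights = [1 / r for r in range(1, RANKS + 1)]  # Zipfian weights: 1, 1/2, 1/3, ...

random.seed(42)
sample = random.choices(range(1, RANKS + 1), weights=weights, k=100_000)

# Count how often each rank was drawn, most frequent first.
counts = sorted((sample.count(r) for r in range(1, RANKS + 1)), reverse=True)

# Under the law, the most frequent "word" appears about twice as
# often as the second most frequent.
ratio = counts[0] / counts[1]
print(f"rank-1 / rank-2 frequency ratio: {ratio:.2f}")  # ≈ 2.0
```

The same comparison can be run down the whole ranking: rank 1 against rank 3 gives a ratio near 3, and so on, which is exactly the invariance Taleb’s wave analogy describes.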

Zipf himself, in order to explain the distribution he discovered among words, proposed that it could be explained by a kind of physical process, rather than discernment on the part of language-users: “people aim at minimizing effort in retrieving words; they are lazy and remember words that they have used in the past, so that the more a word is used, the more likely it is going to be used in the future, causing a snowball effect.” The explanation has an intuitive appeal: it appears difficult to argue that “the” (the most common English word) communicates twice as much information as “be” (the second-most common English word). Still less does such an argument explain why those word distributions should mirror the distributions of American cities, say, or the height of the waves on Hawaii’s North Shore, or the metabolic rates of various mammals. The widespread appearance of such distributions, in fact, suggests that rather than being determined by forces “intrinsic” to each case, the distributions are driven by a natural law that cares nothing for specifics.
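Zipf’s “snowball” explanation is what network theorists now call preferential attachment, and it can be sketched in a few lines: a speaker occasionally coins a new word, but mostly reuses old ones in proportion to how often they have already been used. The coinage rate and run length below are arbitrary assumptions; the point is only that this rich-get-richer rule alone, with no judgment of merit anywhere in it, produces heavy concentration.

```python
# A sketch of Zipf's "snowball" (preferential attachment) process.
# ALPHA and the number of steps are illustrative assumptions.
import random

random.seed(0)
ALPHA = 0.1   # chance of coining a brand-new word at each step
usage = []    # one entry per utterance; duplicates make reuse
              # proportional to past frequency
counts = {}   # word -> number of times used

for _ in range(50_000):
    if not usage or random.random() < ALPHA:
        word = len(counts)           # coin a new word
    else:
        word = random.choice(usage)  # reuse, weighted by past use
    usage.append(word)
    counts[word] = counts.get(word, 0) + 1

top = sorted(counts.values(), reverse=True)
# How much of all usage is captured by the top 1% of words?
share = sum(top[: len(top) // 100 or 1]) / len(usage)
print(f"top 1% of words account for {share:.0%} of all uses")
```

Run it and the top handful of words—those that happened to be coined early—dominate the totals, while most words are used only once or twice: the distribution is driven by the process, not by anything “intrinsic” to the words themselves.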

So far, it seems, “we have no clue about the underlying process,” as Taleb says. “Nothing can explain why the success of a novelist … bears similarity to the bubbles and informational cascades seen in the financial markets,” much less why both should “resemble the behavior of electricity power grids.” What we can know is that, while according to the “free speech absolutist” position “one would think that a larger size of the population of producers would cause a democratization,” in fact “it does not.” “If anything,” Taleb notes, “it causes even more clustering.” The “free speech absolutist” position predicts that the production of more speech results in a closer approximation of the Truth; the empirical results, however, suggest that more production merely makes a smaller number of products more successful, for reasons that may have nothing to do with their intrinsic merits.

These results suggest that perhaps Stanley Fish has it right about “free speech,” and thus that the Lords of Augusta—like their spiritual brethren who shot up the offices of Charlie Hebdo in early January this year—are right to keep a tight rein on the announcers who work their golf tournament: Truth could be the result of, not the enemy of, regulation. The irony, of course, is that this also suggests the necessity of regulation in areas beyond commentary about golf and golfers—a result that, one suspects, the Lords of the Masters would not favor, and one that puts them in uncomfortable company. Allahu akbar, presumably, sounds peculiar with a Southern accent.

Green Jackets ’n’ Blackfaces

But if you close your eyes,
Does it almost feel like
Nothing changed at all?
—“Pompeii”
    Bastille (2013)

Some bore will undoubtedly claim, this April week, that the Masters is unique among golf’s major tournaments because it is the only one held at the same course every year—a claim not only about as fresh as a pimento cheese sandwich but also refuted by the architectural website Golf Club Atlas. “Augusta National,” the course’s entry on that website observes, “has gone through more changes since its inception than any of the world’s twenty or so greatest courses.” But the club’s jive by no means stops there; just as the club—and the journalists who cover the tournament—likes to pretend its course is timeless, so too does the club—what with the sepia photos of Bobby Jones, the talk of mint juleps, the bright azaleas, the “limited commercial interruptions” and the old-timey piano music of the tournament broadcast—like to pretend it is an island of “the South” in a Yankee sea. The performance is worthy of one of the club’s former members: Freeman Gosden, who became a member of Augusta National as a result of the riches and fame thrown off by the radio show he created in 1928 Chicago—Amos ’n’ Andy.

Gosden played Amos; his partner, Charles Correll, played Andy. The two actors had met in Durham, North Carolina in 1920, and began performing together in Chicago soon afterwards. According to Wikipedia, both were “familiar with minstrel traditions”: the uniquely American art form in which white performers would sing and tell jokes and stories while pretending to be black, usually while wearing blackface—that is, covering their faces with black makeup. The show they created, about two black cab drivers, translated those minstrel traditions to radio and became the most successful minstrel show in American history. Amos ’n’ Andy lasted 32 years on the radio—the last performance came in 1960—and while it only lasted a few years on television in the early 1950s, the last rerun played on American air as late as 1966.

The successful show made Gosden and Correll so rich, in fact, that by the early 1950s Gosden had joined the Augusta National Golf Club, and sometime thereafter the actor had become so accepted that he joined the group known as “the Gang.” This was a circle of seven golfers that formed around General Dwight Eisenhower—who had led the amphibious Allied invasion of France on the beaches of Normandy in 1944—after the former war hero was invited to join the club in 1948. Gosden had, in other words, arrived: there was, it seems, something inherently entertaining about a white man pretending to be something he wasn’t.

Gosden was, however, arguably not the only minstrel performer associated with Augusta National: the golf architecture website Golf Club Atlas claims that the course itself performs a kind of minstrelsy. Originally, Augusta’s golf course was designed by famed golf architect Alister MacKenzie—who also designed such courses as Cypress Point in California and Crystal Downs in Michigan—in consultation with Bobby Jones, the great player who won 13 major championships. As a headline from The Augusta Chronicle, the town’s local newspaper, once proclaimed, “MacKenzie Made Jones’ Dream Of Strategic Course Into Reality.” But in the years since, the course has been far from timeless: as Golf Club Atlas points out, it has in fact gone through “a slew of changes from at least 15 different ‘architects.’” As it now stands, the course is merely pretending to be a MacKenzie.

Nearly every year since the Masters began in 1934, the course has undergone some tweak or another: whereas once “Augusta National could have been considered amongst the two or three most innovative designs ever,” it has now been so altered—according to the Golf Club Atlas article—that to “call it a MacKenzie course is false advertising as his features are essentially long gone.” To say that the course Tiger Woods won on is the same as the one that Jack Nicklaus or Ben Hogan won on is thus to make a mockery of history.

The primary reason the Atlas can make that claim stick is that the golf club has flouted Jones’ and MacKenzie’s original intent, which was to build a course like one they both revered: the Old Course at St. Andrews. Jones loved the Old Course so much that, famously, he was later made an honorary citizen of the town, while for his part MacKenzie wrote a book—not published until 1995, some six decades after his death—called The Spirit of St. Andrews. And as anyone familiar with golf architecture knows, the Old Course is distinguished by the “ground game”: the golfer does better to keep his ball rolling along the ground, following its contours, rather than flying it through the air.

As Golf Club Atlas observes, “Jones and MacKenzie both shared a passion for the Old Course at St. Andrews, and its influence is readily apparent in the initial design” because “the ground game was meant to be the key at Augusta National.” That intent, however, has been lost; in a mordant twist of history, the reason for that loss is arguably due to the success of the Masters tournament itself.

“Ironically, hosting the Masters has ruined one of MacKenzie’s most significant designs,” says the Atlas, because “much of the money that the club receives from the Invitational is plowed back into making changes to the course in a misguided effort to protect par.” Largely, “protecting par” has been interpreted by the leadership of the golf club to mean “to minimize the opportunity for the ground game.” As Rex Hoggard—repeating a line heard about the course for decades—wrote in an article for the Golf Channel’s website in 2011, it’s “important to hit the ball high at Augusta National”—a notion that would be nonsensical if Jones and MacKenzie’s purpose had been kept in view.

In short, while the Atlas acknowledges that “an invitation to play Augusta National remains golf’s most sought-after experience,” it also believes—perhaps shockingly—that “fans of Alister MacKenzie would be better served to look elsewhere for a game.” Though the golf club, and the television coverage, might work to present the course as a static beauty, in fact that effect is achieved through endless surgeries that have effectively made the course other than it was. The Augusta National golf course, thus, is a kind of minstrel act.

Similarly, the presentation of the golf club as a specifically Southern institution—perhaps above all, by ensuring that the chairman of the club, the only member who regularly speaks to the media, possesses a Georgia drawl (as recent chairmen Hootie Johnson and Billy Payne have)—is belied by the club’s history. Consider, in that light, a story from the beginnings of the club itself, a story ably told in Curt Sampson’s The Masters: Golf, Money, and Power in Augusta, Georgia.

In January of 1933—the depths of the Great Depression—a New York investment banker named Clifford Roberts approached the Southern Railroad System with a proposal: “comfortable conveyance for one hundred New Yorkers to and from Augusta, Georgia”—at a discount. “Business was so bad,” Roberts himself would later write in his history of the golf club, “that the railroad promised not only a special low rate, but all new Pullman equipment with two club cars for card players and two dining cars.” In this way, Sampson writes, “the grand opening of the Augusta National Golf Club began in a railroad station in New York City.”

Most golf fans, if they are aware of the club that holds the tournament at all, only know that it was founded by Bobby Jones when he retired from competitive golf following the annus mirabilis of 1930, when Jones won the Grand Slam of all four major tournaments in the same year. But, as Sampson’s story demonstrates, it was Clifford Roberts who made Jones’ vision a reality by raising the money to build it—and that money came largely from New York, not the South.

Sixty of the 100 men Roberts recruited to join the club before it opened were from New York City: the Augusta National Golf Club would be, as Sampson puts it, “a private enclave for rich Yankees in the heart of the South, just sixty-eight years after the Civil War.” Sampson calls the idea “bizarre”—but it is only bizarre if one has a particularly narrow idea of “the South.” Augusta National’s status as a club designed to allow Yankees to masquerade as Southerners only seems ridiculous if it’s assumed that the very idea of “the South” itself is not a kind of minstrelsy—as, in fact, it arguably is.

Links between New York finance and the South, that is, long predated the first golf shot at the new course. It’s often forgotten, for instance, that—as historians Charles and John Lockwood pointed out in the New York Times in 2011—after South Carolina declared it would secede in December of 1860, “the next call for secession would not come from a Southern state, but from a Northern city—New York.”

On 7 January of the bleak “Secession Winter” of ’61, the two historians note, New York’s mayor, Fernando Wood, spoke to the city council to urge that it follow the Southern state and secede. The mayor was merely articulating the “pro-Southern and pro-independence sentiment” of the city’s financiers and traders—a class buoyed up by the fact that “the city’s merchants took 40 cents of every dollar that Europeans paid for Southern cotton.” The Southern staple (and the slaves whose labor grew that crop) had, in other words, “helped build the new marble-fronted mercantile buildings in lower Manhattan, fill Broadway hotels and stores with customers, and build block after block of fashionable brownstones north of 14th Street.” Secession of the South put all those millions of dollars at risk: to protect its investments, Mayor Wood proposed, New York might have to follow the South out of the Union.

Such a move would have had disastrous consequences. The city was the site of the vast Brooklyn Navy Yard, which in the months after the fall of Fort Sumter in Charleston Harbor would assemble the fleet that not only would blockade the Southern coast, but would, in November of ’61, land an army at Hilton Head, South Carolina, the heart of secessionism—a fleet only exceeded by the armada General Eisenhower would gather against Normandy in the late winter and spring of 1944. But even more importantly, in that time the taxes collected by the New York Customs House virtually paid the entire federal government’s budget each year.

“In 1860,” as the Lockwoods write, “tariffs on imported goods collected at ports … provided $56 million of the $64.6 million of federal revenue, and more than two-thirds of imports by value passed through New York.” If New York seceded, in other words, the administration of president-elect Abraham Lincoln would be bankrupt before it took office: the city, as it were, held the nation’s government by a golden leash.

But New York City did not follow the South out of the Union: when the cannons fired at Fort Sumter that April, New York joined the rest of the nation in confirming the sentiments of Daniel Webster’s Second Reply to Hayne: “Liberty and Union, Now and Forever, One and Inseparable!” Over a hundred thousand would turn out to the “Great Sumter Rally” at (the appropriately-named) Union Square in the city on 20 April, after the fall of the federal fort in Charleston Harbor. It was, perhaps, the largest expression of New York’s patriotism before the fall of the towers overlooking the city at the dawn of the twenty-first century.

Mayor Wood himself spoke at that rally to affirm his support for “the Union, the government, the laws and the flag”—reversing his course from mere months before, a turn that perhaps has served to obscure how close the city’s ties were to a region, and an economic system, that had turned away from all of those institutions. But denying those ties, however politically expedient, did not conjure them away. Indeed, the very existence of the Augusta National Golf Club is testament to just how enduring the ties between New York and the Deep South may be.

Still, of course, none of these acts of minstrelsy—the golf course’s masquerade as the work of a designer whose work barely survives, the golf club’s disguise as a Southern institution when in fact it has been largely the work of Yankee financiers, or even the South’s own pretense—could be said to matter, really, now. Except for one detail: those links, some might say, extend into the present. Perhaps the biggest story in American political history over the past century is how the party that won the Civil War, the party of Lincoln, has become the defender, instead of the antagonist, of that vision of the South portrayed every year by the Masters tournament. It’s an act of minstrelsy that lies at the heart of American political life today.

In 1962, wrote Ian Haney-Lopez (John H. Boalt Professor of Law at the University of California, Berkeley) for Salon in 2013, “when asked which party ‘is more likely to see that Negroes get fair treatment in jobs and housing,’ 22.7 percent of the public said Democrats and 21.3 percent said Republicans, while over half could perceive no difference between the two.” The masks of the two parties were, on this issue, interchangeable.

Yet, by the summer of 1963, conservative journalist Robert Novak could report from the Republican National Committee’s meeting in Denver that a “good many, perhaps a majority of the party’s leadership, envision political gold to be mined in the racial crisis by becoming in fact, though not in name, the White Man’s Party.” It was a harvest that would first be reaped the following year: running against Lyndon Johnson, who had—against long odds—passed the 1964 Civil Rights Act, the Republican nominee, Barry Goldwater, would outright win five states of the Deep South: Louisiana, Alabama, Georgia, Mississippi, and South Carolina. It was the first time a Republican nominee for president had won in those states, at least since the end of Reconstruction and the beginning of Jim Crow.

Still, those states—and electoral votes—were not enough to carry Goldwater to the White House. But they formed the prelude to the election that did make those votes count: 1968, won by Richard Nixon. According to one of Nixon’s political strategists that year, Kevin Phillips, that election demonstrated the truth of the thesis Phillips would lay out in his 1969 book, The Emerging Republican Majority: “The Negro problem, having become a national rather than a local one, is the principal cause of the breakup of the New Deal coalition”—the coalition that had delivered landslides for Franklin Roosevelt and, in 1964, for Johnson. Phillips predicted that a counter-coalition would emerge that would be “white and middle class,” would be “concentrated in the South, the West, and suburbia,” and would be driven by reaction to “the immense midcentury impact of Negro enfranchisement and integration.” That realignment would come to be called Nixon’s “Southern Strategy.”

The “Southern Strategy,” as Nixon’s opponent in 1972, George McGovern, would later remark, “says to the South:”

Let the poor stay poor, let your economy trail the nation, forget about decent homes and medical care for all your people, choose officials who will oppose every effort to benefit the many at the expense of the few—and in return, we will try to overlook the rights of the black man, appoint a few southerners to high office, and lift your spirits by attacking the “eastern establishment” whose bank accounts we are filling with your labor and your industry.

Haney-Lopez argues, in the book from which this excerpt is taken—entitled Dog Whistle Politics: How Coded Racial Appeals Have Reinvented Racism and Wrecked the Middle Class, published by Oxford University Press—that it is the wreckage from Nixon’s course that surrounds us today: economic attacks on the majority enabled by nearly transparent racial coding. He may or may not be right—but what might be of interest to future historians is the role, large or small, that the Augusta National Golf Club may have played in that drama.

Certainly, after all, the golf club played an outsize role in the Eisenhower administration: according to the Augusta Chronicle, Eisenhower made 45 trips to the golf club during his life: “five before he became president, 29 while president and 11 after his last term.” And just as certainly the club provided more than recreation for the general and president.

One Augusta member (Pete Jones) would, according to Sampson and other sources, “offer Ike $1 million for his 1952 campaign for president.” (“When Pete Jones died in a plane crash in 1962,” Sampson reports, “he had $60,000 in his wallet.”) Even before that, Clifford Roberts had arranged for one Augusta member, a publisher, to buy the general’s memoirs; the money made Eisenhower financially secure for the first time in his life.

It was members of the golf club, in short, who provided the former Supreme Commander of the West with both the advice and the financial muscle to reach for the Republican nomination for president in 1952. His friends while in Augusta, as Sampson notes, included such figures as Robert Woodruff of Coca-Cola, “Bud (washing machines) Maytag, Albert (General Motors) Bradley, Alfred S. (Singer Sewing Machines) Bourne” and other captains of industry. Another member of the golf club was Ralph Reed, president of American Express, who would later find a job for the general’s wartime driver, Kay Summersby.

All of which is, to be sure, a long way from connecting the club directly to Nixon and the “Southern Strategy.” There’s a great deal of testimony, in fact, that would appear to demonstrate the contrary. According to Golf Digest, for example, Nixon “once told Clifford Roberts”—the storied golf club’s sometimes-malevolent dictator—“that he wouldn’t mind being a member of Augusta National, and Roberts, who didn’t like him any better than Eisenhower did, said, ‘I didn’t know you were that interested in golf.’” “And that,” goes the story, “was the end of that.” Sampson’s work tends to confirm the point: a few of Ike’s cronies at the club, Sampson reports, “even urged Ike to dump Dick in 1956,” the year the general ran for re-election.

Still, the provable is not the same as the unimaginable. Take, for instance, the testimony of Charlie Sifford, the man Lee Trevino called the “Jackie Robinson” of golf—he broke the game’s color barrier in 1961, after the attorney general of California threatened to sue the PGA of America for its “whites only” clause. Sifford fought for years to be invited to play in the Masters tournament, only to be denied despite winning two tournaments on the PGA Tour. (The 1967 Greater Hartford Open and the 1969 Los Angeles Open.) In his autobiography, Just Let Me Play, Sifford quoted Clifford Roberts as saying, “As long as I live, there will be nothing at the Masters besides black caddies and white players.”

Sampson, for one, discounts this as implausible—for what it’s worth, he thinks it unlikely that Roberts would actually have said such a thing, not that Roberts was incapable of thinking it. Nevertheless, golfers in the Masters tournament were required to take “local” (i.e., black) caddies until 1983, six years after Roberts shot himself in the head beside Ike’s Pond on the grounds of the club, in late September, 1977. (The chairman, it’s said, took a drop.) Of course, the fact of the golf club’s caddie policy means nothing in itself, nor would Clifford Roberts’ private thoughts regarding race. But the links between the club, the South, and the world of money and power remain—and whatever the future course of the club, or the nation, the ties forged in the past, no matter the acts of minstrelsy designed to obscure them, endure.

Now, and forever.

Only You

This weekend Rory McIlroy not only held off a burning-bright Tiger Woods (who laid down a little 62) and won the Honda Classic, but succeeded Luke Donald as the best golfer in the world. Suddenly, whereas three years ago (as I wrote about in a previous post) Tiger had no rivals—a subject of much complaint by the golf press—now there is not only Tiger v. Phil but also Tiger v. Rory. But why should the new World #1 be from some small town in Northern Ireland, a country with fewer people than we have here in Chicago? The answer to that—which I suspect has much to do with that “Superstar Effect” I discussed in an earlier post—may in turn answer another, as put by the website ethnicmajority.com back in April of 2009: “Why are there no black pro golfers (other than you know who)?” Tiger’s success seemed to augur a new era of African-American golf—it may be, however, that we have it backwards: his success may be what explains why that era hasn’t arrived, rather than something that itself needs explaining.

Why there hasn’t been a successor to Tiger Woods from the African-American community has been a question for sportswriters with intellectual predilections for some time. ESPN devoted an episode of their show Outside the Lines to the question all the way back in June of 2001—“One … And Only”—and despite the occasional heralding of a successor, no black golfer has become a regular on the PGA Tour since Tiger won the Masters in 1997, now nearly fifteen years ago. The explanations mainly fall into two camps: racism or economics.

“You need $70,000 a year to do that,” Tim Hall, a black player on the Nationwide Tour, told NBC.com in 2009 about playing on mini-tours—the proving grounds where would-be tour pros either find their games, or don’t. For people like Hall—and like Julius Erving (Dr. J), who spoke to ESPN for the Outside the Lines program—the main explanation for the conspicuous lack of black players at elite levels (even black colleges can’t fill out their teams with black players) is economic: as a writer for the website Color Lines put it in April of 2007, the “overwhelming majority of Black Americans cannot afford to practice golf and thereby do not gain a competitive edge in golf.”

The other side is represented by those who would explain black golfers’ lack of success in the familiar terms of racism. Undoubtedly, golf has a history: Augusta National’s annual tournament is, after all, called the Masters—an unfortunate name for a Southern organization to use—and until 1961, as many know, the PGA Tour had a “Caucasians only” clause. This isn’t even to begin to rehearse, say, the 1990 Shoal Creek incident, when the president of that golf club, due to hold the PGA Championship that year, said about the lack of African-American members that “this is our home, and we pick and choose who we want.” The trouble is, however, that from 1961, when the PGA Tour ended the “Caucasian” clause (under the threat of a lawsuit by the California attorney general), until 1985, 26 black golfers earned tour cards for the Big Show. Since then, only Woods. To be convincing, the “racism” theory must explain why racism has, in golf, somehow gotten worse since the early 1960s.

As it happens, a similar question has been asked in a field with which I’m somewhat familiar: the study of literature. Why is it, for instance, that the giants of “English” literature have, since the 18th century, largely not been Englishmen? “From Conrad, Wilde and James,” writes the scholar Terry Eagleton, “to Shaw, Pound and Eliot, the high literary ground is seized by those whose very marginality allows them to bring fresh perspectives to the society they have adopted.” “English” literature, in other words, has mostly been the province (a deliberate pun) of men and women whose origins lay far from London. Earlier, mostly Irish; latterly, from yet further on the periphery.

Something similar, perhaps, is at work in golf: though the sample size is a great deal smaller, it’s still true that the first player on the list of World #1s, as ranked since the 1980s, is Bernhard Langer, a German—not a nation known for its golfers (though this has slowly been changing, as witness Martin Kaymer; a point that may lend credence to my drift here). From there the title alternated for several years between Seve Ballesteros and Greg Norman—from Spain and Australia respectively—and from there to even more improbable stories, like that of Vijay Singh, who’s from Fiji. Every golfer on that list is the product of one implausible story after another, whether a shoeless Seve hitting rocks on a Spanish beach or Vijay somehow climbing from the South Pacific to major champion.

The point is, it’s virtually inevitable that the World #1 will be the product of such a narrative. A really crazy story—the man-bites-dog story of world rankings—would be if somebody like newly-turned pro Peter Uihlein, son of Titleist chief executive Wally Uihlein and thus the recipient of every possible break, became World #1. Davis Love III, for instance, whose father was himself a well-known and respected professional—and who thus would seem to have had an advantage—never became the best player in the world. No: the best player in the world is, seemingly always, an oddball of one sort or another.

The natural question then is, why so? In his Atlas of the European Novel, the literary scholar Franco Moretti examines the construction of small libraries: “small [library] collections are hyper-canonical,” which is to say that “they have all the great books, and don’t care about the inferior ones.” But great books are ones that are obviously different from the rest: not only are they as good as run-of-the-mill books (which themselves are better than that half-finished draft in your aunt’s desk), but they also have something extra that makes them stand out. Otherwise, they wouldn’t be preserved at all. But that also makes them terrible models for would-be writers.

“What is wrong,” Moretti says about this practice of stocking only the best of the best, “is the implicit belief that literature proceeds from one canonical form to the next, in a sort of unbroken thread.” Literature, Moretti says, actually works quite differently: “cheap jokes on bureaucrats, and Gogol’s Overcoat; rough city sketches, and Dickens’ London novels; silly colonial adventures, and Heart of Darkness.” In other words, literature is generated by having the space to work: Dickens doesn’t write David Copperfield right out of the box. Dickens has predecessors, precursors, a field to inhabit.

In this way, Moretti proposes a theory of literary history borrowed from Viktor Shklovsky, the “canonization of the cadet branch.” As Shklovsky put it in Theory of Prose: “The legacy that is passed on from one literary generation to the next moves not from father to son, but from uncle to nephew.” In order to have great literature, you need to have a lot of other kinds of literature: what George Orwell called, borrowing from Chesterton, “good bad books.” But—and this is where the “Superstar Effect” comes in—“good bad books” are the sort likely to be produced by those already located in the center: in order to get truly great books you need somebody with an outsider’s perspective. Why?

Here’s where Jennifer Brown’s research that led to the discovery of the “Superstar Effect” in golf—when Tiger was in his prime, he gained nearly a shot on the field in every tournament he entered, just by entering it—comes in. The implication of that research was that those on the “inside” (guys already on the tour) were intimidated by Tiger: he was, it seems, so foreign to their ideas of what was possible on a golf course that it threw off their games. Moretti similarly argues that those on the “inside”—close to the centers of literary production—simply can’t produce “great” literature: they are too close not to be judged, and found wanting.

In order to get to be an insider at all, that is, you have to devote a great deal of time to imitating your forebears—which is why it’s generally better to start out imitating solid, second-rate books rather than masterpieces—whether it be on the golf course or the page. But that pursuit necessarily supposes closing off other, potentially more interesting, options—the kind that only an outsider, who can’t get there any other way, must exploit. Of course, what that means is that, by definition, most “outsiders” will be destined to remain that way—ignored. But those that do “break through” will, necessarily, have some special quality about them. There are no “better-than-average” outsiders; conversely, all insiders must be at least better-than-average.

Somebody from Holywood, in County Down, Northern Ireland, therefore, isn’t going to be just a journeyman golfer on the European Tour: that slot has already been filled by someone with the economic resources and connections. African-Americans like to tell their kids they have to be twice as good as anybody else to get noticed: here’s an empirical reason why. On the other hand, Rory’s success will now have consequences for any other golfers growing up in Holywood: the standard they’re judged by isn’t going to be the guy ranked #70 on the European Tour’s Order of Merit (money list), which is still a very respectable level of play; it’s going to be RORY MCILROY, #1 Player in the World.

In other words, if it was difficult before to imagine a great pro golfer coming out of Holywood, it must be even more difficult now, what with the expectations put in place by McIlroy. Every action of such a hypothetical player will be scrutinized in the light of the predecessor, stacking the odds yet further. Though it isn’t true that lightning never strikes the same place twice, perhaps the phrase holds water in human endeavors: it isn’t likely that there’s going to be a world-famous folk troubadour out of Hibbing, Minnesota (home, as any Iron Ranger will tell you, of Robert Zimmerman, aka Bob Dylan) any time soon.

Similarly, any young African-American golfer is going to be judged against the standard set by Woods, not the more-reasonable—though still wildly-overoptimistic—standards of merely making a good living by playing golf. African-Americans don’t have that problem in other fields: a young black basketball player knows that, even if he doesn’t make it to the NBA, he can still play overseas, or at least perhaps get a college education out of it. There’s enough of a pool, a “critical mass,” that that hypothetical player knows he doesn’t have to be an All-Star. It’s ok to be above-average; it’s ok not to be Michael Jordan.

It only, therefore, seems paradoxical that Tiger Woods is, and has been for many years, the only African-American on the PGA Tour. His very success, far from making the absence of other black golfers a mystery, may actually make it less likely that an African-American will become a touring professional. That is, obviously, a disturbing possibility. Yet, if that’s true, avoiding it doesn’t actually help produce more black golfers. Confronting it would lead to a different plan of attack: the important work would stop being attacking racism in golf at some retail level, one club at a time—or even the general mission of creating black golfers at all, as the various charities founded in the wake of Woods’ success do. Instead, energy would be focused on creating more golfers, period—expanding access to everyone, without exception.

That is what Americans used to do, anyway. On ESPN’s “The 1 … And Only,” Lee Elder, the first African-American ever to play in the Masters tournament (in 1975, the year Tiger was born), pointed out that black golfers “all pretty much came out of the caddy ranks in the early days.” That’s not surprising, since that’s also how a lot of other players came to golf back then: Ben Hogan, Byron Nelson, Chick Evans, Francis Ouimet, and Lee Trevino all owed their careers to caddying—not to mention foreign players like Ballesteros. But looping is not a charitable operation: it’s paid labor, not a handout—or an “internship” or the like. Notice what that does: it creates the space, a field, for someone to work in; much like, perhaps, the existence of all those cheap colonial adventure stories, like King Solomon’s Mines, might have created the space—what Virginia Woolf called a “room of one’s own”—for Conrad to write Heart of Darkness.

It’s not as if, for instance, someone found Leonardo da Vinci (whose name means “from Vinci,” a town as obscure as Holywood) as a child, knew who he’d become (which would, one supposes, make such a person an even greater genius than Leonardo), and paved his way. Instead, Leonardo got lucky enough to find himself in the workshop of Andrea Verrocchio, a workshop whose alumni included Lorenzo di Credi, Domenico Ghirlandaio, Francesco Botticini, and Pietro Perugino—great artists all, even if we mostly only remember them through the reflection of Leonardo’s glory. But Verrocchio’s workshop gave them, and Leonardo, work to do—and money to get for it. Greatness comes from having lots of pretty good stuff around: if you want to produce a Tiger Woods or a James Joyce or a Leonardo, in other words, you have to produce lots of Mark O’Mearas, P.G. Wodehouses, and di Credis. And that’s not cheap: you have to pay all of them.

That’s something that it seems as though America has forgotten lately, as wages have stagnated since the 1970s while, at the same time, the financial rewards for “superstars” have exploded. In academia, for instance, that’s led to highly-paid, “superstar” professors and legions of graduate students without hope of employment; in the business world, a galaxy of CEOs who make hundreds of times what their workers make; and in music, a few dozen musicians who can sell out stadiums while your local tavern thinks it’s a big deal to have a band once a month. Maybe that’s the bargain that we’ve made lately. But if so, we shouldn’t kid ourselves about, say, why there aren’t more black pro golfers.

Or, you know, a middle class.

 

Tell You Wrong

I embrace my rival, but only to strangle him.
Jean Racine. Britannicus, Act IV, iii.

“Joe Frazier, I’ll tell the world right now, brings out the best in me,” Muhammad Ali said after the “Thrilla in Manila” in 1975, the third and final fight between the two: the one that “went the distance” of 15 rounds in the searing tropical heat of a Third World dictatorship, the one that nearly killed both men and did land them both in the hospital. Phil Mickelson wasn’t as lyrical after giving Tiger Woods an eleven-shot beating at Pebble Beach a few weeks ago: “Although I feel like he brings out the best in me,” Mickelson observed, “it’s only been the past five years.” (Since 2007 Mickelson’s been 8-3-1 when playing against Tiger, bringing the overall record to 13-13-4 in the thirty times they’ve been paired together.) For years, golf writers have lamented the fact that there have been no Tom Watsons or Lee Trevinos around to challenge Tiger as those players did Jack Nicklaus; as it turns out, it seems that rival—Phil—has been there for five years. But are rivals only recognizable in retrospect, and if so what does that mean for the “rivalry” theory?

I take it for granted that anyone reading this will be familiar with the complaint that Tiger has not faced any worthy rivals; as an example, I will cite a story from Yahoo Sports from nearly four years ago. It’s simply entitled “Tiger Misses What Arnie, Jack Had: Rivals.” “Tiger has no true rival,” wrote Dan Wetzel then, “no one familiar face just as cold-blooded, talented and intelligent to push him to perhaps even greater heights.” The complaints implicitly voiced here are longstanding, going back at least to the excitement surrounding the PGA at Medinah in 1999, when Sergio Garcia appeared to many about to challenge Tiger. Such complaints appear much like the usual sportswriter’s fantasies, like the “clutch” player—so far as I know, no player has ever been shown to perform better than his career numbers might indicate in particular situations, in any sport—or the notion that running and defense win football games. A contrarian might reply, for instance, that Tiger’s run was fueled by a number of breaks: the fact that David Duval essentially fell off the planet after 2001 might be the first item on that list.

Phil’s record against Tiger might suggest that, simply because few of their matchups have come on the final day of a major that one of them ended up winning (which disqualifies, for instance, the electric final day of the 2009 Masters, when Phil shot a 30 on the front nine but didn’t win), they have in fact been “rivals” the whole time—which in turn might suggest that a further combing of the data could turn up other “rivals” whose presence had gone unnoticed because they had not appeared at widely-televised moments. It’s kind of a silly argument, but as it turns out someone has taken it seriously and quantified the difference between Tiger and his fellow competitors—and it’s really true: Tiger, in his heyday, didn’t have anyone who remotely approached him.

In 2008, as it happens, a paper published in the Journal of Political Economy by one Jennifer Brown, entitled “Quitters Never Win: The (Adverse) Incentive Effects of Competing with Superstars,” found that in general players not named Woods took an additional 0.8 shots in every tournament Tiger entered. The effect was even more pronounced in the first round of tournaments, where Woods was effectively conceded another third of a shot by the field, and yet more so among “elite” players: those close to the top of the leaderboard gave away nearly two shots to Tiger. Although these margins seem thin, the difference between first and second on the PGA Tour is usually one shot; what that’s meant, according to Brown, is that the rest of the tour players have conceded something on the order of $6 million to Tiger over the course of his career.

Still, while that does I think prove the “no rivals” theory it doesn’t actually provide any causation: one possible explanation, for instance, might be found in the way that Tiger himself plays. According to his former coach, Butch Harmon, Tiger has methods to confound his playing partners: in an interview with Steve Elling of CBSSports.com, Harmon said that Tiger for instance will “often putt out first” (which means that galleries will often be moving to the next hole while whoever he’s playing with is putting); that Woods will try to get to the tee box last, so the crowd will give him its biggest cheers; change his pace of play to play “fast” with slow players and vice versa; and hit three-wood instead of driver on some holes, so as to hit his approach first—thereby making his opponent wait to hit his shot. None of these methods are against the rules, of course—but they don’t win friends in the locker room either.

Yet Brown’s paper found no evidence that players playing with Tiger are more affected than those not playing with him. Joel Waldfogel reported in Slate that Brown’s work found that “being in Tiger’s foursome [sic] has no additional negative impact on performance.” In other words, even if Tiger was practicing gamesmanship—and it was successful—it didn’t show up in the statistics. Playing with Tiger or not playing with Tiger, all that seems to matter is that the other players know he’s there.

One way to test for that is to see if the other players have been “attempting longer, riskier shots to try to keep up with Tiger.” A website called Physorg.com notes that Brown’s study checks for this: if players were trying such a strategy, there would likely be what financial professionals call “volatility”: there’d be more eagles—and double bogeys—when Tiger played than in other tournaments. In reality, though, there “were significantly fewer eagles and double bogeys when Woods played.” Tiger’s presence wasn’t causing the other players to adopt a “high-risk, high-reward” strategy. Instead, it seems he simply caused them not to shift into some higher gear that might otherwise have been available to them.
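For the statistically minded, the “volatility” check can be sketched in miniature. The simulation below is purely illustrative: the per-hole probabilities are invented, not drawn from Brown’s data. It only shows the tell-tale signature a risk-taking field would leave, namely fatter tails at both ends of the scoring distribution.

```python
import random

random.seed(42)

def hole_scores(aggressive, n=10_000):
    """Simulate n per-hole scores relative to par.

    The 'aggressive' strategy trades a slightly worse middle for
    fatter tails: more eagles, more double bogeys. All probabilities
    here are invented for illustration, not Brown's actual data.
    """
    outcomes = [-2, -1, 0, 1, 2]                 # eagle .. double bogey
    if aggressive:
        weights = [0.04, 0.22, 0.44, 0.22, 0.08]
    else:
        weights = [0.01, 0.25, 0.52, 0.20, 0.02]
    return [random.choices(outcomes, weights)[0] for _ in range(n)]

def tail_counts(scores):
    """Count the extreme outcomes: (eagles, double bogeys)."""
    return scores.count(-2), scores.count(2)

eagles_a, doubles_a = tail_counts(hole_scores(aggressive=True))
eagles_c, doubles_c = tail_counts(hole_scores(aggressive=False))

# A risk-taking field shows MORE of both extremes -- the signature
# Brown looked for and did not find in tournaments Tiger entered.
print(eagles_a > eagles_c and doubles_a > doubles_c)
```

Brown’s finding of *fewer* eagles and double bogeys when Woods played is exactly the opposite of this signature, which is what rules the risk-taking explanation out.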

What’s interesting is what this suggests: that Tiger’s dominance was, in fact, the effect of something within his opponents’ craniums, not just a statistical anomaly produced partly by Tiger’s skillfulness and partly by chance. But it also suggests that the nature of that dominance didn’t lie in what sportswriters ascribed to Tiger’s “aura” or his vaunted “Zen-like” mental discipline: the mechanism that Brown theorizes to explain the effect is quite different.

Brown finds the mechanism by analogy to other fields: she “cites the competition among newly hired associates at a law firm as another example of a nonlinear incentive structure,” as another review of her work says. Such a structure might be better known from the practice of the firm in Glengarry Glen Ross—where, as Alec Baldwin’s character Blake said, first place is an Eldorado, second is a set of steak knives, and, anticipating Donald Trump, “third prize is you’re fired.” In a law firm, usually only one associate might be hired from a given group: in law firms as in Ricky Bobby’s NASCAR, “if you’re not first you’re last.”

The mechanism Brown proposes, as described by Jonah Lehrer in an essay on the paper for the Wall Street Journal, is therefore that “the superstar effect is especially pronounced when the rewards for the competition are ‘non-linear,’ or there is an extra incentive to finish first.” In such a contest, the rewards for finishing first are so exponentially better that finishes below first are, by comparison, not as meaningful. “We assume,” as Lehrer puts the point, “that the superstar will win, so why chase after meaningless scraps?” In other words, Brown’s theory is that professional golfers, seeing Woods’ name on the pairing sheets, consciously or not effectively “mail in” their effort. They aren’t expending everything they have because they don’t expect to be rewarded for extra effort.
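The logic of the nonlinear payoff can be put in plain arithmetic. The toy calculation below uses invented purse figures and win probabilities, but it illustrates why a marginal improvement in play is worth less to chase when one player has made winning nearly unreachable and second place pays comparatively little.

```python
# A toy expected-value sketch of Brown's "nonlinear incentive" argument.
# All purse figures and probabilities below are invented for illustration.

def expected_payout(win_prob, first_prize, runner_up_prize):
    """Two-outcome simplification: a player either wins or finishes second."""
    return win_prob * first_prize + (1 - win_prob) * runner_up_prize

# Relatively flat rewards: extra effort that lifts win probability
# from 5% to 10% buys a meaningful bump in expected earnings.
gain_flat = (expected_payout(0.10, 1_000_000, 600_000)
             - expected_payout(0.05, 1_000_000, 600_000))

# Winner-take-most rewards with a superstar in the field: the same
# effort lifts win probability only from 1% to 2%, and second place
# pays comparatively little, so the expected gain shrinks.
gain_superstar = (expected_payout(0.02, 1_000_000, 100_000)
                  - expected_payout(0.01, 1_000_000, 100_000))

print(round(gain_flat), round(gain_superstar))
```

Under these made-up numbers the same unit of extra effort is worth less than half as much against the superstar, which is the sense in which “mailing it in” can be a rational calculation rather than mere intimidation.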

What that suggests though is that what’s going on in tour players’ heads isn’t a fear of Tiger so much as it is a rational calculation based, ultimately, on some sense of fairness or justice. Isn’t that what we might call a reasonable conclusion in the face of evidence of a “rigged” game? It wouldn’t matter from this point of view (though you might compare my previous work on Taylor Smith) whether the game were “actually” gamed in some fashion or other in Tiger’s favor, merely that players behaved as if it were. Or to put it another way, from an individual tour player’s perspective it wouldn’t matter whether Tiger was who he was from sheer ability or from some shadiness: the player-not-named-Woods’ own abilities would be disturbed in some way in either case.

Now this is extremely interesting because what it suggests is that even the perception of inequality is harmful. Brown suggests that societies that insufficiently spread the wealth, however that is defined, in the long run are inefficient: they fail to get the best out of their people. Unequal societies waste human resources. And worse.

If Brown, for instance, was looking for a society that uses a “nonlinear incentive structure” as its working principle, she might have stopped looking for it on pristine golf courses and started in on the southwest corner of Utah, which is perhaps (and probably not coincidentally) some of the most isolated terrain in the continental United States. In that territory north of the Grand Canyon lie the adjacent towns of Colorado City, Arizona and Hilldale, Utah. What’s noticeable about these two towns is that there are lots of large families headed by “single” women: the product of a polygamous sect, the Fundamentalist Church of Jesus Christ of Latter Day Saints (FLDS). It’s an issue adequately explored elsewhere—Jon Krakauer’s Under the Banner of Heaven is perhaps an excellent beginning—but what’s not usually mentioned is something that has rather a bearing on Jennifer Brown’s research.

“Often,” observed the historian of marriage Stephanie Coontz, “the subordination of women is in fact also a way of controlling men.” Or as Libby Copeland, writing for Slate, puts it: “Rich old guys with lots of wives win twice: They have more women to bear them babies and do household work, and they also gain an advantage over other men.” Since they control access to marriage, any man who wants to get married has to deal with them—and since the rich old guys are taking a surplus, that makes a lot of boys inessential to the society. In a polygamous community, then, we’d expect to see a lot of homeless teenaged boys: in 2007, Time magazine said the number of boys abandoned by their polygamous families in that state may number in the thousands. The results of a “nonlinear incentive structure,” as Ms. Brown calls it, aren’t especially difficult to discern in this case: I don’t think the problem of a surplus of unsupervised and despairing teenagers needs much detailing. Nor, perhaps, do Tiger’s off-course problems appear as inscrutable.

I don’t mean, to be sure, to minimize the sufferings of women and children in such a community, but it is worth noting that such arrangements necessarily burden the whole community and not just particular groups within it. By lying down in front of Tiger, for instance, PGA Tour players effectively ceded him not only today’s purses but tomorrow’s: a tour that had had one or two other guys who could have gone the distance with Tiger in 2001 or 2002 might have commanded an even greater television contract. But understanding the mechanism by which the trick is done goes a long way toward understanding how to combat it: remove the “nonlinear incentive structure,” rather than, as has been suggested, somehow convincing everyone on the tour that they’re “tougher,” or whatever, than they thought. Or to put it in terms relevant to a larger field: spend less effort on “raising self-esteem” and more on regularizing pay-scales.

That isn’t, necessarily, to demand that the PGA Tour stop disproportionately rewarding its winners: golf is a sport, and sports aren’t necessarily the same as other parts of life. It can, and has, been argued that pro golf, in particular, needs a dominant, or a few dominant, players in order to make it interesting to the general public: if a different pro won every week, tournaments might come to seem like lotteries for people with the leisure to raise golfers. The regular appearance of some few names, perhaps, creates the possibility of drama.

Drama like that of the last Ali-Frazier fight. Frazier had trained for the fight like a man possessed, knowing that it would be his last shot at the title. Ali, in the midst of domestic turmoil, less so. Sometime in the seventh round, in the early Philippine afternoon—the fight started in the late morning for international television—Ali began to fade from the heat and a relentless assault from Frazier, who would not stop coming despite the furious combinations Ali laid on him. “Joe,” Ali said during a clinch, “they told me you was all washed up.” “They told you wrong, pretty boy,” Frazier replied. It’s arguable that, whatever the medical histories, neither man left that ring whole. For years, golf has wondered how to get that kind of effort out of its players. What evidence suggests is that if golf wants true rivalries, and the drama that results, it might do better to stop catering to the elite—which, despite the fact that it apparently remains unlearned in parts of Utah or the Philippines (or Wall Street), doesn’t appear a difficult lesson.

30 Seconds Over Waialae

“It needs to be good for the next player,” the man was telling me, though it took me a moment to understand him through the molasses of his Georgia-inflected speech. We were on the fourth hole of Augusta National—named “Flowering Crabapple”—and the man, who was considerably older than me, was raking out his player’s bunker to the left of the green with the care that, very likely, the White House gardeners devote to the Rose Garden. But we were done with the hole; we were moving on; there wasn’t time to do things the old man’s way. He was thinking of his responsibility to the other golfers; it was better, I thought, that he take care of his own player first.

Jeff Maggert—and Air Vice Marshal Sir Ralph Cochrane, whom I’ll get to a bit later—might disagree. First though, it’s necessary to explain that I’ve been reading Dan Jenkins’ The Money-Whipped, Steer-Job, Three-Jack Give-Up Artist this past week, which mentions the term “lurkers” (guys who haven’t won on tour) and why they are anathema to sportswriters. “Lurkers,” Jenkins explains, “are your basic nobodies,” players who “lurk around the top of the leaderboard where big names are involved and occasionally win a tournament, thereby screwing up everybody’s story.” But why do they screw up everybody’s story? Jeff Maggert, so I think, can answer part of that question.

Ask Jeff Maggert, for instance, about the 2003 Masters. Or better, ask his caddie, Brian Sullivan. To rehearse the story: on the third hole of the final round at that year’s Masters, Maggert hit himself in the chest with his own golf ball when it rebounded off the face of a fairway bunker. (There are, to be fair, rather a lot of them.) Maggert took a two-shot penalty, lost the lead he’d slept on Saturday night to Mike Weir, and never really recovered.

What not many remember though is that Maggert wasn’t completely out of the tournament until the twelfth hole, where he managed to hit his second shot into Rae’s Creek (incurring more penalty shots) because of what Sullivan said was a case of “Somebody” doing “a very poor job of raking that trap”: instead of rolling back to the bottom of the bunker, his tee shot hung up on the bunker’s face, in a furrow created by a rake. Maggert ended up making an eight on the par-three hole—if he’d just bogeyed that hole and the third, he would have won the tournament by a shot.

The “somebody” whose rake job may have cost Maggert several hundred thousand dollars (and his looper tens of thousands) was Paul Tesori, best known for his “Tiger Who?” hat during the 2000 Presidents’ Cup when Tesori was working for Vijay Singh, then competing with Tiger for the #1 ranking. About that rake job, Tesori said in 2010 that it “was perfect,” and that when he heard about Sullivan’s comments he approached Sullivan and offered to settle it “like men.” Which is to say that what happens in a bunker can very often lead not only to fiscal consequences, but also physical ones.

But no matter how violent those consequences might be, they’re not as weighty as those that pressed on Sir Ralph in 1943 and ’44, when the black-painted night bombers flown by the men of Cochrane’s 5 Group, along with all the other bomb groups, were taking heavy casualties from Nazi night fighter planes because at that point in the war the long-range American fighter, the P-51, had yet to make an appearance in Europe. The only protection the bombers had from enemy fighters was their own gunners.

What Cochrane proposed was, instead of loading up the bombers with more guns, just the opposite: he wanted to take all the heavy guns out of the bombers, along with their turrets and, incidentally, the gunners. That would make the bombers lighter, hence able to fly not only faster but higher, thereby avoiding the night fighters—and the anti-aircraft fire, or “flak”—altogether. (Also, for perhaps poetic reasons, Cochrane wanted to paint these planes white, instead of black.) But Bomber Command nixed Cochrane’s idea.

The reason Bomber Command didn’t want to follow Cochrane’s suggestion was because (in the words of Freeman Dyson, who conducted the original research and whose article “How To Dispel Your Illusions” from the 22 December issue of the New York Review of Books I’ve freely used here), Bomber Command “saw every bomber crew as a tightly knit team of seven.” Bomber Command also believed that as each “team … became more skillful and more closely bonded, their chances of survival would improve.” Experience, that is, improved survival chances, and the gunners were part of each team’s collective experience. Taking out the gunners, in other words, would destroy unit teamwork, thus making it less likely that each crew would ultimately survive. Such at least was Bomber Command’s theory.

Unfortunately, as Freeman Dyson, who ran the numbers, found out, this theory was entirely false. “Teamwork” or “morale” or “experience” didn’t actually improve any given bomber crew’s ability to survive. After canceling out for weather and geography and so forth, Dyson found that “whether a crew lived or died was purely a matter of chance.” Bomber Command’s belief in the value of experience, or skill, “was an illusion.” The men who crewed the bombers and survived the war did so by sheer luck.
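Dyson’s point is easy to reproduce in a chance-only model. In the sketch below the per-mission loss rate is an illustrative round number, not the historical figure; the point is that the fraction of crews completing a thirty-mission tour is fully predicted by luck alone, with no skill or experience term anywhere in the model.

```python
import random

random.seed(0)

# A minimal chance-only model of Dyson's finding: every sortie carries
# the same loss probability regardless of crew experience or teamwork.
LOSS_RATE = 0.04      # illustrative per-mission loss probability, not historical
MISSIONS = 30         # a full tour of operations

def survives_tour():
    """A crew survives the tour only if it survives every mission."""
    return all(random.random() > LOSS_RATE for _ in range(MISSIONS))

crews = 100_000
survivors = sum(survives_tour() for _ in range(crews))

# Under pure chance, the survivor fraction is just (1 - p)^30 -- here
# roughly 29%. No skill term is needed to explain who lived and who died.
expected = (1 - LOSS_RATE) ** MISSIONS
print(round(survivors / crews, 2), round(expected, 2))
```

Any surviving crew in this model would look, after the fact, like a “skilled” one: that is the illusion Dyson’s analysis dispelled.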

Translating this awful story into the story of the 2003 Masters (perhaps a horrifying reduction to some people), what we could say is that Sullivan’s contention is that Maggert lost simply due to chance, or luck, whereas Tesori’s contention would be that ultimately Mike Weir’s skill overcame all obstacles. As the story of Bomber Command’s fateful decision makes clear (the pathos of which makes Randall Jarrell’s 1945 poem, “The Death of the Ball Turret Gunner”—“I woke to black flak and the nightmare fighters”—all the more awful) Tesori’s version is one that we are more susceptible to believe in. The idea that the universe is orderly, one supposes, is more important than the lives of bomb crews.

Regrettably for Sullivan, the evidence appears to be that if there was anyone who “deserved” to win the 2003 Masters, it was the man who did win: Mike Weir, who arguably had the steadiest tournament of any player. ’03 was a year of tough weather—the Thursday round was washed out—and high scores were typical even for players who finished well up on the leaderboard. Ernie Els for example, who eventually finished in 6th place, had a first-round 79. Tiger Woods, who came back on Saturday to scare everyone, had to come back from a first-round 76. But where all of the golfers who finished in the top-ten had one low round to balance their high round, Weir (who also had a 75 in his mix) had two low rounds of four-under 68s. And Weir’s first round of 70 was just a bit better than anyone else’s third-lowest scoring round.

So what one could say is that, while Maggert’s lowest scoring round, 66, was better than Weir’s lowest scoring round, overall Weir had the better tournament—though that would mean discounting the two disastrous holes that cost Maggert the tournament: the 7 and 8 that, had they been merely a bogey and a double bogey, would have made up the difference between the two. In other words, our perception of Weir’s tournament as “steadier” is driven by our knowledge of what eventually happened, not necessarily by the actual value of each player’s golf. Or to put it another way, thinking that Weir “deserved” to win still could be a bit like thinking that the bomber crews that survived the war “deserved” to survive.

In golf, it’s usually only a single shot that makes a difference between first and second place, which is to say that a single round, or even a tournament, is something of too small a sample size to declare whether one player is intrinsically better than another. What we could do instead is compare the careers of Maggert and Weir, in which case what we’d find is that Weir has 8 PGA Tour wins to Maggert’s 3—and Maggert’s wins were at the Disney, the St. Jude, and the World Match Play.

The first two of these tournaments are what could be called “second tier” events: the level of competition is not so high as in some other tournaments. (The Disney is played in the fall, when most of the superstars take time off, while the St. Jude also isn’t on most top players’ rotations because it falls near the Memorial, Jack Nicklaus’ tournament.) And match play, as I’ve discussed before, is an inherently uncertain format that virtually every year it’s played generates complaints about “no names.”

Weir though has not only won a major (which, granted, is just what the discussion is about) but also a Tour Championship (he beat David Toms, Ernie Els, and Sergio Garcia in a playoff) and a World Golf Championship tournament, along with back-to-back wins at the Nissan Open (formerly the L.A. Open and now the Northern Trust Open) at Riviera Country Club. It’s arguable that the quality of Weir’s wins is better than Maggert’s—in addition to the greater number of tournaments Weir has won. In that sense, we could argue that Weir has, over the course of his career, demonstrated a higher quality of golf than has Maggert.

These days the PGA Tour has developed ShotLink, which tracks every shot hit by every player, with what club and with what result. As it happens, that technology was introduced in 2003 at the Nissan Open that Weir won, which is to say the data should, in principle, exist for the Masters that year. But that data is locked up behind a paywall that I, for one, haven’t ponied up for; so while the question is in principle answerable, obtaining the answer requires resources that would probably be better spent elsewhere.

Anyway, even if that data were freely available, it’s not as though it would end the argument necessarily. While both Weir and Maggert have been very solid players, neither is the superstar sort. You aren’t going to find either one on the cover of any non-golf magazine; if there is some difference in quality between the two, it isn’t necessarily that great. Even if Weir’s second shots ended up marginally closer to the hole, perhaps Maggert’s second shots during his round of 66 were much, much better than Weir’s, or Maggert’s putting was demonstrably better than Weir’s throughout the tournament, or some other thing. Regardless, who raked, or didn’t rake, whose bunker doesn’t particularly matter now, nearly a decade after the event, to anyone outside of those involved.

The point, however, is that even if we are able to construct a narrative after the fact that awards Mike Weir the prize, that narrative is not thereby any more “real.” Perhaps, in other words, the “real” quality of their golf is about the same, but Weir somehow received more “breaks” than did Maggert. Improbable, perhaps, but then so is throwing heads 17 times in a row (a one-in-131,072 chance)—or surviving thirty missions 6 miles over Germany.

Had things gone differently, we would have constructed a story to convince us that Jeff Maggert deserved the trophy more because of the sheer brilliance of his third-round 66—just as, in reality, we very nearly had to consider that possibility thanks to the scary 65 Len Mattiace actually threw at Weir in the final round, which forced the playoff that Weir won. But playoffs are, perhaps, the ultimate in coin-tossing luck when it comes to golf: Weir, who has been involved in five playoffs during his PGA career, winning three and losing two, might affirm that himself.

Most times, in other words, merely a coin flip or less separates winners and losers on the PGA Tour, and yet every week someone is required to come up with that week’s storyline. It’s kind of heroic, what golf writers are able to do week in and week out: they turn absurd coincidence into high drama, turn the recipients of good fortune into Bronze Age heroes. It’s been noted by many that golf is one of the few sports in which no one cheers for the underdog: everyone wants Tiger (or Phil or some other “star”), not some no-name, to win.

“When I died,” says Jarrell’s ball turret gunner, a man with no name, “they washed me out of the turret with a hose.” A win for a “star” is, in a curious way, an affirmation of human ability in a cold universe; a win for a no-name is just another reminder of the random chance that surrounds us. That is why the title of one of Dan Jenkins’ books about golf is The Dogged Victims of Inexorable Fate. Jarrell’s finale is a line pregnant with our own mortality: a reminder that, at the end of our days here on the Big Golf Course, there are no winners and losers. Lurkers are like Jarrell’s gunner: unlike a win for Tiger or somebody, they are signs, essentially, of that pale rider, whose vehicle might be the Ghost of St. Trond’s Messerschmitt or, say, a rake.

Incidentally, Johnson Wagner won the Sony Open this past week—the first full-field event of the year—by two shots over four other guys.

What, haven’t you heard of him?