Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
Julius Caesar. Act I, Scene 2.

 

One of Ryan’s steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his “lawyer-caddie.” Ostensibly, it’s meant as a kind of joke, although it’s not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—and Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments, Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are “jobs … hard to come by in recent years” for would-be lawyers, but the jobs that there are come in two flavors—either “something that pays in the modest five figures” (which implies that Ryan might never get out of debt), “or something that pays much better” (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has, in other words, bifurcated: something that, according to a 2010 article called “Talent Grab” by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, “salaries paid to high-level professionals—‘talent’—started to rise.” There are at least two possible explanations for that rise: Gladwell’s is that “members of the professional class” have learned “from members of the working class”—that, in other words, “Talent” has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) might be favored by campus “leftists”: to them, widening inequality might be explained by the same reason that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but in reality, Lord Cornwallis’ forbearance really is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called “Mau Mau Uprising”—also known as the “Kenya Emergency”—during the 1950s: although much of the documentation only came out recently, after a long legal battle—which is the only reason we know about it in the detail we now do—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II, Kenyans, like the peoples of a lot of other European colonies, demanded independence, and like a lot of other European powers, Britain would not give it to them. (A response with which Americans ought to be familiar through our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet unlike the American experience, which largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally-organized troops against each other, what makes the Kenyan episode relevant is that it was fought using the doctrines of counterinsurgency: that is, the “best practices” for the purposes of ending an armed independence movement. In Kenya, this meant “slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes,” as Mark Curtis reported in 2003’s Web of Deceit: Britain’s Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called “enclosed villages.” Those gathered were then “questioned” (i.e., tortured) in order to find those directly involved in the independence movement, and so forth. It’s a catalogue of horror, but what’s more horrifying is that the methods being used in Kenya were also being used, at precisely the same moment, half a world away, by more or less the same people: at the same time as the “Kenya Emergency,” the British Empire was also fighting in what’s called the “Malayan Emergency.”

In Malaya, from 1948 to 1960 the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model for counterinsurgency warfare that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also involved in counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it’s important to remember that the very term “concentration camp” was first widely used in English during the Second Boer War of 1899-1902. “Best practice” in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It’s perhaps no coincidence, in other words, that the basis of elite education in the Western world for millennia began with Julius Caesar’s Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: “All Gaul is divided into three parts …”), the Gallic Wars could also be described as a kind of “how to” manual regarding “pacification” campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, “urged them to take up arms in order to win liberty for all.” In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that “if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them.” The book also describes torture at several points (though, to be sure, it is usually described as the work of the Gauls, not the Romans). Hostage-taking and torture were, in other words, common stuff in elite European education—the British Army did not suddenly create these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of “counterinsurgency,” why didn’t the British Army use them during the “American Emergency” of the 1770s?

According to Pando Daily columnist “Gary Brecher” (a pseudonym for John Dolan), perhaps the “British took it very, very easy on us” during the Revolution because Americans “were white, English-speaking Protestants like them.” In fact, that leniency may have been the reason the British lost the war—at least, according to a U.S. Army War College paper by Lieutenant Colonel Paul Montanus (U.S.M.C.), “A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781.” To Montanus, the British Army “needed to execute a textbook pacification program”—instead, the actions that army took “actually inflamed the [populace] and pushed them toward the rebel cause.” Montanus, in other words, essentially asks the question: why didn’t the Royal Navy sail up the Potomac and grab Martha Washington? Brecher’s point is pretty valid: there simply aren’t a lot of reasons to explain just why Lord Cornwallis or the other British commanders didn’t do that other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet, it might be pointed out that just what the British officers saw is still an open question: did they see “cultural Englishmen”—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence as a result simply of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation Gladwell describes in the professions in American life these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that “‘If we can get rid of the system as we now know it, then Bobby Bonds’ son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’” (Even then, when Barry Bonds was around ten years old, people knew that he was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds’ earning power: Bonds “ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined” (emp. added). Barry Bonds’ success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-85 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune’s Paul Sullivan. And so on: it doesn’t take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited just to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration” to William Morrow & Company—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication, Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, they could reject a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book’s advance. Safire’s attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow’s attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the “acceptability clause” of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—“‘was being fraudulently exercised’” because the reason Morrow wanted to reject Safire’s book wasn’t the reason Morrow said it was (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow’s calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have (supposedly) become far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they more or less negotiate with publishing houses as equals, rather than, as before, effectively as servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the “Talent” (Gladwell’s word) eventually triumphs over “Capital.”

As I mentioned, for a variety of reasons—in the first place, the justification for the study of “culture,” which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, “the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests”—to a lot of academic leftists that triumph would best be explained by the fact that, say, George Lucas and the head of Twentieth Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that “Talent” has been rewarded by “Capital” because of a shared “culture” between the two (apparent) antagonists—just as Britain treated its American subjects differently than its Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malayans and the Boers …). (Which was either “culture”—or money.) But there’s a problem with this analysis: it doesn’t particularly explain Ryan’s situation. After all, if this hypothesis were correct, that would appear to imply that—since Ryan shares a great deal “culturally” with the power elite that employs him on the golf course—Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a “cultural” explanation for why some people in a variety of professions have become compensated far beyond even their fellows within the same profession. Instead, he prefers to explain what happened beginning in the 1970s as instances of people learning how to use a tool long associated with organized labor: the strike.

It’s an explanation that has an initial plausibility about it, in the first place, because of Marvin Miller’s personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players’ union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the “talents” Gladwell writes about made use of either a kind of one-person strike, or the threat of it, to get their way: Lauren Hutton, for example, “decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract”; in 1975 “Hollywood agent Tom Pollock” demanded “that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars”; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire’s book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor’s) was, for instance, telling the publishers that he would withhold the product of his client’s labor until his demands were met, just as the United Auto Workers shut down General Motors’ Flint, Michigan assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it’s arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie Rebel Without a Cause, there’s a scene in which James Dean’s character gets involved in what the kids in his town call a “chickie run”—what some Americans know as the game of “Chicken.” In the variant played in the movie, two players each drive a car towards the edge of a cliff—the “winner” of the game is the one who exits his car closest to the edge, thus demonstrating his “courage.” (The other player is, hence, the “chicken,” or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without A Cause thus describes what happened when Moscow and Washington, D.C. faced each other in October 1962, the incident later called the Cuban Missile Crisis. (“We’re eyeball to eyeball,” then-U.S. Secretary of State Dean Rusk said later about those events, “and I think the other fellow just blinked.”) The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to sequels to Star Wars to Lucas, or Revlon did when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played “Chicken”—and won.

To those committed to a “cultural” explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared “culture” is simply question begging: after all, there have been plenty of people, and unions, that have played games of “Chicken”—and lost. So by itself the game of “Chicken,” it might be argued, explains nothing about what led employers to give way. Yet the “cultural” explanation is lacking on at least two points. In the first place, it doesn’t explain how “rebel” figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries: if it were a matter of “culture,” it’s hard to see how the same technique could work no matter what the underlying business was. In the second place, if “culture” is the explanation, it’s difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, “culture” is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn’t why a number of people across industries revolted against their masters—just as it likely doesn’t especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually “Capital” gave in to “Talent”: “when Miller pushed back, the owners capitulated,” Gladwell says—so quickly, in fact, that even Miller was surprised. In all of these industries, “Capital” gave in so easily that it’s hard to understand why there was any dispute in the first place.

That’s precisely why the ease of that victory is grounds for suspicion: surely, if “Capital” really felt threatened by this so-called “talent revolution,” it would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II. When Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if he didn’t “want to be carried out in a wooden box.” Surely, although industries like sports or publishing are probably governed by less hard-eyed people than automakers, neither are they so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, nor even the fact of a common language. On the other hand, neither does it seem likely that anyone would concede after a minor threat or two. Still, I’d say that thinking about these events in Gladwell’s terms makes a great deal more sense than the “cultural” explanation—not because of the final answer those terms provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry. This time, that industry is the same one where the “cultural” explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of “Talent” at the expense of “Capital”; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, “the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks.” The “Talent Revolution,” in short, has come to the academy too. Yet, if so, it has had some curious consequences: if “Talent” were something mysterious, one might expect it to come from anywhere—yet academia appears to think that it comes from the same few sources.

As Joel Warner and Aaron Clauset, an assistant professor of computer science at the University of Colorado, wrote in Slate recently, “18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors.” (In fact, when it comes to history, “the top 10 schools produce three times as many future professors as those ranked 11 through 20.”) This, one might say, is curious indeed: why should “Talent” be continually discovered in the same couple of places? It’s as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

“Institutional affiliation,” as Shea puts the point, “has come to function like inherited wealth” within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, and not their comrades at lower levels of organized baseball. “Since 1976,” Patrick Redford noted in Deadspin recently, major leaguers’ “salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent.” Minor league baseball players can, Redford says, “barely earn a living while playing baseball”—it’s not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has a piece for instance describing his playing days in the Jayhawk League in Kansas: “our per diem,” Hines reports, “was a measly 15 dollars per day.”) And while it might be difficult to have much sympathy for minor league baseball players—they get to play baseball!—that’s exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny that minor leaguers are subject to the Fair Labor Standards Act: as the author called “the Legal Blitz” wrote for Above the Law: Redline, “Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers.” In other words, because baseball is a “game” and not a business, baseball doesn’t have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale’s Peter Brooks told the New York Times when Yale’s Graduate Employees and Students Organization (GESO) went out on strike in the late 1990s) that their graduate-student teachers are “among the blessed of the earth,” not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale’s administration argued “that graduate students can’t possibly be workers, since they are admitted (not hired) and receive stipends (not wages).” But if the pastoral rhetoric—a rhetoric that excludes considerations common to other pursuits, like gambling—surrounding both baseball and the academy is cut away, the position of universities is much the same as Major League Baseball’s, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries at least in one respect: as presently constituted, they’re dependent on small numbers of highly productive people—which is just why “Capital” tumbled so easily in the way Gladwell described in the 1970s.

Just as scholars only rarely turn out to be as productive as their early promise suggests, in other words, so too with baseball players: as Jim Callis noted for Baseball America (as cited in the paper “Initial Public Offerings of Baseball Players,” by John D. Burger, Richard D. Grayson, and Stephen Walters), “just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star.” Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, so most academic production is done by just a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as “Price’s Law,” “Lotka’s Law,” or “Bradford’s Law.” (Or, there’s the notion described as “Sturgeon’s Law”: “90% of everything is crap.”) Hence, rationally enough, universities (and baseball teams) only want to pay for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters when, with the same money, you can buy just one superstar?
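
To give a sense of just how lopsided such distributions are, here is a minimal sketch of what Lotka’s inverse-square law implies; the law itself is standard, but the cutoff and the code are mine, added purely for illustration.

```python
# A rough sketch (mine, not the author's) of the concentration implied by
# Lotka's inverse-square law: the number of authors who write n papers is
# proportional to 1/n**2. The cutoff of 100 papers per author is arbitrary.
N = 100
authors = {n: 1.0 / n**2 for n in range(1, N + 1)}     # relative number of authors at each output level
papers = {n: n * authors[n] for n in range(1, N + 1)}  # papers produced at each output level

total_authors = sum(authors.values())
total_papers = sum(papers.values())

prolific_authors = sum(v for n, v in authors.items() if n >= 10) / total_authors
prolific_output = sum(v for n, v in papers.items() if n >= 10) / total_papers

print(round(prolific_authors, 2), round(prolific_output, 2))
# roughly 0.06 and 0.45: about six percent of authors produce nearly half the papers
```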

That might explain just why it is that William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren’t persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interests to reward the “superstars” wildly, because that bought them the most production at the cheapest rate. Why pay a ton of guys to hit all of the home runs, you might say, when, for much less, you can buy Barry Bonds? (In 2001, for instance, major leaguers collectively hit over 5,000 home runs—but Barry Bonds hit 73 of them, in a season in which 20 was a perfectly respectable total for an everyday player.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father’s teammates did for their entire careers): given that Barry Bonds is so much more productive than his peers, it’s arguable that, despite his vast salary, he was actually underpaid.

If you assign a price to each home run, that is, Bonds got a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take the total salaries paid to all the major leaguers in a given season, divide by all the home runs they hit, and so work out the average price per home run. Although I haven’t actually done the calculation, I would bet that the average price is more than the price per home run received by Barry Bonds—which isn’t even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there’s no way they can get 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and what they pay (wildly under market) to the (vastly more numerous) non-superstars.
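
For concreteness, here is the back-of-the-envelope arithmetic; the only figure below taken from the reporting cited above is the $59,524 result, while Harper’s 2015 totals (42 home runs on a salary of roughly $2.5 million) and the veteran comparison are assumptions added for illustration.

```python
# Dollars-per-home-run arithmetic. Harper's 2015 line (42 home runs, ~$2.5
# million salary) is an assumption added here; it reproduces the $59,524
# figure Newsday reported. The "veteran star" line is purely hypothetical.
def price_per_home_run(salary, home_runs):
    return salary / home_runs

print(round(price_per_home_run(2_500_000, 42)))    # 59524 -- Harper on a rookie-scale deal
print(round(price_per_home_run(25_000_000, 40)))   # 625000 -- a hypothetical veteran star at market price
```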

Not only that, but wildly overpaying also has a secondary benefit, as Walter Benn Michaels has observed: by paying “Talent” vastly more money, not only is “Capital” actually getting a bargain (because no matter what “Talent” got paid, it simply couldn’t be paid what it was really “worth”), but “Talent’s” (seemingly vast, but in reality undervalued) salaries also enable the system to be presented as “fair”—if you aren’t getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that’s because you’re not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational “institutions ranging from U.I.C. to Harvard” like to depict themselves as “meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth.” Which, as it happens, just might explain why, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the “American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here.” That reality, in turn, is something that American universities, which are supposed to pay attention to realities like this, have resolutely turned their heads away from: as Michaels says, “the intellectual left has responded to the increase in economic inequality”—that is, the supposed “Talent Revolution”—“by insisting on the importance of cultural identity.” In other words, “when it comes to class difference” (as Michaels says elsewhere), even though liberal professors “have understood our universities to be part of the solution, they are in fact part of the problem.” Hence, Ryan’s educational accomplishments (remember Ryan? This is an essay about Ryan) aren’t actually helping him: in reality, they’re precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.

 


At Play In The Fields Of The Lord

Logo for the 2015 U.S. Amateur at Olympia Fields Country Club

 

Behold, I send you forth as sheep in the midst of wolves:
be ye therefore wise as serpents, and harmless as doves.
—Matthew 10:16

Now that the professional Open tournaments are out of the way, the U.S. Amateur approaches. The Amateur has always been a symbol of wealth and discrimination—it was invented specifically to keep out the riff-raff of professional golfers—and the site of this year’s edition might be considered particularly unfortunate: the tournament falls just over a year after the Michael Brown shooting in Ferguson, Missouri, and Olympia Fields, in Chicago’s south suburbs, is a relatively wealthy enclave among a swath of exceedingly poor villages and towns, very like the terrain of the St. Louis suburbs just a few hundred miles away. Yet there’s a deeper irony at work here that might be missed even by those who’d like to point out that similarity of setting: the format of the tournament, match play, highlights precisely what the real message of the Brown shooting was. That real message, the one that is actually dangerous to power, wasn’t the one shouted by protestors—that American police departments are “racist.” The really dangerous message is the one echoed by the Amateur: a message that, read properly, tells us that our government’s structure is broken.

The later rounds of the U.S. Amateur are played under golf’s match play, rather than stroke play, rules—a difference that will seem arcane to those unfamiliar with the sport, but a very significant one nevertheless. In stroke play, competitors play whatever number of holes are required—in professional tournaments, usually 72—and count up however many strokes each took: the player with the fewest strokes wins. In stroke play, that is, each golfer is effectively playing against every other player in the field, because every stroke by every player counts. Match play is not the same.

In the first place, match play consists of, as the name suggests, matches: that is, once the field is cut to the 64 players with the lowest scores after an initial two-day stroke play tournament, each of those 64 contestants plays an 18-hole match against one other contestant. The winner of each of these matches then moves on, until there is a champion—a single-elimination tournament exactly like the NCAA basketball tournament held every year in March. The winner of each match, in turn, as John Van der Borght says on the website of the United States Golf Association, “is the player who wins the most holes.” That is, what matters on every hole is just whether the golfer has shot a lower score than the opponent on that hole, not overall. Each hole starts the competition again, in other words—like flipping coins, what happened in the past is irrelevant. It’s a format that might sound hopeful, because on each hole whatever screw-ups a player commits are consigned to the dustbin of history. In fact, however, it’s just this element that makes match play the least egalitarian of formats—and ties it to Ferguson.

Tournaments conducted under match play rules are always subject to a kind of mathematical oddity called a Simpson’s Paradox: such a paradox occurs when, as the definition on Wikipedia says, it “appears that two sets of data separately support a certain hypothesis, but, when considered together, they support the opposite hypothesis.” For example, as I have mentioned in this blog before, in the first round of the PGA Tour’s 2014 Accenture Match Play tournament in Tucson, an unknown named Pedro Larrazabal shot a 68 to Hall-of-Famer Ernie Els’ 75—but because they played different opponents, Larrazabal was out of the tournament and Els was in. Admittedly, even with such an illustration the idea might still sound opaque, but the meaning can be seen by considering, for example, the tennis player Roger Federer’s record versus his rival Rafael Nadal.

Roger Federer has won 17 major championships in men’s tennis, a record—and yet many people argue that he is not the Greatest Of All Time (G.O.A.T.). The reason those people can argue that is because, as Michael Steinberger pointed out in the New York Times not long ago, Federer “has a losing record against Nadal, and a lopsided one at that.” Steinberger then proceeded to argue why that record should be discarded and Federer should be called the “GOAT” anyway. But weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—what an anonymous blogger did in 2009: a feat that demonstrates just what a Simpson’s Paradox is, and how it might apply both to the U.S. Amateur and Ferguson, Missouri.

What that blogger did, on a blog entitled SW19—a reference to the United Kingdom’s postal code for Wimbledon, the great tennis arena—was simple: he counted up the points.

Let me repeat: he counted up the points.

That might sound trivial, of course, but as the writer of the SW19 blog realized, tennis is a game that abounds in Simpson’s Paradoxes: that is, it is a game in which it is possible to score fewer points than your opponent, but still win the match. Many people don’t realize this: it might be expected, for example, that because Nadal has an overwhelmingly dominant win-loss record versus Federer, he must also have won an equally dominant share of the points from the Swiss champion. But an examination of the points scored in each of the matches between Federer and Nadal demonstrates that in fact the difference between them was minuscule.

The SW19 blogger wrote his post in 2009; at that time Nadal led Federer by 13 matches to 7, a 65 percent winning edge for the Spaniard. Of those 20 matches, Nadal won the 2008 French Open—played on Nadal’s best surface, clay—in straight sets, 6-1, 6-3, 6-0. In those 20 matches, the two men played 4,394 total points: a “point” being the sequence in which one player serves and the two rally until one of them fails to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—like golf’s stroke play format, in which every “point” (stroke) is simply added to the total and the winner has the fewest—then it might be expected that Nadal had won about 65 percent of those 4,394 points, which would be about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have about a 65 percent edge in total points: the point total, as opposed to the match record, between the two ought to be about 2,856 to 1,538.

Yet this, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal, 2,221, and Federer, 2,173. In other words, even including the epic beating at Roland Garros in 2008, Nadal had beaten Federer by a grand total of 48 points over the course of their careers, or roughly one percent of all the points scored. Not merely that, but if that single match at the 2008 French Open is excluded, then the margin becomes eight points. The mathematical difference between Nadal and Federer, thus, is the difference between a couple of motes of dust on the edge of a coin while it’s being flipped—if what is measured is the act that is the basis of the sport, the act of scoring points. In terms of points scored, Nadal’s edge is about half a percentage point—and most of that edge was generated by a single match. But Nadal had a 65 percent edge in their matches.
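
The arithmetic is easy to check from the totals quoted above; nothing new is added here, this simply re-derives the blogger’s figures.

```python
# Re-deriving the SW19 figures quoted above.
nadal, federer = 2221, 2173

total = nadal + federer             # 4394 points across the twenty matches
margin = nadal - federer            # 48 points
nadal_point_share = nadal / total   # about 0.505 of all points played
nadal_match_share = 13 / 20         # 0.65 of the matches

print(total, margin, round(nadal_point_share, 3), nadal_match_share)
# 4394 48 0.505 0.65
```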

How did that happen? The answer is that the structure of tennis scoring is similar to that of match play in golf: the relation between wins and points isn’t direct. In fact, as the SW19 blogger shows, of the twenty matches Nadal and Federer had played to that moment in 2009, Federer had actually scored more points than Nadal in three of them—and still lost those matches. If there were a direct relation between points and wins in tennis, that is, the record between Federer and Nadal would actually stand even, at 10-10, instead of what it was in reality, 13-7—a record that would have accurately captured the real point differential between them. But because what matters in tennis isn’t—exactly—the total number of points you score, but instead the numbers of games and sets you win, it is entirely possible to score more points than your opponent in a tennis match—and still lose. (Or, the converse.)

The reason that is possible, as Florida State University professor Ryan Rodenberg put it in The Atlantic not long ago, is “tennis’ decidedly unique scoring system.” (Actually, not quite unique, because as might be obvious by now match play golf is scored similarly.) In sports like soccer, baseball, or stroke play golf, as sports psychologist Allen Fox once wrote in Tennis magazine, “score is cumulative throughout the contest … and whoever has the most [or, in the case of stroke play golf, least] points at the end wins.” But in tennis things are different: “[i]f you reach game point and win it, you get the entire game while your opponent gets nothing—all the points he or she won in the game are eliminated.” Just as what matters in tennis is the game, not the point, so in match play golf all that matters is the hole, not the stroke.
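
A toy example, invented here purely for illustration, shows how that kind of scoring can hand a match to the player who scores fewer points. Suppose the winner of every game takes it four points to two:

```python
# Toy illustration of a tennis-style Simpson's Paradox: the per-game scores
# are invented (winner of each game takes it 4 points to 2), not real data.
def match_points(sets):
    """sets: list of (games won by A, games won by B) for each set."""
    a_points = b_points = 0
    for a_games, b_games in sets:
        a_points += a_games * 4 + b_games * 2   # A: 4 in games A won, 2 in games A lost
        b_points += b_games * 4 + a_games * 2
    return a_points, b_points

# A takes two close sets, then gets blown out in the third:
print(match_points([(6, 4), (6, 4), (0, 6)]))
# (76, 80): A wins the match two sets to one while scoring fewer total points.
```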

Such scoring systems breed Simpson’s Paradoxes: that is, results that don’t reflect the underlying value a scoring system is meant to reflect—we want our games to be won by the better player, not the lucky one—but instead are merely artifacts of the system used to measure. The point (ha!) can be shown by way of an example taken from a blog written by one David Smith, head of marketing for a company called Revolution Analytics, about U.S. median wages. In that 2013 post, Smith reported that the “median US wage has risen about 1%, adjusted for inflation,” since 2000. But was that statistic important—that is, did it measure real value?

Well, what Smith found was that wages for high school dropouts, high school graduates, high school graduates with some college, college graduates, and people with advanced degrees all fell over the same period. Or, as Smith says, “within every educational subgroup, the median wage is now lower than it was in 2000.” But how can it be that “overall wages have risen, but wages within every subgroup have fallen?” The answer is similar to the reason Nadal had a 65 percent winning margin against Federer: although there are more college graduates now than in 2000, the wages of college graduates have fallen less (1.2 percent) than those of, say, high school dropouts (7.9 percent); because the workforce has shifted toward the better-paid groups, the overall figure can rise even as every group’s wage falls. So despite the fact that every group is poorer—receiving lower wages, adjusted for inflation—than in 2000, mendacious people can say wages are actually up. Wages are up—if you “compartmentalize” the numbers in just the way that reflects the story you’d like to tell.
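
A made-up miniature of Smith’s point makes the mechanism plain; the numbers below are invented, and the sketch uses a workforce-weighted average rather than a median, but the mix-shift effect is the same.

```python
# Invented numbers illustrating the mix-shift: each group's wage falls,
# yet the overall (workforce-weighted) figure rises, because the workforce
# shifts toward the better-paid group. Each entry is (share, wage).
wages_2000 = {"dropouts": (0.6, 30_000), "graduates": (0.4, 60_000)}
wages_2013 = {"dropouts": (0.3, 28_000), "graduates": (0.7, 58_000)}

def overall_wage(groups):
    return sum(share * wage for share, wage in groups.values())

print(overall_wage(wages_2000))  # 42000.0
print(overall_wage(wages_2013))  # 49000.0 -- "up," even though both groups fell
```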

Now, while the story about American wages might suggest a connection to Ferguson—and it does—that isn’t the connection between the U.S. Amateur and Ferguson, Missouri, that I’d like to discuss. That connection is this one: if the trouble with the U.S. Amateur is that it is conducted under match play—a format that permits Simpson’s Paradox results—and Simpson’s Paradoxes are, at heart, boundary disputes—arguments about whether to divide up the raw data into smaller piles or present them as one big pile—then the real link to Ferguson comes into view: the real issue behind Darren Wilson’s shooting of Michael Brown isn’t racism—or at least, the way to solve it isn’t to talk about racism. Instead, it’s to talk about borders.

After Ferguson police officer Darren Wilson shot Michael Brown last August, the Department of Justice issued a report that was meant, as Zoë Carpenter of The Nation wrote this past March, to “address the roots of the police force’s discriminatory practices.” That report held that those practices were not “simply the result of racist cops,” but instead stemmed “from the way the city preys on residents financially, relying on the fines that accompany even minor offenses to balance its budget.” The report found an email from Ferguson’s finance director to the town’s police chief that, Carpenter reported, said “unless ticket writing ramps up significantly before the end of the year, it will be hard to significantly raise collections next year.” The finance director’s concerns were justified: only slightly less than a quarter of Ferguson’s total budget was generated by traffic tickets and other citations. The continuing operation of the town depends on revenue raised by the police—a need, in turn, that drives the kind of police zealotry that the Department of Justice said contributed to Brown’s death.

All of which might seem quite far from the concerns of the golf fans watching the results of the matches at the U.S. Amateur. Yet consider a town not far from Ferguson: Beverly Hills, Missouri. Like Ferguson, Beverly Hills is located to the northwest of downtown St. Louis, and like Ferguson it is a majority black town. But where Ferguson has over 20,000 residents, Beverly Hills has only around 600 residents—and that size difference is enough to make the connection to the U.S. Amateur’s format of play, match play, crystalline.

Ferguson after all is not alone in depending so heavily on police actions for its revenues: Calverton Park, for instance, is another Missouri “municipality that last fiscal year raised a quarter of its revenue from traffic fines,” according to the St. Louis Post-Dispatch. Yet while Ferguson, like Calverton Park, raised about a quarter of its budget from police actions, Beverly Hills raised something like half of its municipal budget from traffic and other kinds of citations, as a story in the Washington Post reported. All these little towns, all dependent on traffic tickets to meet their budgets: “Most of the roughly ninety municipalities in St. Louis County,” Carpenter reports in The Nation, “have their own courts, which … function much like Ferguson’s: for the purpose of balancing budgets.” Without even getting into the issue of the fairness of property taxes or sales taxes as a basis for municipal budgeting, it seems obvious that depending on traffic tickets as a major source of revenue is poor planning at best. Yet without the revenue provided by cops writing tickets—and, as a result of Ferguson, the state of Missouri is considering limiting the percentage of a town’s budget that can be raised by such tickets, as the St. Louis Post-Dispatch article says—many of these towns will simply fail. And that is the connection to the U.S. Amateur.

What these towns are having to consider, in other words, is, according to the St. Louis Post-Dispatch, an option mentioned by St. Louis County Executive Steve Stenger last December: during an interview, the official said that “the consolidation of North County municipalities is what we should be talking about” in response to the threat of cutting back reliance on tickets. Small towns like Beverly Hills may simply be too small: they create too little revenue to support themselves without a huge effort on the part of the police force to find—and thus, in a sense, create—what are essentially taxable crimes. The way to solve the problem of a “racist” police department, in other words, might not be to conduct workshops or seminars in order to “retrain” the officers on the front line, but instead to redraw the political boundaries of the greater St. Louis metropolitan area.

That, at least, is a solution that our great-grandparents considered, as an article by writer Kim-Mai Cutler for TechCrunch this past April remarked. Examining the historical roots of the housing crisis in San Francisco, Cutler discovered that in “1912, a Greater San Francisco movement emerged and the city tried to annex Oakland,” a move Oakland resisted. Yet as a consequence of not creating a Bay-wide government, Cutler says, “the Bay Area’s housing, transit infrastructure and tax system has been haunted by the region’s fragmented governance” ever since: the BART (Bay Area Rapid Transit) system, for example, as originally designed “would have run around the entire Bay Area,” Cutler says, “but San Mateo County dropped out in 1961 and then Marin did too.” Many of the problems of that part of Northern California could be solved, Cutler thus suggests through this and other examples—contra the received wisdom of our day—by bigger, not smaller, government.

“Bigger,” that is, in the sense of “more consolidated”: by the metric of sheer numbers, a government built to a larger scale might not employ as many people as do the scattered suburban governments of America today. But what such a government would do is capture all of the efficiencies of economies of scale available to a larger entity—thus, it might be in a sense smaller than the units it replaced, but it would certainly be more powerful. What Missourians and Californians—and possibly others—may be realizing, then, is that the divisions between their towns are like the divisions tennis makes around its points, or match play golf makes around its strokes: dividing a finite resource, whether points or strokes or tax dollars (or votes), into smaller pools creates what might be called “unnatural,” or “artificial,” results—i.e., results that inadequately reflect the real value of the underlying resource. Just as match play can make Ernie Els’ 75 look better than Pedro Larrazabal’s 68, or tennis’ scoring system can make Rafael Nadal look much better than Federer—when in reality the difference between them is (or was) no more than a sliver of a gnat’s eyelash—so dozens of little towns dissipate the real value, economic and otherwise, of the people who inhabit a region.

That’s why when Eric Holder, Attorney General of the United States, said that “the underlying culture” of the police department and court system of Ferguson needs to be reformed, he got it exactly wrong. The problems in St. Louis and San Francisco, the evidence suggests, are created not because government is getting in the way, but because government isn’t structured correctly to channel the real value of the people: scoring systems that leave participants subject to the vagaries of Simpson’s Paradox results might be perfectly fine for games like tennis or golf—where the downsides are minimal—but they shouldn’t be how real life gets scored, and especially not in government. Contra Holder, the problem is not that the members of the Ferguson police department are racists. The problem is that the government structure requires them, like occupying soldiers or cowboys, to view their fellow citizens as a kind of herd. Or, to put the matter in a pithier way: a system that depends on the harvesting of sheep will turn its agents into wolves. Instead of drowning out the effects of racism—as a big enough government would through its very size—multiplying struggling towns only encourages racism: instead of diffusing racism, a system broken into little towns focuses it. The real problem of Ferguson then—the real problem of America—is not that Americans are systematically discriminatory: it’s that the systems used by Americans aren’t keeping the score right.

Bait and Switch

Golf, Race, and Class: Robert Todd Lincoln, Oldest Son of President Abraham Lincoln, and President of the Chicago Golf Club

But insiders also understand one unbreakable rule: They don’t criticize other insiders.
Senator Elizabeth Warren, A Fighting Chance

… cast out first the beam out of thine own eye …
Matthew 7:5

 

“Where are all the black golfers?” Golf magazine’s Michael Bamberger asked back in 2013: Tiger Woods’ 1997 victory at the Masters, Bamberger says, was supposed to open “the floodgates … to minority golfers in general and black golfers in particular.” But nearly two decades later Tiger is the only player on the PGA Tour to claim to be African-American. It’s a question likely to loom larger as time passes: Woods missed the cut at last week’s British Open, the first time in his career he has missed a cut in back-to-back majors, and FiveThirtyEight.com’s line from April about Woods (“What once seemed to be destiny—Woods’ overtaking of Nicklaus as the winningest major champion ever—now looks like a fool’s notion”) seems more prophetic than ever. As Woods’ chase for Nicklaus fades, almost certainly the question of Woods’ legacy will turn to the renaissance in participation Woods was supposedly going to unleash—a renaissance that never happened. But where will the blame fall? Once we exclude Woods from responsibility for playing Moses, is the explanation for why there are no black golfers, as Bamberger seems to suggest, that golf is racist? Or is it, as Bamberger’s own reporting shows, more likely due to the economy? And further, if we can’t blame Woods for not creating more golfers in his image, can we blame Bamberger for giving Americans the story they want instead of the story they need?

Consider, for instance, Bamberger’s mention of the “Tour caddie yard, once a beautiful example of integration”—and now, he writes, “so white it looks like Little Rock Central High School, circa 1955.” Or his description of how, in “Division I men’s collegiate golf … the golfers, overwhelmingly, are white kids from country-club backgrounds with easy access to range balls.” Surely, although Bamberger omits the direct reference, the rise of the lily-white caddie yard is not due to a racist desire to bust up the beautifully diverse caddie tableau Bamberger describes, just as it seems more likely that the presence of the young white golfers at the highest level of collegiate golf owes more to their long-term access to range balls than it does to the color of their skin. Surely the mysterious disappearance of the black professional golfer is more likely due—as the title of a story by Forbes contributor Bob Cook has it—to “How A Declining Middle Class Is Killing Golf” than to golf’s racism. An ebbing tide lowers all boats.

“Golf’s high cost to entry and association with an older, moneyed elite has resulted in young people sending it to the same golden scrap heap as [many] formerly mass activities,” as Cook wrote in Forbes—and so, as “people [have] had less disposable income and time to play,” golf has declined among all Americans and not just black ones. But then, maybe that shouldn’t be surprising when, as Scientific American reported in March, the “top 20% of US households own more than 84% of the wealth, and the bottom 40% combine for a paltry 0.3%,” or when, as Time said two years ago, “the wages of median workers have remained essentially the same” for the past thirty years. So it seems likelier that the non-existent black golfer can be found at the bottom of the same hole to which many other once-real and now-imaginary Americans—like a unionized, skilled, and educated working-class—have been consigned.

The conjuring trick, however, whereby the disappearance of black professional golfers becomes a profound mystery, rather than a thoroughly understandable consequence of the well-documented overall decline in wages for all Americans over the past two generations, would be no surprise to Walter Benn Michaels of the University of Illinois at Chicago. “In 1947,” Michaels has pointed out, for instance, rehearsing the statistics, “the bottom fifth of wage-earners got 5 per cent of total income,” while “today it gets 3.4 per cent.” But the literature professor is aware not only that inequality is rising, but also that it’s long been a standard American alchemy to turn economic matters into racial ones.

Americans, Michaels has written, “love thinking that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” Why? Because if the differences between us are due to money, and the lack of it, then there’s a “need to get rid of inequality or to justify it”—while on the other hand, if those differences are racial, then there’s a simple solution: “appreciating our diversity.” In sum, if the problem is due to racism, then we can solve it with workshops and such—but if the problem is due to, say, an historic loss of the structures of middle-class life, then a seminar during lunch probably won’t cut it.

Still, it’s hard to blame Bamberger for refusing to see what’s right in front of him: Americans have been turning economic issues into racial ones for some time. Consider the argument advanced by the Southern Literary Messenger (the South’s most important prewar magazine) in 1862: the war, the magazine said, was due to “the history of racial strife” between “a supposedly superior race” that had unwisely married its fortune “with one it considered inferior, and with whom co-existence on terms of political equality was impossible.” According to this journal, the Civil War was due to racial differences, and not to any kind of clash between two different economic interests—one of which was getting incredibly wealthy by the simple expedient of refusing to pay its workers and then protecting that investment by making secret, large-scale purchases of government officials and relying on bought-and-paid-for judges. (You know, not like today.)

Yet despite how ridiculous it sounds—because it is—the theory does have a certain kind of loopy logic. According to these Southern, and some Northern, minds, the two races were so widely divergent politically and socially that their deep, historical differences were the obvious explanation for the conflict between the two sections of the country—instead of that conflict being the natural result of allowing a pack of lying, thieving criminals to prey upon decent people. The identities of these two races—as surely you have already guessed, since the evidence is so readily apparent—were, as historian Christopher Hanlon graciously informs us: “the Norman and Saxon races.”

Duh.

Admittedly, the theory does sound pretty out there—though I suspect it sounds a lot more absurd now that you know which races these writers were talking about, rather than the ones you probably thought they were talking about. Still, it’s worth knowing something of the details, if only to understand how these could have been considered rational arguments: to understand, in other words, how people can come to think of economic matters as racial, or cultural, ones.

In the “Normans vs. Saxons” version of this operation, the theory comes in two flavors. According to University of Georgia historian James Cobb, the Southern flavor of this racial theory held that Southerners were “descended from the Norman barons who conquered England in the 11th century and populated the upper classes of English society,” and were thus naturally equipped for leadership. Northern versions held much the same, but flipped the script: as Ralph Waldo Emerson wrote in the 1850s, the Normans were “greedy and ferocious dragoons, sons of greedy and ferocious pirates” who had, as Hanlon says, “imposed serfdom on their Saxon underlings.” To both sides, then, the great racial conflagration, the racial apocalypse destined to set the continent alight, would be fought between Southern white people … and Northern white people.

All of which is to say that Americans have historically liked to make their economic conflicts about race, and they haven’t always been particular about which ones—which might seem like downer news. But there is, perhaps, a bright spot to all this: whereas the Civil War-era writers treated “race” as a real description of a natural kind—as if their descriptions of “Norman” or “Saxon” had as much validity as a description of a great horned toad or Fraser’s eagle owl—nowadays Americans like to “dress race up as culture,” as Michaels says. This current orthodoxy holds that “the significant differences between us are cultural, that such differences should be respected, that our cultural heritages should be perpetuated, [and] that there’s a value in making sure that different cultures survive.” Nobody mentions that substituting “race” and “racial” for “culture” and “cultural” doesn’t change the sentence’s meaning in any important respects.

Still, it certainly has had an effect on current discourse: it’s what caused Bamberger to write that Tiger Woods “seems about as culturally black as John Boehner.” The phrase “culturally black” is arresting, because it implies that “race” may not be a biological category, as it was for the “Normans vs. Saxons” theorists. And certainly, that’s a measure of progress: just a generation or two ago it was possible to refer unselfconsciously to race in an explicitly biological way. So in that sense, the fact that a Golf magazine writer feels it necessary to clarify that “blackness” is a cultural, and not a biological, category might be counted as a victory.

The credit for that victory surely goes to what the “heirs of the New Left and the Sixties have created, within the academy,” as Stanford philosopher Richard Rorty wrote before his death—“a cultural Left.” The victories of that Left have certainly been laudable—they’ve even gotten a Golf magazine writer to talk about a “cultural,” instead of biological, version of whatever “blackness” is! But there’s also a cost, as Rorty wrote: this “cultural Left,” he said, “thinks more about stigma than money, more about deep and hidden psychosexual motivations than about shallow and evident greed.” Seconding Rorty’s point, University of Chicago philosopher Martha Nussbaum has written that academia today is characterized by “the virtually complete turning away from the material side of life, toward a type of verbal and symbolic politics”—a “cultural Left” that thinks “the way to do … politics is to use words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness,” and that “instructs its members that there is little room for large-scale social change, and maybe no room at all.” So, while it might be slightly better that mainstream publications now think of race in cultural, instead of biological, terms, given the real facts of economic life in the United States this might not be the triumph it’s sometimes said to be.

Yet the advice of the American academy is that what the United States needs is more talk about culture, rather than a serious discussion about political economy. Their argument is a simple one, summarized by the recently deceased historical novelist E.L. Doctorow in an essay called “Notes on the History of Fiction”: there, the novelist argues that while there is a Richard III Society in England attempting to “recover the reputation of their man from the damage done to it by the calumnies of Shakespeare’s play,” all their efforts are useless—“there is a greater truth for the self-reflection of all mankind in the Shakespearean vision of his life than any simple set of facts can summon.” What matters, Doctorow is arguing, isn’t the real Richard III—coincidentally, the man apparently recently dug up in an English parking lot—but rather Shakespeare’s approximation of him, just in the same way that some Civil War-era writers argued that what mattered was “race” instead of the economics of slavery, or how Michael Bamberger fails to realize that the presence of the real white golfers in front of him fairly easily explains the absence of the imaginary black golfers who aren’t. What Doctorow is really saying, then, and thus by extension what the “cultural Left” is really saying, is that the specific answer to the question of where the black golfers are is irrelevant, because dead words matter more than live people—an idea, however, that seems difficult to square with the notion that, as the slogan has it, black lives matter.

Golfers or not.

The Smell of Victory

To see what is in front of one’s nose needs a constant struggle.
George Orwell. “In Front of Your Nose”
    Tribune, 22 March 1946

 

Who says country clubs are irony-free? When I walked into Medinah Country Club’s caddie shack on the first day of the big member-guest tournament, the Medinah Classic, Caddyshack, that vicious satire of country-club stupidity, was on the television. These days, far from being patterned after Caddyshack’s Judge Smails (a pompous blowhard), most country club members are capable of reciting the lines of the movie nearly verbatim. Not only that—they’ve internalized the central message of the film, the one indicated by the “snobs against the slobs” tagline on the movie poster: the moral that, as another 1970s cinematic feat put it, the way to proceed through life is to “trust your feelings.” Like a lot of films of the 1970s—Animal House, written by the same team, is another example—Caddyshack’s basic idea is not to trust rationality: i.e., “the Man.” Yet, as the phenomenon of country club members who’ve memorized Caddyshack demonstrates, that message has now become so utterly conventional that even the Man doesn’t trust the Man’s methods—which is how, just like O.J. Simpson’s jury, the contestants in this year’s Medinah Classic were prepared to ignore probabilistic evidence that somebody was getting away with murder.

That’s a pretty abrupt jump-cut in style, to be sure, particularly with regard to a sensitive subject like spousal abuse and murder. Yet to get caught up in the (admittedly horrific) details of the Simpson case is to miss the forest for the trees—at least according to a short 2010 piece in the New York Times entitled “Chances Are,” by Steven Strogatz, the Schurman Professor of Applied Mathematics at Cornell University.

The professor begins by observing that the prosecution spent the first ten days of the six-month-long trial establishing that O.J. Simpson abused his wife, Nicole. From there, prosecutors like Marcia Clark and Christopher Darden introduced statistical evidence showing that abused women who are murdered are usually killed by their abusers. Thus, as Strogatz says, the “prosecution’s argument was that a pattern of spousal abuse reflected a motive to kill.” Unfortunately, however, the prosecution did not highlight a crucial point about its case: Nicole Brown Simpson was dead.

That, you might think, ought to be obvious in a murder trial, but because the prosecution did not underline the fact that Nicole was dead, the defense, led on this issue by famed trial lawyer Alan Dershowitz, could (and did) argue that “even if the allegations of domestic violence were true, they were irrelevant.” As Dershowitz would later write, the defense claimed that “‘an infinitesimal percentage—certainly fewer than 1 of 2,500—of men who slap or beat their domestic partners go on to murder them.’” Ergo, even if battered women do tend to be murdered by their batterers, that didn’t mean that this battered woman (Nicole Brown Simpson) was murdered by her batterer, O.J. Simpson.

In a narrow sense, of course, Dershowitz’s claim is true: most abused women, like most women generally, are not murdered. So it is absolutely true that very, very few abusers are also murderers. But as Strogatz says, the defense’s argument was a very slippery one.

It’s true, in other words, that, as Strogatz says, “both sides were asking the jury to consider the probability that a man murdered his ex-wife, given that he previously battered her.” But to a mathematician like Strogatz, or to the statistician I.J. Good—who first tackled this point publicly—this is the wrong question to ask.

“The real question,” Strogatz writes, is: “What’s the probability that a man murdered his ex-wife, given that he previously battered her and she was murdered?” That is the question that applied in the Simpson case, because Nicole Simpson had been murdered—and the answer to that question, rather than to the poorly-asked or outright fraudulent questions put by both sides at Simpson’s trial, turns out to be about 90 percent.

To run through the math used by Strogatz quickly (but still capture the basic points): of a sample of 100,000 battered American women, we could expect about 5 of them to be murdered by random strangers any given year, while we could also expect about 40 of them to be murdered by their batterers. So of the 45 battered women murdered each year per 100,000 battered women, about 90 percent of them are murdered by their batterers.
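For readers who want to check the arithmetic, here is a minimal sketch, in Python, of the two competing questions, using the same round figures Strogatz quotes; the variable names and the script itself are mine, not his.

```python
# A sketch of the arithmetic behind Strogatz's point, using the round figures
# quoted above; illustrative numbers, not a fresh statistical estimate.

battered = 100_000        # hypothetical sample of battered American women
killed_by_stranger = 5    # expected to be murdered in a given year by someone else
killed_by_batterer = 40   # expected to be murdered in a given year by their batterers

# Dershowitz's question: P(her batterer murders her | he battered her)
p_defense = killed_by_batterer / battered
print(f"P(murder by batterer | battery) = {p_defense:.2%}")   # 0.04%, i.e. about 1 in 2,500

# Good's and Strogatz's question:
# P(her batterer did it | he battered her AND she was murdered)
p_real = killed_by_batterer / (killed_by_batterer + killed_by_stranger)
print(f"P(batterer did it | battery and murder) = {p_real:.0%}")  # about 89%, roughly 90 percent
```

The only thing that changes between the two calculations is the denominator: conditioning on the fact of the murder shrinks the relevant population from every battered woman to the forty-five who are killed, which is exactly the point the prosecution never made.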

In a very real sense then, the prosecution lost its case against O.J. because it did not present its probabilistic evidence correctly. Interviewed years later for the PBS program, Frontline, Robert Ball, a lawyer for one of the jurors on the Simpson case, Brenda Moran, said that according to his client, the jury thought that for the prosecution “to place so much stock in the notion that because [O.J.] engaged in domestic violence that he must have killed her, created such a chasm in the logic [that] it cast doubt on the credibility of their case.” Or as one of the prosecutors, William Hodgman, said after the trial, the jury “didn’t understand why the prosecution spent all that time proving up the history of domestic violence,” because they “felt it had nothing to do with the murder case.” In that sense, Hodgman admitted, the prosecution failed because they failed to close the loop in the jury’s understanding—they didn’t make the point that Strogatz, and Good before him, say is crucial to understanding the probabilities here: the fact that Nicole Brown Simpson had been murdered.

I don’t know, of course, what role distrust of scientific or rational thought played in the jury’s ultimate decision—certainly, as has been discovered in recent years, crime laboratories have often been accused of “massaging” the evidence, particularly when it comes to African-American defendants. As Spencer Hsu reported in the Washington Post, for instance, just in April of this year the “Justice Department and FBI … formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence.” Yet, while it’s obviously true that bad scientific thought—i.e., “thought” that isn’t scientific at all—ought to be quashed, it’s also true, I think, that the pattern of distrust of that kind of thinking is not limited to jurors in Los Angeles County, as I discovered this weekend at the Medinah Classic.

The Classic is a member-guest tournament; member-guests are golf tournaments consisting of two-man teams made up of a country club member and his guest. They are held by country clubs around the world and played according to differing formats, but they usually depend upon each golfer’s handicap index: the number assigned by the United States Golf Association’s computer system after the golfer pays a fee and inputs his scores. (It’s similar to the way that carrying weights allows horses of different sizes to race each other, or the way weight classes make boxing or wrestling matches fair.) Medinah’s member-guest is, nationally, one of the biggest because of the number of participants: around 300 golfers every year, divided into three flights according to handicap index (i.e., ability). Since Medinah has three golf courses, it can easily accommodate so many players—but what it can’t do, as the golfers I caddied for discovered, is adequately police the tournament’s entrants.

Our tournament began with the member shooting an amazing 30, after handicap adjustment, on the front nine of Medinah’s Course Three, the site of three U.S. Opens, two PGA Championships, numerous Western Opens (back when they were called Western Opens) and a Ryder Cup. A score of 30 for nine holes, on any golf course, is pretty strong—but how much more so on a brute like that course, and how much more so again in the worst of the Classic’s three flights? I said as much to the golfers I was caddying for after our opening round. They were kind of down about the day’s ending—especially the guest, who had scored an eight on our last hole of the day. Despite that, I told my guys that on the strength of the member’s opening 30, if we weren’t outright winning the thing we were top three. As it turned out, I was correct—but despite the amazing showing we had on the tournament’s first day, we would soon discover that there was no way we could catch the leading team.

In a handicapped tournament like the Classic, what matters isn’t so much what any golfer scores, but what he scores in relation to the handicap index. Thus, the member half of our member-guest team hadn’t actually shot a 30 on the front side of Medinah’s Course 3—which certainly would have been a record for an amateur tournament, and I think a record for any tournament at Medinah ever—but instead had shot a 30 considering the shots his handicap allowed. His score, to use the parlance, wasn’t gross but rather net: my golfer had shot an effective six under par according to the tournament rules.

Naturally, such an amazing score might raise questions, particularly when it’s shot in the flight reserved for the worst players. Yet my player had a ready explanation for why he was able to shoot a low number (in the mid-40s) and yet still have a legitimate handicap index: he has a legitimate handicap—a congenital deformity in one of his ankles. The deformity is not enough to prevent him from playing, but as he plays—and his pain medications wear off—he usually tires, which is to say that he can very often shoot respectable scores on the first nine holes and horrific scores on the second nine. His actual handicap, in other words, causes his golf handicap index to be slightly askew from reality.

Thus, he is like the legendary Sir Gawain, who according to Arthurian legend tripled his strength at noon but faded as the sun set—a situation that the handicap system is ill-designed to handle. Handicap indexes presume roughly the same ability at the beginning of a round as at the end, so in this Medinah member’s case his index understates his ability at the beginning of his round while wildly overstating it at the end. In a sense then it could perhaps be complained that this member benefits from the handicap system unfairly—unless you happen to consider that the man walks in nearly constant pain every day of his life. If that’s “gaming the system” it’s a hell of a way to do it: getting a literal handicap to pad your golf handicap would obviously be absurd.

Still, the very question suggests the great danger of handicapping systems, which is one reason why people have gone to the trouble of investigating ways to determine whether someone is taking advantage of the handicap system—without using telepathy or some other kind of magic to discern the golfer’s real intent. The most important of the people who have investigated the question is Dean L. Knuth—the former Senior Director of Handicapping for the United States Golf Association, a man whose nickname is the “Pope of Slope.” In that capacity Knuth developed the modern handicapping system—and a way to calculate the odds of a player of a given handicap shooting a particular score.

In this case, my information is that the team that ended up winning our flight—and won the first round—had a guest player who represented himself as possessing a handicap index of 23 when the tournament began. For those who aren’t aware, a 23 is a player who does not expect to play better than a score of ninety during a round of golf, when the usual par for most courses is 72. (In other words, a 23 isn’t a very good player.) Yet this same golfer shot a gross 79 during his second round for what would have been a net 56: a ridiculous number.
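Since the whole complaint turns on the arithmetic of a net score, a minimal sketch may help; it assumes, for simplicity, that the claimed index of 23 converts one-for-one into 23 strokes, which glosses over the course-rating and slope adjustments a real tournament committee would apply.

```python
# A simplified net-score calculation using the figures reported above.
# Real USGA handicapping converts a handicap index into a course handicap
# (via slope and course rating) and allocates strokes hole by hole; this
# sketch only shows the basic gross-minus-strokes idea.

def net_score(gross: int, strokes_allowed: int) -> int:
    """Strokes actually taken minus the strokes the handicap allows."""
    return gross - strokes_allowed

par = 72
gross = 79     # the guest's second-round score
claimed = 23   # the handicap index he represented at the start of the tournament

net = net_score(gross, claimed)
print(f"net {net}, or {par - net} under par net")   # net 56: sixteen under par, net
```

A 23, in other words, is someone the system expects to need every one of those strokes just to get near par on his net card; a net 56 says either the round of a lifetime or a padded index.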

Knuth’s calculations reflect that: they put the odds of someone shooting a score so far below his handicap on the order of several tens of thousands to one, especially in tournament conditions. In other words, while my player’s handicap wasn’t a straightforward depiction of his real ability, it did adequately capture his total worth as a golfer. This other player’s handicap, though, appeared to many, including one of the assistant professionals who went out to watch him play, to be highly suspect.

That assistant professional, a five handicap himself, said that after watching this guest play he would hesitate to play him straight up, much less give the fellow ten or more shots: the man not only hit his shots crisply, but also hit shots that even professionals fear, like trying to stop a ball on a downslope. For the gentleman to claim to be a 23 handicap seemed, to this assistant professional, incredibly, monumentally improbable. Observation, then, seems to confirm what Dean Knuth’s probability tables suggest: the man was playing with an improper handicap.

What happened as the tournament went along also appears to indicate that at least Medinah’s head professional was aware that the man’s reported handicap index wasn’t legitimate: after the first round, in which that player shot a score similarly suspect to his second-round 79 (I couldn’t discover precisely what it was), his handicap was adjusted downwards, and after the second-round 79 more shots were knocked off his initial index. Yet although there was a lot of complaining on the part of fellow competitors, no one was willing to take any kind of serious action.

Presumably, this inaction was on a theory similar to the legal system’s presumption of innocence: maybe the man just really had “found his swing” or “practiced really hard” or gotten a particularly good lesson just before arriving at Medinah’s gates. But to my mind, such a presumption ignores, like the O.J. jury did, the really salient issue: in the Simpson case, that Nicole was dead; in the Classic, the fact that this team was leading the tournament. That was the crucial piece of data: it wasn’t just that this team could be leading the tournament, it was that they were leading the tournament—just in the same way that, while you couldn’t use statistics to predict whether O.J. Simpson would murder his ex-wife Nicole, you certainly can use statistics to say that O.J. probably murdered Nicole once Nicole was murdered.

The fact, in other words, that this team of golfers was winning the tournament was itself evidence that they were cheating—why would anyone cheat if they weren’t going to win as a result? That doesn’t mean, to be sure, that winning constitutes conclusive evidence of fraud—just as the probabilistic evidence doesn’t mean that O.J. must have killed Nicole—but it does indicate the need for further investigation, and suggests what presumption an investigation ought to pursue. Particularly given the size of the lead: by the end of the second day, that team was ahead by more than twenty shots over the next competitors.

Somehow, however, it seems that Americans have lost the ability to see the obvious. Perhaps that’s through the influence of films from the 1970s like Caddyshack or Star Wars: both films, interestingly, feature scenes where one of the good guys puts on a blindfold in order to “get in touch” with some cosmic quality that lies far outside the visible spectrum. (The original Caddyshack script actually cites the Star Wars scene.) But it is not necessary to blame just those films: as Thomas Frank says in his book The Conquest of Cool, one of America’s outstanding myths represents the world as a conflict between all that is “tepid, mechanical, and uniform” and the possibility of a “joyous and even a glorious cultural flowering.” In the story told by cultural products like Caddyshack, it’s by casting aside rational methods—like Luke Skywalker casting aside his targeting computer in the trench of the Death Star—that we are all going to be saved. (Or, as Rodney Dangerfield’s character puts it at the end of Caddyshack, “We’re all going to get laid!”) That, I suppose, might be true—but perhaps not for the reasons advertised.

After all, once we’ve put on the blindfold, how can we be expected to see?

A Momentary Lapse

 

The sweets we wish for turn to loathed sours
Even in the moment that we call them ours.
The Rape of Lucrece, by William Shakespeare

“I think caddies are important to performance,” wrote ESPN’s Jason Sobel late Friday night. “But Reed/DJ each put a family member on bag last year with no experience. Didn’t skip a beat.” To me, Sobel’s tweet appeared to question the value of caddies, and so I wrote to Mr. Sobel and put it to him that sure, F. Scott Fitzgerald could write before he met Maxwell Perkins—but without Perkins on Fitzgerald’s bag, no Gatsby. Still, I don’t mention the point simply to crow about what happened: how Dustin Johnson missed a putt to tie Jordan Spieth in regulation, a putt that a professional caddie arguably would have kept Johnson from hitting so quickly. What’s important about Spieth’s victory is that it might finally have killed the idea of “staying in the moment”: an un-American idea far too prevalent for the past two decades or more, not only in golf but in American life.

In any case, the idea has been around a while. “Staying in the moment,” like so much in golf, traces at least as far back as Tiger Woods’ victory at Augusta National in 1997. Sportswriters then liked to make a big deal out of Tiger’s Thai heritage: supposedly, his mother’s people, with their Buddhist religion, helped Tiger to focus. It was a thesis that to my mind was more than a little racially suspect—it seemed to me that Woods won a lot of tournaments because he hit the ball farther than anyone else at the time, an advantage matched by an amazing short game. That, however, was the story that got retailed at the time.

Back in 2000, for instance, Robert Wright of the online magazine Slate was peddling what he called the “New Age Theory of Golf.” “To be a great golfer,” Wright said, “you have to do what some Eastern religions stress—live in the present and free yourself of aspiration and anxiety.” “You can’t be angry over a previous error or worried about repeating it,” Wright went on to say. You are just supposed to “move forward”—and, you know, forget about the past. Or to put it another way, success is determined by how much you can ignore reality.

Now, some might say that it was precisely this attitude that won the U.S. Open for Team Jordan Spieth. “I always try to stay in the present,” Jordan Spieth’s caddie Michael Greller told The Des Moines Register in 2014, when Greller and Spieth returned to Iowa to defend the title the duo had won in 2013. But a close examination of their behavior on the course, by Shane Ryan of Golf Digest, questions that interpretation.

Spieth, Ryan writes, “kept up a neurotic monologue with Michael Greller all day, constantly seeking and receiving reassurance about the wind, the terrain, the distance, the break, and god knows what else.” To my mind, this hardly counts as “staying in the present” as usually understood. The usual view, I think, was better illustrated by what was going on with their opponents.

During the course of his round, Ryan reports, Johnson “rarely spoke with his brother and caddie Austin.” Johnson’s relative silence appears to me to be much closer to Wright’s passive, “New Age,” reality-ignoring ideal. Far closer, anyway, than the constant squawking going on in Spieth’s camp.

It’s a difference, I realize, that is easy to underestimate—but a crucial one nonetheless. Just how significant that difference is might best be revealed by an anecdote the writer Gary Brecher tells about the aftermath of the second Iraq War: about being in the office with a higher-ranking woman who declared her support for George Bush’s war. When Brecher said to her that perhaps the rumors of Saddam’s weapons could be exaggerated—well, let’s read Brecher’s description:

She just stared at me a second—I’ve seen this a lot from Americans who outrank me; they never argue with you, they don’t do arguments, they just wait for you to finish and then repeat what they said in the beginning—she said, “I believe there are WMDs.”

It’s a stunning description. Not only does it sum up what the Bush Administration did in the run-up to the Iraq War, but it also captures something of a fact of life in workplaces and virtually everywhere else in the United States: two Americans, especially ones of differing classes, rarely talk to one another these days. But they sure are pretty passive.

Americans, however, aren’t supposed to think of themselves as passive—at least, they didn’t use to think of themselves that way. The English writer George Orwell described the American attitude in an essay about the quintessentially American author Mark Twain: a man who “had his youth and early manhood in the golden age of America … when wealth and opportunity seemed limitless, and human beings felt free, indeed were free, as they had never been before and may not be again for centuries.” In those days, Orwell says, “at least it was NOT the case that a man’s destiny was settled from his birth,” and if “you disliked your job you simply hit the boss in the eye and moved further west.” Those older Americans did not simply accept what happened to them, the way the doctrine of “staying in the present” teaches.

If so, then perhaps Spieth and Greller, despite what they say, are bringing back an old American custom by killing an alien one. In a nation where 400 Americans are worth more than the poorest 150 million Americans, as I learned Sunday night after the Open by watching Robert Reich’s film, Inequality for All, it may not be a moment too soon.

The Curious Incident of the Silent Tournament

O Scotland! Scotland!
The Tragedy of Macbeth IV, 3

Where Scotland?
The Comedy of Errors III, 2

 

 

The “breakup of Britain must now be considered a realistic possibility,” according to James Kirkup of the Daily Telegraph, because in the United Kingdom’s 7 May general election the Scottish National Party swept all but three of Scotland’s parliamentary seats—an event that took nearly the entire British establishment by surprise. But the 7 May results were really two surprising events: as the New York Times reported, in the United Kingdom as a whole the Conservative Party won “an unexpected majority in what was supposed to be a down-to-the-wire election, proving polls and pundits wrong.” The two victories have made both Scotland and England virtually one-party states—which, perhaps paradoxically, may be a sign that the British state has taken a first step toward a republic. At least, if golf’s British Open is a guide.

“Who’s he when he’s at home?” is a British idiom, meaning, roughly, “what’s he like when he’s among friends, when nobody’s watching?” Admittedly, the idea that a golf tournament might tell you something useful about an important thing like a national election is odd at best. But scholar Benedict Anderson’s Imagined Communities: Reflections on the Origin and Spread of Nationalism shows how the claim might be justified: he argues that the “generation of the impersonal will” necessary to nations is “better sought in … diurnal regularities” than in the “rare and moveable feast” of an election. In other words, consulting official papers, census returns, election results, economic data and so forth is like visiting someone’s front parlor on Sunday: you’ll get a story, but only the most sanitized version. By looking at something like the British Open, on the other hand, it might be possible to get a sense of what Britain really thinks.

Anderson’s method, which teaches paying attention to small details, is after all rewarded by the very results of the 7 May election itself: reading the granular measurements of incomes, polling, and past results is what the official press did leading up to Election Day—just in time to receive the proverbial pie in the face. The Scottish National Party’s triumph is a classic example of an underdog’s victory—and it’s the definition of a David vs. Goliath battle that David’s win should be a surprise. Just so, when scholar Tom Nairn published The Break-up of Britain: Crisis and Neo-nationalism in 1977, few would have thought that Scottish nationalists would ever become the majority party in Scotland: at the time, Scottish electoral politics were dominated by the Labour Party, as they had been since the 1960s. Until this past election, Labour was still the top dog in Scottish politics—and then it wasn’t.

Nevertheless, the idea that the SNP’s triumph might threaten the very integrity of the United Kingdom might, to an outsider, appear to be apocalyptic hyperbole designed to sell newspapers. Scotland constitutes less than ten percent of the United Kingdom’s population; what happens there, one might argue, can hardly affect much of the rest of the country. But that assumption would be false, as a scrutiny of the British Open might show.

From Anderson’s perspective, the fact that the golf tournament is far removed from the game of electoral politics is just what makes it worth examining—in a manner also suggestive of Arthur Conan Doyle’s greatest creation. Like the dog in “The Adventure of Silver Blaze”—the dog that, famously, didn’t bark—the silence of the R & A (the organization that has run the golf tournament since 2004) is, after all, a bit curious, even on its own terms. The R & A has a vested interest in maintaining the Act of Union that binds the United Kingdom together, because the possibility of an independent Scotland presents, at minimum, a practical problem.

The group’s headquarters are in St. Andrews, first of all; but more importantly, of the nine golf courses in the Open Championship’s current “rota,” five lie north of Berwick-upon-Tweed: the Old Course at St. Andrews (the “Home of Golf”), Muirfield, Royal Troon, Carnoustie, and the Ailsa Course at Turnberry, within sight of Ailsa Craig. But most of the Open’s fans live south of the Tweed; logistically, if for no other reason, an independent Scotland would be a great complication for the R & A.

The R & A’s silence, then, is suggestive—at the very least, it reveals something about how psychologically difficult it might be to think about an independent Scotland. For example, consider both the name of the tournament—the “Open Championship”—and how the winner of each year’s tournament is introduced following victory: the “champion golfer of the year.” Despite the name of the tournament in America—the “British Open”—neither of these makes any reference to Great Britain as a nation; the organizers of the golf tournament might thus appear to be philosophically opposed to nationalism.

In that view, nationalism is “the pathology of modern developmental history, as inescapable as ‘neurosis’ in the individual,” as Tom Nairn puts it. It’s the view that reads nationalism as a slap in the face to the Enlightenment, which proclaims, as British academic Terry Eagleton says, “the abstract universal right of all to be free” regardless of the claims of nationality or other conceptual divisions of identity like class or race or gender. Hence, the name of the tournament and the title of the R & A’s champion could be read as a sign that the R & A heroically refuses nationalism in the name of universal humanity.

Yet Anderson gives us reason to doubt that sanguine view. The old “Union of Soviet Socialist Republics,” Anderson remarks, billed itself as “the precursor” of an “internationalist order” precisely because it refused to acknowledge nationality in its name—a refusal it shares with Britain’s current name. But where the Soviet Union’s name was meant to point toward a post-nationalist future of universal humanity, the name of the “United Kingdom of Great Britain and Northern Ireland” is the name of a “prenational dynastic state.” Where the name of the Soviet Union bids toward a future beyond the nation-state, the name of the United Kingdom hearkens back to a time before it.

The name, in other words, reflects the fact that Great Britain is ruled by an anachronistic form of government: a kingdom, a style of government now virtually unique in the contemporary world. Whereas, as Anderson says, in “the modern conception, state sovereignty is fully, flatly, and evenly operative over each square centimetre of a legally demarcated territory,” a kingdom “revolves around a high centre”: the monarch, who may gain or lose territories as war and marriage permit.

A kingdom’s borders are thus “open” to new territory in a way that a republic’s are not: Henry V, of Shakespeare’s famous play, ruled nearly as far east as Paris, and on a historical timescale it wasn’t that long ago that a resident of Calais was as much an “Englishman” as any Londoner. In those days, as Anderson says, “borders were porous and indistinct.” The “openness” of the Open may therefore reflect not a pious refusal of nationalism so much as a studied ignorance of nationalism’s terms—which is to say, it may reflect how most Englishmen (and, presumably, Englishwomen) think about their country. The apparent universality of the name of the Open Championship may thus owe more to the atavistic qualities of the United Kingdom than to a utopian vision of the future.

For the R & A to take a position regarding Scottish secession would require revisiting the name of the tournament, which would require rethinking the assumptions behind the name—and doing that would lead to a confrontation with the monarchy, because as Anderson demonstrates, the question of Scotland is necessarily a question of the monarchy. That is why, for example, he says that “[Tom] Nairn is certainly correct in describing the 1707 Act of Union between England and Scotland as a ‘patrician bargain.’” What Anderson means is that it was the “conception of a United Kingdom [that] was surely the crucial mediating element that made the deal possible”—in other words, only in a world where lands and peoples are no more than pieces on a chessboard can such deals be struck.

One has only to imagine Paris today selling Normandy to London to see how uniting England and Scotland would be “impossible,” as Nairn puts it, once “the age of democratic nationalism had arrived.” Many witnesses at the time testified to the Act of Union’s unpopularity with the Scottish people: one negotiator on the Scottish side—a pro-Union man to boot—wrote that he thought the Act was “contrary to the inclinations of at least three-fourths of the Kingdom.” Only under a monarchy could such a deal have been possible—again, another way to put the matter is to imagine the United States selling Louisiana back to France, or California back to Mexico.

It isn’t any wonder, then, that the R & A refuses to bark; or, to put it better, avoids discussing the matter. To discuss Scottish independence is to discuss how Scotland lost its independence, and to discuss that is necessarily to discuss the monarchy. To bring up one subject is to bring up, sooner or later, the other. Reversing the polarity, however, solves the problem of the “double event” of the 7 May general election: if Scottish nationalism threatens the monarchy by threatening the premises it relies upon, then why England simultaneously elected the most pro-aristocracy party isn’t much of a mystery—as Holmes remarks about coincidence in Sherlock, the most recent television adaptation of his adventures, the “universe is rarely so lazy.”

Instruments of Darkness

 

And oftentimes, to win us to our harm,
The instruments of darkness tell us truths …
—William Shakespeare
    The Tragedy of Macbeth
Act I, scene 3 132-3 (1606) 

 

This year’s Masters demonstrated, once again, the truism that nobody watches golf without Tiger Woods: last year’s Masters, played without Tiger, had the lowest ratings since 1957, while the ratings for this year’s Saturday round (featuring a charging Woods) were up nearly half again as much. So much is unsurprising; what was surprising, perhaps, was the reappearance of a journalistic fixture from the days of Tiger’s past: the “pre-Masters Tiger hype story.” It’s a recurrence that suggests Tiger may be taking cues from another ratings monster: the television series Game of Thrones. But if so—with a nod to Ramsay Snow’s famous line in the show—it suggests that Tiger himself doesn’t think his tale will have a happy ending.

The prototype of the “pre-Masters” story was produced in 1997, the year of Tiger’s first Masters win: before that “win for the ages,” it was widely reported how the young phenom had shot a 59 during a practice round at Isleworth Country Club. At the time the story seemed innocuous, but in retrospect there are reasons to interrogate it more deeply—not to say it didn’t happen, exactly, but to question whether it was released as part of a larger design. After all, Tiger’s father Earl—still alive then—would have known just what to do with the story.

Earl, as all golf fans know, created and disseminated the myth of the invincible Tiger to anyone who would listen in the late 1990s: “Tiger will do more than any other man in history to change the course of humanity,” Gary Smith quoted him saying in the Sports Illustrated story (“The Chosen One”) that, more than any other, sold the Gospel of Woods. There is plenty of reason to suspect that the senior Woods deliberately created this myth as part of a larger campaign: because Earl, as a former member of the U.S. Army’s Green Berets, knew the importance of psychological warfare.

“As a Green Beret,” writes John Lamothe in an academic essay on both Woods men, elder and junior, Earl “would have known the effect … psychological warfare could have on both the soldier and the enemy.” As Tiger himself said in a 1996 interview for Orange Coast magazine—before the golfer put up a barrier between himself and the press—“Green Berets know a lot about psychological torture and things like that.” Earl for his part remarked that, while raising Tiger, he “pulled every dirty, nasty trick I could remember from psychological warfare I learned as a Green Beret.” Both men described this training as a matter of rattling keys or ripping Velcro at inopportune moments—but it’s difficult not to wonder whether it went deeper.

At the moment of their origin in 1952, after all, the Green Berets, or Special Forces, were a subsection of the Psychological Warfare Staff at the Pentagon: psychological warfare, in other words, was part of their founding mission. And as Lamothe observes, part of the goal of psychological warfare is to create “confidence” in your allies “and doubt in the competitors.” As early as 2000, the sports columnist Thomas Boswell was describing how Tiger “tries to imprint on the mind of every opponent that resistance is useless,” a tactic that Boswell claimed the “military calls … ‘overwhelming force’”—and a tactic that is far older than the game of golf. Consider, for instance, a story from golf’s homeland of Scotland: the tale of the “Douglas Larder.”

It happened at a time of year not unfamiliar to viewers of the Masters: Palm Sunday, in April of 1308. The story goes that Sir James Douglas—an ally of Robert the Bruce, who was in rebellion against the English king Edward I—returned that day to his family’s home, Douglas Castle, which had been seized by the English. Taking advantage of the holiday, Douglas and his men—essentially, a band of guerrillas—slaughtered the English garrison within the church they worshipped in, then beheaded them, ate the Easter feast the Englishmen had no more use for, and subsequently poisoned the castle’s wells and destroyed its supplies (the “Larder” part of the story’s title). Lastly, Douglas set the English soldiers’ bodies afire.

To viewers of the television series Game of Thrones, or readers of the series of books it is based upon (A Song of Ice and Fire), the story might sound vaguely familiar: the “Douglas Larder” is, as popular historian William Rosen has pointed out, one source of the event known from the television series as the “Red Wedding.” Although the television event also borrows from the medieval Scottish “Black Dinner” (which is perhaps closer in terms of setting) and the later incident known as the Massacre at Glencoe, the “Red Wedding” still reproduces the most salient details of the “Douglas Larder.” In both, the attackers take advantage of their prey’s reliance on piety; in both, the bodies of the dead are mutilated in order to increase the monstrous effect.

To a modern reader, such a story is simply a record of barbarism—a reading that forgets that medieval people, though far less educated, were just as intelligent as nearly anyone alive today. Douglas’ actions were not meant for horror’s sake, but to send a message: the raid on the castle “was meant to leave a lasting impression … not least upon the men who came to replace their dead colleagues.” Acts like the attack on his own castle demonstrate how the “Black Douglas”—“mair fell than wes ony devill in hell,” according to a contemporary account—was “an early practitioner of psychological warfare”: he knew how “fear alone could do much of the work of a successful commander.” It seems hardly credible to think Earl Woods—a man who’d seen combat in the guerrilla war of Vietnam—did not know the same lesson. Nor is it credible to think that Earl didn’t tell Tiger about it.

Certainly, Tiger himself has been a kind of Douglas: he won his first Masters by 12 shots, and in the annus mirabilis of 2000 he won the U.S. Open at Pebble Beach by 15. Displays like that, many have thought, functioned similarly to Douglas’ attacks, if less macabrely. The effect has even been documented academically: in 2008’s “Dominance, Intimidation, and ‘Choking’ on the PGA Tour,” professors Robert Connolly and Richard Rendleman found that being paired with Tiger cost other tour pros nearly half a shot per round from 1998 to 2001. The “intimidation factor,” that is, has been quantified—so it seems jejune at best to think that somebody connected to Tiger, even if he had not been aware of the effect in the past, would not have called his attention to the research.

Releasing a story prior to the Masters, then, can easily be seen as part of an attempt to revive Tiger’s heyday. But what’s interesting about this particular story is its difference from the 1997 version: then, Tiger just threw out a raw score; now, the number comes dressed in a peculiarly complicated costume. As retailed by Golf Digest’s Tim Rosaforte on the Tuesday before the tournament, the story goes like this: Tiger had “recently shot a worst-ball 66 at his home course, Medalist Golf Club.” In Golf Digest, Alex Meyers in turn explained that “a worst-ball 66 … is not to be confused with a best-ball 66 or even a normal 66 for that matter,” because what “worst-ball” means is that “Woods played two balls on each hole, but only played the worst shot each time.” Why not just say, as in 1997, that Tiger shot some ridiculously low number?

The answer, I think, can be understood by way of the “Red Wedding”: just as George R.R. Martin, in order to write the A Song of Ice and Fire books, has revisited and revised many episodes of medieval history, so too is Tiger attempting to revisit his own past—a conclusion that would be glib were it not for the very makeup of this year’s version of the pre-Masters story itself. After all, to play a “worst-ball” is to time-travel: it is, in effect, to revise—or rewrite—the past. Not only that, but—and in this it is very much like both Scottish history and Game of Thrones—it is also to guarantee a “downer ending.” Maybe Tiger, then, is suggesting to his fans that they ought to pay more attention.