Buck Dancer’s Choice

Buck Dancer’s Choice: “a tune that goes back to Saturday-night dances, when the Buck, or male partner, got to choose who his partner would be.”
—Taj Mahal, Oooh So Good ‘n’ Blues (1973).

 

“Goddamn it,” Scott said, as I was driving down the Kennedy Expressway towards Medinah Country Club. Scott is another caddie I sometimes give rides to; he’s living in the suburbs now and has to take the train into the city every morning to get his methadone pill, after which I pick him up and take him to work. On this morning, Scott was distracting himself, as he often does, from the traffic outside by playing, on his phone, the card game known as spades—a game in which, somewhat like contract bridge, two players team up against an opposing partnership. He had been matched with a bad partner—a player who, it came to light later, had declined to take an opponent’s ten of spades with the king he had in his possession, and instead had played a three of spades. (In so doing, Scott’s incompetent partner negated the value of his king while receiving nothing in return.) Since, I agree, that sounds relentlessly boring, I wouldn’t have paid much attention to the whole complaint—until I realized that Scott’s grumble about his partner not only described the chief event of the previous night’s baseball game, but also explained why so many potential Democratic voters will likely sit out this election. After all, arguably the best Democratic candidate for the presidency this year will not be on the ballot in November.

What had happened the previous night was described on ESPN’s website as “one of the worst managerial decisions in postseason history”: in a one-game, extra-innings playoff between the Baltimore Orioles and the Toronto Blue Jays, Orioles manager Buck Showalter used six relief pitchers after starter Chris Tillman got pulled in the fifth inning. But he did not order his best reliever, Zach Britton, into the game at all. During the regular season, Britton had been one of the best relief pitchers in baseball; as ESPN observed, Britton had allowed precisely one earned run since April, and as Jonah Keri wrote for CBS Sports, over the course of the year Britton posted an Earned Run Average (0.53) that was “the lowest by any pitcher in major league history with that many innings [67] pitched.” (And as Deadspin’s Barry Petchesky remarked the next day, Britton had “the best ground ball rate in baseball”—which, given that the Orioles ultimately lost on a huge, moon-shot walk-off home run by Edwin Encarnacion, seems especially pertinent.) Despite the fact that the game went 11 innings, Showalter did not put Britton on the mound even once—which is to say that the Orioles ended their season with one of their best weapons sitting on the bench.

Showalter had the king of spades in his hand—but neglected to play him when it mattered. He defended himself later by saying, essentially, that he is the manager of the Baltimore Orioles, and that everyone else was lost in hypotheticals. “That’s the way it went,” the veteran manager said in the post-game press conference—as if the “way it went” had nothing to do with Showalter’s own choices. Some journalists speculated, in turn, that Showalter’s choices were motivated by what Deadspin called “the long-held, slightly-less-long-derided philosophy that teams shouldn’t use their closers in tied road games, because if they’re going to win, they’re going to need to protect a lead anyway.” On this view, Showalter could not have known how long the game would last, and could only know that, until his team scored some runs, the game would continue. If so, then it might be possible to lose by playing your ace of spades too early.

Yet, not only did Showalter deny that any such philosophy was a factor in his thinking—“It [had] nothing to do with ‘philosophical,’” he said afterwards—but such a view takes things precisely backward: it’s the position that imagines the Orioles scoring some runs first that’s lost in hypothetical thinking. Indisputably, the Orioles needed to shut down the Jays in order to continue the game; the non-hypothetical problem presented to the Orioles manager was that the O’s needed outs. Showalter had the best instrument available to him to make those outs … but didn’t use him. And that is to say that it was Showalter who got lost in his imagination, not the critics. By not using his best pitcher, Showalter was effectively reacting to an imagined, hypothetical scenario instead of responding to the actual facts playing out before him.

What Showalter was flouting, in other words, was the manner of thinking that is arguably responsible for what successes there are in the present world: probability, the first principle of which is known as the Law of Large Numbers. First conceived by the Italian Gerolamo Cardano during the sixteenth century, and later formalized and publicized by the Swiss mathematician Jacob Bernoulli, the Law of Large Numbers holds that, as Bernoulli put it in his Ars Conjectandi of 1713, “the more observations … are taken into account, the less is the danger of straying.” Or: the more observations, the less the danger of reaching wrong conclusions. What Bernoulli is saying, in other words, is that in order to demonstrate the truth of something, the investigator should look at as many instances as possible: a rule that is, largely, the basis for science itself.
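Bernoulli’s claim is easy to check for yourself by simulation. The Python sketch below is only a toy illustration of the Law of Large Numbers, not anything drawn from Cardano or Bernoulli: it flips a simulated fair coin (the coin, its fairness, and the sample sizes are all my own assumptions) and watches the observed frequency of heads settle toward the true probability as the number of observations grows.

```python
import random

def observed_frequency(num_flips, p_heads=0.5):
    """Flip a simulated coin num_flips times and return the fraction of heads."""
    heads = sum(1 for _ in range(num_flips) if random.random() < p_heads)
    return heads / num_flips

# The more observations, the less the danger of straying from the true value (0.5).
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7} flips: observed frequency of heads = {observed_frequency(n):.4f}")
```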

What the Law of Large Numbers says then is that, in order to determine a course of action, it should first be asked, “what is more likely to happen, over the long run?” In the case of the one-game playoff, for instance, it’s arguable that Britton, who has one of the best statistical records in baseball, would have been less likely to give up the Encarnacion home run than the pitcher who did (Ubaldo Jimenez, 2016 ERA 5.44) was. Although Jimenez, for example, was not a bad ground ball pitcher in 2015—he had a 1.85 ground ball to fly ball ratio that season, putting him 27th out of 78 pitchers, according to SportingCharts.com—his ratio was dwarfed by Britton’s: as J.J. Cooper observed just this past month for Baseball America, Britton is “quite simply the greatest ground ball pitcher we’ve seen in the modern, stat-heavy era.” (Britton faced 254 batters in 2016; only nine of them got an extra-base hit.) Who would you rather have on the mound in a situation where a home run (which is obviously a fly ball) can end not only the game, but the season?

What Bernoulli’s (and Cardano’s) Law of Large Numbers does is define what we mean by the concept “the odds”: that is, the outcome that is most likely to happen. Bucking the odds is, in short, precisely the crime Buck Showalter committed during the game with the Blue Jays: as Deadspin’s Petchesky wrote, “the concept that you maximize value and win expectancy by using your best pitcher in the highest-leverage situations is not ‘wisdom’—it is fact.” As Petchesky goes on to say, “the odds are the odds”—and Showalter, by putting all those other pitchers on the mound instead of Britton, ignored those odds.

As it happens, “bucking the odds” is just what the Democratic Party may be doing by adopting Hillary Clinton as its nominee instead of Bernie Sanders. As a number of articles this past spring noted, at that time many polls were saying that Sanders had better odds of beating Donald Trump than Clinton did. In May, Linda Qiu and Louis Jacobson noted in The Daily Beast that Sanders was making the argument that “he’s a better nominee for November because he polls better than Clinton in head-to-head matches against” Trump. (“Right now,” Sanders said then on the television show Meet the Press, “in every major poll … we are defeating Trump, often by big numbers, and always at a larger margin than Secretary Clinton is.”) At the time, the evidence suggested Sanders was right: “Out of eight polls,” Qiu and Jacobson wrote, “Sanders beat Trump eight times, and Clinton beat Trump seven out of eight times,” and “in each case, Sanders’s lead against Trump was larger.” (In fact, usually by double digits.) But, as everyone now knows, that argument did not help to secure the nomination for Sanders: in July, Clinton became the Democratic nominee.

To some, that ought to be the end of the story: Sanders tried, and (as Showalter said after his game), “it didn’t work out.” Many—including Sanders himself—have urged fellow Democrats to put the past behind them and work towards Clinton’s election. Yet, that’s an odd position to take regarding a campaign that, above everything, was about the importance of principle over personality. Sanders’ campaign was, if anything, about the same point enunciated by William Jennings Bryan at the 1896 Democratic National Convention, in the famous “Cross of Gold” speech: the notion that the “Democratic idea … has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.” Bryan’s idea, as ought to be clear, has certain links to Bernoulli’s Law of Large Numbers—among them, the notion that it’s what happens most often (or to the most people) that matters.

That’s why, after all, Bryan insisted that the Democratic Party “cannot serve plutocracy and at the same time defend the rights of the masses.” Similarly—as Michael Kazin of Georgetown University described the point in May for The Daily Beast—Sanders’ campaign fought for a party “that would benefit working families.” (A point that suggests, it might be noted, that the election of Sanders’ opponent, Clinton, would benefit others.) Over the course of the twentieth century, in other words, the Democratic Party stood for the majority against the depredations of the minority—or, to put it another way, for the principle that you play the odds, not hunches.

“No past candidate comes close to Clinton,” wrote FiveThirtyEight’s Harry Enten last May, “in terms of engendering strong dislike a little more than six months before the election.” It’s a reality that suggests, in the first place, that the Democratic Party is hardly attempting to maximize its win expectancy. But beyond those pragmatic concerns about her electability, Clinton’s candidacy represents—from the particulars of her policy positions to her statements to Wall Street financial types to the electoral irregularities in Iowa and elsewhere—a repudiation, not simply of Bernie Sanders the person, but of the very idea of the importance of the majority that the Democratic Party once proposed and defended. What that means is that, even were Hillary Clinton to be elected in November, the Democratic Party—and those it supposedly represents—will have lost the election.

But then, you probably don’t need any statistics to know that.


Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
Julius Caesar. Act I, Scene 2.

 

One of Ryan’s steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his “lawyer-caddie.” Ostensibly, it’s meant as a kind of joke, although it’s not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—while Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments, Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are “jobs … hard to come by in recent years” for would-be lawyers, but the jobs that do exist come in two flavors—either “something that pays in the modest five figures” (which implies that Ryan might never get out of debt), “or something that pays much better” (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has, in other words, bifurcated: something that, according to a 2010 article called “Talent Grab” by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, “salaries paid to high-level professionals—‘talent’—started to rise.” There are at least two possible explanations for that rise: Gladwell’s is that “members of the professional class” have learned “from members of the working class”—that, in other words, “Talent” has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) might be favored by campus “leftists”: to them, widening inequality might be explained by the same thing that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but in reality, Lord Cornwallis’ forbearance really is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called “Mau Mau Uprising”—also known as the “Kenya Emergency”—of the 1950s: although much of the documentation only came out recently, after a long legal battle—which is the only reason we know about it in the detail we now do—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II, many Kenyans, like the inhabitants of a lot of other European colonies, demanded independence, and like a lot of other European powers, Britain would not give it to them. (A response with which Americans ought to be familiar from our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet unlike the American experience, which largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally-organized troops against each other, what makes the Kenyan episode relevant is that it was fought using the doctrines of counterinsurgency: that is, the “best practices” for the purpose of ending an armed independence movement. In Kenya, this meant “slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes,” as Mark Curtis reported in 2003’s Web of Deceit: Britain’s Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called “enclosed villages.” Those gathered were then “questioned” (i.e., tortured) in order to find those directly involved in the independence movement, and so forth. It’s a catalogue of horror, but what’s more horrifying is that the same methods were being used, at precisely the same moment, half a world away, by more or less the same people: during the years of the “Kenya Emergency,” the British Empire was also fighting what’s called the “Malayan Emergency.”

In Malaya, from 1948 to 1960 the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model for counterinsurgency that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also involved in counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it’s important to remember that the very term “concentration camp” was first widely used in English during the Second Boer War of 1899-1902. “Best practice” in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It’s perhaps no coincidence, in other words, that, for millennia, elite education in the Western world began with Julius Caesar’s Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: “Gaul is divided into three parts …”), the Gallic Wars could also be described as a kind of “how to” manual for “pacification” campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, “urged them to take up arms in order to win liberty for all.” In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that “if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them.” The book also describes torture at several points (though, to be sure, it is usually described as the work of the Gauls, not the Romans). Hostage-taking and torture were, in other words, common stuff in elite European education—the British Army did not suddenly create these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of “counterinsurgency,” why didn’t the British Army use them during the “American Emergency” of the 1770s?

According to Pando Daily columnist “Gary Brecher” (a pseudonym for John Dolan), perhaps the “British took it very, very easy on us” during the Revolution because Americans “were white, English-speaking Protestants like them.” In fact, that leniency may have been the reason the British lost the war—at least according to a paper Lieutenant Colonel Paul Montanus (U.S.M.C.) wrote for the U.S. Army War College, “A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781.” To Montanus, the British Army “needed to execute a textbook pacification program”—instead, the actions that army took “actually inflamed the [populace] and pushed them toward the rebel cause.” Montanus, in other words, essentially asks the question: why didn’t the Royal Navy sail up the Potomac and grab Martha Washington? Brecher’s point is pretty valid: there simply aren’t many ways to explain why Lord Cornwallis and the other British commanders didn’t do that, other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet, it might be pointed out that just what the British officers saw is still an open question: did they see “cultural Englishmen”—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence simply as a result of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation he describes in the professions in American life these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that “‘If we can get rid of the system as we now know it, then Bobby Bonds’ son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’” (Even then, when Barry Bonds was around ten years old, people knew he was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds’ earning power: Bonds “ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined” (emp. added). Barry Bonds’ success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-85 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune’s Paul Sullivan. And so on: it doesn’t take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited just to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company”—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, they could return a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book’s advance. Safire’s attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow’s attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the “acceptability clause” of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—“‘was being fraudulently exercised,’” because the reason Morrow wanted to reject Safire’s book wasn’t the one Morrow claimed (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow’s calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have since become (supposedly) far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they negotiate with publishing houses more or less as equals, rather than, as before, effectively as servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the “Talent” (Gladwell’s word) eventually triumphs over “Capital.”

As I mentioned, for a variety of reasons—in the first place, the justification for the study of “culture,” which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, “the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests”—a lot of academic leftists these days would explain that triumph by the fact that, say, George Lucas and the head of Twentieth-Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that “Talent” has been rewarded by “Capital” because of a shared “culture” between the two (apparent) antagonists—just as Britain treated its American subjects differently than its Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malayans and the Boers …). (Which was either “culture”—or money.) But there’s a problem with this analysis: it doesn’t particularly explain Ryan’s situation. After all, if this hypothesis were correct, it would appear to imply that—since Ryan shares a great deal “culturally” with the power elite that employs him on the golf course—Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a “cultural” explanation for why some people in a variety of professions have become compensated far beyond even their fellows within the profession. Instead, he prefers to explain what happened beginning in the 1970s as instances of people learning how to use a tool first widely used by organized labor: the strike.

It’s an explanation that has an initial plausibility about it, in the first place, because of Marvin Miller’s personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players’ union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the “talents” Gladwell writes about made use of either a kind of one-person strike, or the threat of one, to get their way: Lauren Hutton, for example, “decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract”; in 1975 “Hollywood agent Tom Pollock” demanded “that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars”; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire’s book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor’s) was, for instance, telling the publishers that he would withhold the product of his client’s labor until his demands were met, just as the United Auto Workers shut down General Motors’ Flint, Michigan assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it’s arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie, Rebel Without a Cause, there’s a scene in which Dean’s character gets involved in what the kids in his town call a “chickie run”—what some Americans know as the game of “Chicken.” In the variant played in the movie, two players each drive a car towards the edge of a cliff—the “winner” of the game is the one who exits his car closest to the edge, thus demonstrating his “courage.” (The other player is, hence, the “chicken,” or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without A Cause thusly describes what happened when Moscow and Washington, D.C. faced each other in October 1962, in the incident later called the Cuban Missile Crisis. (“We’re eyeball to eyeball,” then-U.S. Secretary of State Dean Rusk said later about those events, “and I think the other fellow just blinked.”) The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to the Star Wars sequels to Lucas, or that Revlon gave when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played “Chicken”—and won.

To those committed to a “cultural” explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared “culture” is simply question begging: after all, there have been plenty of people, and unions, that have played games of “Chicken”—and lost. So by itself the game of “Chicken,” it might be argued, explains nothing about what led employers to give way. Yet, at two points, the “cultural” explanation also is lacking: in the first place, it doesn’t explain how “rebel” figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries. If it were a matter of “culture,” in other words, it’s hard to see how the same technique could work no matter what the underlying business was—or, if “culture” is the explanation, it’s difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, “culture” is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn’t why a number of people across industries revolted against their masters—just as it likely doesn’t especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually “Capital” gave in to “Talent”: “when Miller pushed back, the owners capitulated,” Gladwell says—so quickly, in fact, that even Miller was surprised. In all of these industries, “Capital” gave in so easily that it’s hard to understand why there was any dispute in the first place.

That’s precisely why the ease of that victory is grounds for being suspicious: surely, if “Capital” really felt threatened by this so-called “talent revolution” they would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II: when Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if the labor organizer didn’t “want to be carried out in a wooden box.” Surely, although industries like sports or publishing are probably governed by less hard-eyed people than automakers, neither are they so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, nor even the fact that they shared a common language. On the other hand, however, neither does it seem likely that anyone might concede after a minor threat or two. Still, I’d say that thinking about these events using Gladwell’s terms makes a great deal more sense than the “cultural” explanation—not because of the final answer they provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry to explain. This time, that industry is the same one where the “cultural” explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of “Talent” at the expense of “Capital”; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, “the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks.” The “Talent Revolution,” in short, has come to the academy too. Yet, if so, it’s had some curious consequences: if “Talent” were something mysterious, one might suspect it could come from anywhere—yet academia appears to think that it comes from the same sources.

As Joel Warner and Aaron Clauset, an assistant professor of computer science at the University of Colorado, recently wrote in Slate, “18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors.” (In fact, when it comes to history, “the top 10 schools produce three times as many future professors as those ranked 11 through 20.”) This, one might say, is curious indeed: why should “Talent” be continually discovered in the same couple of places? It’s as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

“Institutional affiliation,” as Shea puts the point, “has come to function like inherited wealth” within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, not their comrades at the lower levels of organized baseball. “Since 1976,” Patrick Redford noted in Deadspin recently, major leaguers’ “salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent.” Minor league baseball players can, Redford says, “barely earn a living while playing baseball”—it’s not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has, for instance, a piece describing his playing days in the Jayhawk League in Kansas: “our per diem,” Hines reports, “was a measly 15 dollars per day.”) And while it might be difficult to have much sympathy for minor league baseball players—They get to play baseball!—that’s exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny minor leaguers are subject to the Fair Labor Standards Act: as the author called “the Legal Blitz” wrote for Above the Law: Redline, “Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers.” In other words, because baseball is a “game” and not a business, baseball doesn’t have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale’s Peter Brooks told the New York Times when Yale’s Graduate Employees and Student Organization (GESO) went out on strike in the late 1990s) that graduate-student teachers are “among the blessed of the earth,” not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale’s administration argued “that graduate students can’t possibly be workers, since they are admitted (not hired) and receive stipends (not wages).” But if the pastoral rhetoric—a rhetoric that excludes considerations common to other pursuits, like gambling—surrounding both baseball and the academy is cut away, the position of universities is much the same as Major League Baseball’s, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries at least in one respect: as presently constituted, they’re dependent on small numbers of highly productive people—which is just why “Capital” should have tumbled so easily in the way Gladwell described in the 1970s.

Just as scholars are only very rarely productive early in their careers, in other words, so too are baseball players: as Jim Callis noted for Baseball America (as cited in the paper “Initial Public Offerings of Baseball Players,” by John D. Burger, Richard D. Grayson, and Stephen Walters), “just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star.” Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, so most academic production is done by just a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as “Price’s Law,” “Lotka’s Law,” or “Bradford’s Law.” (Or, there’s the notion described as “Sturgeon’s Law”: “90% of everything is crap.”) Hence, rationally enough, universities (and baseball teams) only want to pay for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters, when with the same money you can buy just one superstar?
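To make that concentration concrete, here is a toy sketch of a Lotka-style distribution, in which the number of producers with a given output falls off roughly as the square of that output. Every figure in it is invented for illustration (neither Lotka, Price, nor any citation study supplies these numbers); the point is only that, under such a distribution, a small fraction of producers ends up accounting for an outsized share of the total output.

```python
# Toy illustration of Lotka-style concentration: the number of producers with
# k works is proportional to 1/k^2. All figures here are invented for illustration.
MAX_WORKS = 50
SCALE = 10_000  # number of producers with exactly one work

producers = []  # one entry per producer: how many works that producer made
for k in range(1, MAX_WORKS + 1):
    count = round(SCALE / k**2)
    producers.extend([k] * count)

producers.sort(reverse=True)
total_output = sum(producers)
top_ten_percent = producers[: len(producers) // 10]

share = sum(top_ten_percent) / total_output
print(f"{len(producers)} producers, {total_output} total works")
print(f"Top 10% of producers account for {share:.0%} of the output")
```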

That might explain just why it is that William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren’t persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interest to reward the “superstars” wildly, because that bought them the most production at the cheapest rate. Why pay a ton of guys to hit all of the home runs, you might say—when, for much less, you can buy Barry Bonds? (In 2001, all major leaguers collectively hit over 5,000 home runs, for instance—but Barry Bonds hit 73 of them, in a season in which a solid everyday player might hit 20.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father’s teammates did for their entire careers): given that Bonds is so much more productive than his peers, it’s arguable that, despite his vast salary, he was actually underpaid.

If you assign a price to each home run, that is, Bonds got a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take all the home runs hit by all the major leaguers in a given season, along with all the salaries paid to them, and work out the average price per home run. Although I haven’t actually done the calculation, I would bet that the average price is more than the price per home run received by Barry Bonds—which isn’t even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top-ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there’s no way they can get 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and the (much smaller) sums they pay to the (vastly more numerous) non-superstars.
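The back-of-the-envelope arithmetic described above is easy to sketch. The salary and home run figures below are placeholders I have invented purely for illustration (they are not real payroll data for any season); the point is just the mechanics of comparing each hitter’s price per home run against the league-wide average.

```python
# Hypothetical illustration of "price per home run": divide each hitter's salary
# by his home run total, then compare against the league-wide average price.
# These salary/home-run figures are invented placeholders, not real payroll data.
hitters = {
    "Superstar A": (15_000_000, 73),  # hypothetical Bonds-like season
    "Slugger B":   (10_000_000, 30),
    "Regular C":   (4_000_000, 15),
    "Rookie D":    (500_000, 10),     # a rookie-contract bargain
}

total_salary = sum(salary for salary, hr in hitters.values())
total_hr = sum(hr for salary, hr in hitters.values())
league_average = total_salary / total_hr

print(f"League-wide average price per home run: ${league_average:,.0f}")
for name, (salary, hr) in hitters.items():
    price = salary / hr
    flag = "bargain" if price < league_average else "premium"
    print(f"{name}: ${price:,.0f} per home run ({flag})")
```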

Not only that, but wildly overpaying has a secondary benefit, as Walter Benn Michaels has observed: by paying “Talent” vastly more money, not only is “Capital” actually getting a bargain (because no matter what “Talent” gets paid, it simply can’t be paid what it is really “worth”), but “Talent’s” (seemingly vast, but in reality undervalued) salaries also enable the system to be performed as “fair”—if you aren’t getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that’s because you’re not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational “institutions ranging from U.I.C. to Harvard” like to depict themselves as “meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth.” Which, as it happens, just might explain why it is that, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the “American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here.” That reality, in turn, is something that American universities, which are supposed to pay attention to events like this, have rapidly turned their heads away from: as Michaels says, “the intellectual left has responded to the increase in economic inequality”—that is, to the supposed “Talent Revolution”—“by insisting on the importance of cultural identity.” In other words, “when it comes to class difference” (as Michaels says elsewhere), even though liberal professors “have understood our universities to be part of the solution, they are in fact part of the problem.” Hence, Ryan’s educational accomplishments (remember Ryan? There’s an essay about Ryan) aren’t actually helping him: in reality, they’re precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.

 

Par For The Course: Memorial Day, 2016

 

For you took what’s before me and what’s behind me
You took east and west when you would not mind me
Sun, moon and stars from me you have taken
And Christ likewise if I’m not mistaken.
“Dónal Óg.” Traditional.

 

None of us were sure. After two very good shots—a drive off the tee, and a three- or four-wood second—both ladies found themselves short of the green by more than forty yards. Two chips later, neither of which was close, both had made fives—scores that were either pars or bogeys. But we did not know which; that is, we didn’t know what par was on the hole, the eighth on Medinah’s Course One. That was important because, while in normal play the difference would hardly have mattered, it did matter in this case: our foursome was playing as part of a larger tournament, and the method of scoring for this tournament was what is called a “modified Stableford” format. In a “modified Stableford” format, points are assigned to each score relative to par: instead of adding up total strokes, as in stroke play, or holes won, as in match play, players receive zero points for a par, for example, but lose a point for a bogey. To know what the ladies had scored, then, it was important to know what the par was—and since Course One had only just reopened last year after a renovation, none of us knew if the par for ladies had changed with it. The tournament scorecard was no help—we needed a regular scorecard to check against, which we could only get when we returned towards the clubhouse after the ninth hole. When we did, we learned what we needed to know—and I learned just how much today’s women golfers still have in common with both French women, circa 1919, and the nation of France, today.
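For what it’s worth, here is a minimal sketch of how a modified Stableford tally turns on the par assigned to a hole. The only values the paragraph above specifies are zero points for par and minus one for bogey; the points for birdies, eagles, and double bogeys in the table below are assumptions of mine, since point tables vary from tournament to tournament.

```python
# A minimal sketch of modified Stableford scoring. The passage above specifies
# only zero points for par and minus one for bogey; the other values below are
# assumptions for illustration, since point tables vary by tournament.
POINTS = {
    -2: 4,   # eagle (assumed)
    -1: 2,   # birdie (assumed)
     0: 0,   # par
     1: -1,  # bogey
     2: -3,  # double bogey or worse (assumed)
}

def stableford_points(strokes, par):
    """Points earned on one hole, given the strokes taken and the hole's par."""
    diff = max(min(strokes - par, 2), -2)  # clamp to the table above
    return POINTS[diff]

# The same five on the eighth hole is worth zero points if the hole is a par
# five, but costs a point if the scorecard calls it a par four.
print(stableford_points(5, par=5))  # 0
print(stableford_points(5, par=4))  # -1
```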

The eighth hole on Medinah Country Club’s Course One is, for men, a very long par four, measuring 461 yards from the back tee. For the most part it is straight, though with a slight curve from left to right along its length. Along with length, the hole is also defended with a devilish green that is highly sloped from the high side on the left to a low side on the right. It is an extremely difficult hole, ranked as the fifth-hardest hole on the golf course. And though the ladies do not play from the back tees, the eighth is still nearly 400 yards for them, which even for very good women players is quite long; it is not unusual to find ladies’ par fives at that distance. Hence, we had good reason to at least wish to question whether the tournament scorecard was printed in error.

Returning to the clubhouse, we went by the first tee where all the scorecards for Course One are kept. Picking one up, I quickly scanned it and found that, indeed, the par for the eighth hole was four for the ladies, as the tournament scorecard said. At that instant, one of the assistant pros happened by, and I asked him about it: “Well,” he said, “if the par’s the same for everyone it hardly matters—par’s just a number, anyway.” In a sense, of course, he was right: par really is, in one way, completely arbitrary. A golfer scores what she scores: whether that is “par” or not really makes little difference—par is just a name, it might be said. Except that in this case the name of the thing really did matter, because it had a direct effect on the scoring for the tournament as a whole … I could feel my brain slowly sinking into a mental abyss, as I tried to work out the possible consequences of what might appear to be merely an inconsequential name change.

What I immediately realized, at least, was that making the hole a par four greatly amplified the efforts of a long-hitting woman: being able to reach that green in two gave any woman even more of a huge advantage over her fellow competitors than she already had simply by hitting the ball further. Making the hole a par four made such a woman an electric guitar against everyone else’s acoustic: she would just drown everyone out. Furthermore, that advantage would multiply the more rounds the tournament played: the interest, in other words, would compound.

It’s in that sense that, researching another topic, I became interested in the fate of Frenchwomen in the year 1919—the year after the end of the Great War, or World War I. That war, as everyone knows, virtually wiped out an entire generation of young men: Britain, for example, lost nearly a million young men in battle, while France lost nearly one and a half million. (Germany, by comparison, lost nearly two million.) Yet, although the point occasionally comes up during Veterans Day observations in America—what the Europeans call “Armistice Day” is, with good reason, a much bigger deal—or during classroom discussions of the writers of the 1920s (like Fitzgerald or Hemingway, the “Lost Generation”), the fact is treated sentimentally: we are supposed to be sad about those many, many deaths. But what we do not do is think about the long-term effect of losing so many young men (and, though fewer, women) in their youth.

We do not, that is, consider the fact that, as the writer Fraser Cameron observed in 2014, in “1919, the year after the war was over in France, there were 15 women for every man between the ages of 18 and 30.” We do not think about, as Cameron continues, “all of the lost potential, all of the writers, artists, teachers, inventors, and leaders that were killed.” Cameron neglects to consider all of the janitors that were killed also, but his larger point is solid: the fact of the Great War has had a measurable effect on France’s destiny as a nation, because all of those missing young men would have contributed to France’s total productivity, would have paid taxes, would have paid into pensions—and, perhaps above all, would have had babies who would have done the same. And those missing French (and British and German and Russian and Italian …) babies still matter—and probably will forever.

“In the past two decades,” says Malcolm Gladwell of the New Yorker, in an article from a few years ago entitled “The Risk Pool,” “Ireland has gone from being one of the most economically backward countries in Western Europe to being one of the strongest: its growth rate has been roughly double that of the rest of Europe.” Many explanations have been advanced for that growth, Gladwell says—but the most convincing explanation, he also says, may have been advanced by two Harvard economists, David Bloom and David Canning: “In 1979, restrictions on contraception that had been in place since Ireland’s founding”—itself a consequence, by the bye, of the millions of deaths on the Western Front—“were lifted, and the birth rate began to fall.” What had been an average of nearly four children per woman in the late 1960s became, by the mid-nineteen-nineties, fewer than two. And so Ireland, in those years, “was suddenly free of the enormous social cost of supporting and educating and caring for a large dependent population”—which, as it happens, coincides with the years when the Irish economy exploded. Bloom and Canning argue that this is not a coincidence.

It might then be thought, were you to take a somewhat dark view, that France in 1919 was thusly handed a kind of blessing: the French children born in 1919 would be analogous to the Irish children born after 1979, a tiny cohort easily supported by the rest of the nation. But actually, of course, the situation is rather the opposite: when the French children of 1919 came of age, there were many fewer of them to support the rest of the nation—and, as we know, Frenchmen born in 1919 were doubly the victims of fate: the year they turned twenty was the year Hitler invaded Poland. Hence, the losses first realized during the Great War were doubled down—not only was the 1919 generation many times smaller than it would have been had there been no general European war in the first decades of the twentieth century, but now there would be many fewer of its grandchildren, too. And so it went: if you are ever at a loss for something to do, there is always the exercise of thinking about all of those millions of missing French (and Italian and English and Russian …) people down through the decades, and the consequences of their loss.

That’s an exercise that, for the most part, people do not do: although nearly everyone in virtually every nation on earth memorializes their war dead on some holiday or another, it’s very difficult to think of the ramifying, compounding costs of those dead. In that sense, the dead of war are a kind of “hidden” cost, for although they are remembered on each nation’s version of Memorial Day or Armistice Day or Veterans Day, they are remembered sentimentally, emotionally. But while that is, to be sure, an important ritual to perform—because rituals are performed for the living, not the dead—it seems to me also important to remember just what it is that wars really mean: they are a kind of tax on the living and on the future, a tax that represents choices that can never be made and roads that may never be traveled. The dead are a debt that can never be repaid and whose effects become greater, rather than less, with time—a compound interest of horror that goes on working, like one of Blake’s “dark satanic mills,” through all time.

Hidden costs, of course, are all around us, all of the time; very few of us have the luxury of wondering how far a bullet fired during, say, the summer of 1916 or the winter of 1863 can really travel. For all of the bullets that ever found their mark, fired in all of the wars that were ever fought, are, and always will be, still in flight, onwards through the generations. Which, come to think of it, may have been what James Joyce meant at the end of what has been called “the finest short story in the English language”—a story entitled, simply, “The Dead.” It’s a story that, like the bullets of the Great War, still travels forward through history; it ends as the story’s hero, Gabriel Conroy, stands at the window during a winter’s night, having just heard from his wife—for the first time ever—the story of her youthful romance with a local boy, Michael Furey, long before she ever met Gabriel. At the window, he considers how Furey’s early death of tuberculosis affected his wife’s life, and thusly his own: “His soul swooned slowly as he heard the snow falling faintly through the universe and, faintly falling, like the descent of their last end, upon all the living and the dead.” As Joyce saw, all the snowflakes are still falling, all the bullets are still flying, and we will never, ever, really know what par is.

The Smell of Victory

To see what is in front of one’s nose needs a constant struggle.
George Orwell. “In Front of Your Nose”
    Tribune, 22 March 1946

 

Who says country clubs are irony-free? When I walked into Medinah Country Club’s caddie shack on the first day of the big member-guest tournament, the Medinah Classic, Caddyshack, that vicious class-based satire of country-club stupidity, was on the television. These days, far from resembling Caddyshack’s Judge Smails (a pompous blowhard), most country club members are capable of reciting the lines of the movie nearly verbatim. Not only that—they’ve internalized the central message of the film, the one indicated by the “snobs against the slobs” tagline on the movie poster: the moral that, as another 1970s cinematic feat put it, the way to proceed through life is to “trust your feelings.” Like a lot of films of the 1970s—Animal House, written by the same team, is another example—Caddyshack’s basic idea is don’t trust rationality: i.e., “the Man.” Yet, as the phenomenon of country club members who’ve memorized Caddyshack demonstrates, that signification has now become so utterly conventional that even the Man doesn’t trust the Man’s methods—which is how, just like O.J. Simpson’s jury, the contestants in this year’s Medinah Classic were prepared to ignore probabilistic evidence that somebody was getting away with murder.

That’s a pretty abrupt jump-cut in style, to be sure, particularly in regard to a sensitive subject like spousal abuse and murder. Yet, to get caught up in the (admittedly horrific) details of the Simpson case is to miss the forest for the trees—at least according to a short 2010 piece in the New York Times entitled “Chances Are,” by the Schurman Professor of Applied Mathematics at Cornell University, Steven Strogatz.

The professor begins by observing that the prosecution spent the first ten days of the months-long trial establishing that O.J. Simpson abused his wife, Nicole. From there, as Strogatz says, prosecutors like Marcia Clark and Christopher Darden introduced statistical evidence showing that abused women who are murdered are usually killed by their abusers. Thus, as Strogatz says, the “prosecution’s argument was that a pattern of spousal abuse reflected a motive to kill.” Unfortunately, however, the prosecution did not highlight a crucial point about its case: Nicole Brown Simpson was dead.

That, you might think, ought to be obvious in a murder trial, but because the prosecution did not underline the fact that Nicole was dead, the defense, led on this issue by famed trial lawyer Alan Dershowitz, could (and did) argue that “even if the allegations of domestic violence were true, they were irrelevant.” As Dershowitz would later write, the defense claimed that “‘an infinitesimal percentage—certainly fewer than 1 of 2,500—of men who slap or beat their domestic partners go on to murder them.’” Ergo, the defense suggested, even though O.J. Simpson had battered Nicole Brown Simpson, it hardly followed that this battered woman had been murdered by her batterer.

In a narrow sense, of course, Dershowitz’s claim is true: most abused women, like most women generally, are not murdered. So it is absolutely true that very, very few abusers are also murderers. But as Strogatz says, the defense’s argument was a very slippery one.

It’s true, in other words, that, as Strogatz says, “both sides were asking the jury to consider the probability that a man murdered his ex-wife, given that he previously battered her.” But to a mathematician like Strogatz, or to the statistician I.J. Good—who first tackled this point publicly—this is the wrong question to ask.

“The real question,” Strogatz writes, is: “What’s the probability that a man murdered his ex-wife, given that he previously battered her and she was murdered?” That’s the question that applied in the Simpson case: Nicole Simpson had been murdered. Had the prosecution asked that question—the real one, not the poorly posed or outright fraudulent questions put by both sides at Simpson’s trial—the answer would have turned out to be about 90 percent.

To run through Strogatz’s math quickly (while still capturing the basic point): out of a sample of 100,000 battered American women, we could expect about 5 to be murdered by random strangers in any given year, and about 40 to be murdered by their batterers. So of the 45 battered women murdered each year per 100,000, about 90 percent are murdered by their batterers.
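
For readers who like the arithmetic spelled out, here is a minimal sketch of that calculation in Python; the figures (5 and 40 per 100,000) are Strogatz’s, not mine.

    # Strogatz's back-of-the-envelope figures, per 100,000 battered women per year
    murdered_by_stranger = 5
    murdered_by_batterer = 40

    murdered_total = murdered_by_stranger + murdered_by_batterer   # 45

    # P(killed by her batterer | she was battered AND she was murdered)
    p = murdered_by_batterer / murdered_total
    print(round(p, 2))   # 0.89 -- roughly 90 percent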

In a very real sense, then, the prosecution lost its case against O.J. because it did not present its probabilistic evidence correctly. Interviewed years later for the PBS program Frontline, Robert Ball, a lawyer for one of the jurors on the Simpson case, Brenda Moran, said that according to his client the jury thought that for the prosecution “to place so much stock in the notion that because [O.J.] engaged in domestic violence that he must have killed her, created such a chasm in the logic [that] it cast doubt on the credibility of their case.” Or as one of the prosecutors, William Hodgman, said after the trial, the jury “didn’t understand why the prosecution spent all that time proving up the history of domestic violence,” because they “felt it had nothing to do with the murder case.” In that sense, Hodgman admitted, the prosecution lost because it failed to close the loop in the jury’s understanding—it never made the point that Strogatz, and Good before him, say is crucial to understanding the probabilities here: the fact that Nicole Brown Simpson had been murdered.

I don’t know, of course, what role distrust of scientific or rational thought played in the jury’s ultimate decision—certainly, as has been discovered in recent years, crime laboratories have often been accused of “massaging” the evidence, particularly when it comes to African-American defendants. As Spencer Hsu reported in the Washington Post, just this April the “Justice Department and FBI … formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence.” Yet while bad scientific thought—i.e., “thought” that isn’t scientific at all—obviously ought to be quashed, it’s also true, I think, that distrust of that kind of thinking follows a pattern not limited to jurors in Los Angeles County, as I discovered this weekend at the Medinah Classic.

The Classic is a member-guest tournament: a format, held by country clubs around the world, in which two-man teams made up of a country club member and his guest compete under varying rules, usually dependent upon each golfer’s handicap index: the number assigned by the United States Golf Association after the golfer pays a fee and enters his scores into the USGA’s computer system. (It’s similar to the way that carrying weights allows horses of different sizes to race each other, or the way weight classes keep boxing or wrestling fair.) Medinah’s member-guest is, nationally, one of the biggest because of the number of participants: around 300 golfers every year, divided into three flights according to handicap index (i.e., ability). Since Medinah has three golf courses, it can easily accommodate so many players—what it can’t do, however, is adequately police the tournament’s entrants, as the golfers I caddied for discovered.
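
For the curious, here is a rough sketch of how such an index gets computed, based on my recollection of the pre-2020 USGA method; the scores, ratings, and slopes below are invented for illustration, and the real system includes adjustments I am omitting.

    # Sketch of the (pre-2020) USGA handicap index calculation.
    # differential = (adjusted gross score - course rating) * 113 / slope rating
    def differential(score, rating, slope):
        return (score - rating) * 113 / slope

    # Twenty invented recent rounds: (score, course rating, slope rating)
    rounds = [(95, 71.5, 130), (92, 70.1, 125), (98, 72.3, 135), (94, 71.5, 130)] * 5

    diffs = sorted(differential(s, r, sl) for s, r, sl in rounds)
    index = 0.96 * sum(diffs[:10]) / 10   # best 10 of the last 20, times 0.96
    print(round(index, 1))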

Our tournament began with the member shooting an amazing 30, after handicap adjustment, on the front nine of Medinah’s Course Three, the site of three U.S. Opens, two PGA Championships, numerous Western Opens (back when they were called Western Opens), and a Ryder Cup. A score of 30 for nine holes, on any golf course, is pretty strong—but how much more so on a brute like that one, and how much more so again in the worst of the Classic’s three flights? I said as much to the golfers I was caddieing for after our opening round. They were a bit down about how the day ended—especially the guest, who had scored an eight on our last hole. Despite that, I told my guys that on the strength of the member’s opening 30, if we weren’t outright winning the thing we were top three. As it turned out, I was correct—but despite our amazing showing on the tournament’s first day, we would soon discover that there was no way we could catch the leading team.

In a handicapped tournament like the Classic, what matters isn’t so much what any golfer scores as what he scores in relation to his handicap index. Thus the member half of our team hadn’t actually shot a 30 on the front side of Medinah’s Course 3—which certainly would have been a record for an amateur tournament, and probably a record for any tournament at Medinah ever—but instead had shot a 30 once the shots his handicap allowed were counted. His score, to use the parlance, wasn’t gross but net: my golfer had shot an effective six under par under the tournament’s rules.
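
In code the arithmetic is trivial; the gross score and strokes received below are invented for illustration (the essay only says he shot in the mid-40s), and the par figure is inferred from the “six under” claim, but the principle is the one the tournament used.

    PAR_FRONT_NINE = 36          # inferred: net 30 being "six under par" implies par 36

    gross = 44                   # hypothetical gross score, somewhere "in the mid 40s"
    strokes_received = 14        # hypothetical handicap strokes allotted on that nine

    net = gross - strokes_received
    print(net, net - PAR_FRONT_NINE)   # 30, -6 -> a net six under par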

Naturally, such an amazing score might raise questions, particularly when it’s shot in the flight reserved for the worst players. Yet my player had a ready explanation for why he could shoot a low gross number (in the mid-40s) and still have a legitimate handicap: he has a legitimate handicap—a congenital deformity in one of his ankles. The deformity is not enough to prevent him from playing, but as he plays—and his pain medications wear off—he tires, which is to say that he can often shoot respectable scores on the first nine holes and horrific scores on the second nine. His actual handicap, in other words, causes his golf handicap index to be slightly askew from reality.

Thus he is like Sir Gawain, who according to Arthurian legend tripled his strength at noon but faded as the sun set—a situation the handicap system is ill-designed to handle. Handicap indexes presume roughly the same ability at the beginning of a round as at the end, so in this member’s case his index understates his ability at the start of a round while wildly overstating it at the finish. In a sense, then, one might complain that he benefits unfairly from the handicap system—unless you happen to consider that the man walks in nearly constant pain every day of his life. If that’s “gaming the system,” it’s a hell of a way to do it: acquiring a literal handicap to pad your golf handicap would obviously be absurd.

Still, the very question suggests the great danger of handicapping systems, which is one reason people have gone to the trouble of working out ways to tell whether someone is taking advantage of the system—without using telepathy or some other kind of magic to divine the golfer’s real intent. The most important of those people is Dean L. Knuth, the former Senior Director of Handicapping for the United States Golf Association, a man whose nickname is the “Pope of Slope.” In that capacity Knuth developed the modern handicapping system—and a way to calculate the odds of a player of a given handicap shooting a particular score.

In this case, my information is that the team that ended up winning our flight—and won the first round—had a guest who represented himself as carrying a handicap index of 23 when the tournament began. For those who aren’t aware, a 23 is a player who does not expect to score much better than ninety during a round of golf, when the usual par for most courses is 72. (In other words, a 23 isn’t a very good player.) Yet this same golfer shot a gross 79 during his second round, for what would have been a net 56: a ridiculous number.

Knuth’s calculations reflect that: they put the odds of someone shooting a score so far below his handicap, especially in tournament conditions, on the order of several tens of thousands to one. In other words, while my player’s handicap wasn’t a straightforward depiction of his real ability, it did adequately capture his overall worth as a golfer. This other player’s handicap, however, appeared to many, including one of the assistant professionals who went out to watch him play, to be highly suspect.
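
For what it’s worth, here is the arithmetic behind the suspicion, sketched in Python; the par and scores are the essay’s, and the comment about the odds simply restates what the essay attributes to Knuth’s tables rather than reproducing his actual figures.

    PAR = 72

    claimed_handicap = 23
    gross = 79

    net = gross - claimed_handicap      # 56, the "ridiculous number"
    strokes_below_handicap = PAR - net  # 16: he beat his own handicap by sixteen shots

    # Per the essay, Knuth's tables put the odds of a differential like this,
    # in tournament conditions, at several tens of thousands to one.
    print(net, strokes_below_handicap)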

That assistant professional, a five handicap himself, said that after watching this guest play he would hesitate to play him straight up, much less give the fellow ten or more shots: the man not only hit his shots crisply but also took on shots that even professionals fear, like trying to stop a ball on a downslope. So for the gentleman to claim to be a 23 handicap seemed, to this assistant professional, incredibly, monumentally improbable. Observation, then, seems to confirm what Dean Knuth’s probability tables would suggest: the man was playing with an improper handicap.

What happened as the tournament went along also suggests that at least Medinah’s head professional was aware the man’s reported handicap index wasn’t legitimate: after the first round, in which that player shot a score as suspect as his second-round 79 (I couldn’t discover precisely what it was), his handicap was adjusted downwards, and after the second-round 79 still more shots were knocked off his initial index. Yet although there was a lot of complaining on the part of fellow competitors, no one was willing to take any kind of serious action.

Presumably, this inaction rested on a theory similar to the legal system’s presumption of innocence: maybe the man really had “found his swing,” or “practiced really hard,” or gotten a particularly good lesson just before arriving at Medinah’s gates. But to my mind such a presumption ignores, as the O.J. jury did, the really salient fact: in the Simpson case, that Nicole was dead; in the Classic, that this team was leading the tournament. That was the crucial piece of data: it wasn’t just that this team could be leading the tournament, it was that they were leading it—just as, while you couldn’t use statistics to predict whether O.J. Simpson would murder his ex-wife Nicole, you certainly can use statistics to say that O.J. probably murdered Nicole once Nicole was murdered.

The fact, in other words, that this team of golfers was winning the tournament was itself evidence they were cheating—why would anyone cheat if not to win? That doesn’t mean, to be sure, that winning constitutes conclusive proof of fraud—just as the probabilistic evidence doesn’t mean that O.J. must have killed Nicole—but it does indicate the need for further investigation, and suggests what presumption an investigation ought to pursue. Particularly given the size of the lead: by the end of the second day, that team was ahead of its nearest competitors by more than twenty shots.

Somehow, however, it seems that Americans have lost the ability to see the obvious. Perhaps that’s through the influence of films from the 1970s like Caddyshack or Star Wars: both, interestingly, feature scenes in which one of the good guys puts on a blindfold in order to “get in touch” with some cosmic quality lying far outside the visible spectrum. (The original Caddyshack script actually cites the Star Wars scene.) But it is not necessary to blame the films themselves: as Thomas Frank says in his book The Conquest of Cool, one of America’s outstanding myths represents the world as a conflict between all that is “tepid, mechanical, and uniform” and the possibility of a “joyous and even a glorious cultural flowering.” In the story told by cultural products like Caddyshack, it’s by casting aside rational methods—like Luke Skywalker casting aside his targeting computer in the trench of the Death Star—that we are all going to be saved. (Or, as Rodney Dangerfield’s character puts it at the end of Caddyshack, “We’re all going to get laid!”) That, I suppose, might be true—but perhaps not for the reasons advertised.

After all, once we’ve put on the blindfold, how can we be expected to see?

Public Enemy #2

Why Steve Stricker Is Way More Dangerous Than Anyone Can Imagine

Words pay no debts…
—William Shakespeare. Troilus and Cressida, III.2.

Dustin Johnson won the Tournament of Champions, the first PGA Tour event of the new season (though it won’t be next season, as we shall see), by beating Steve Stricker in the final round; afterwards, Stricker announced he is going into “semi-retirement.” Some rather sour people might say that’s a season too late, given Stricker’s disappointing performance at Medinah last fall, but for others the tour loses a man widely regarded as one of the good guys: “Stricker is your nice and genuinely down-to-earth Midwesterner,” wrote Stephanie Wei of Golf.com. Stricker has been ranked as high as #2 in the World Rankings, yet nobody would ever confuse him with Tiger Woods: he’s simply not competitive in the way Tiger is. Yet it is, oddly enough, Steve Stricker who is a bigger threat to golf’s future than Tiger Woods.

Admittedly, that’s a strange claim: when Tiger’s indiscretions became public a few years ago, a lot of people thought he’d cost the sport huge numbers of fans, particularly women. Undoubtedly, that fear drove Tiger’s corporate sponsors, like Buick and the rest, to abandon their deals with him by invoking whatever “moral turpitude” clauses his contracts contained. And in some sense those predictions were right: some casual fans surely did stop watching after Woods’ troubles. But just as surely, the television ratings indicate that such an effect, if it mattered at all, hasn’t mattered much: what those numbers show is that what matters now, as it has since Tiger first turned pro, is whether Woods is playing in the tournament or not. People watch when he is, and they don’t when he isn’t.

Maybe more of them are rooting for Tiger to fail these days—there were always some before the scandal, too—but the numbers say that Tiger is, if anything, a boon to the sport. Not so Stricker: nobody, aside from maybe his family and friends, watches the PGA Tour to see how Stricker is doing unless, as at Medinah last year, they are watching him represent the United States in some team competition or other. Still, that’s not why I say Stricker represents a threat to the sport: sure, he’s pretty dull, and doesn’t emote anything like Tiger does (at least on the course), but that hardly constitutes an existential crisis. No, what makes Stricker a threat to the game isn’t his play during this century at all: it’s his play from the beginning of his career, not the end, that is the problem.

That beginning is described in John Feinstein’s sequel to A Good Walk Spoiled, the somewhat tedious tome entitled The Majors. Even there, Feinstein mentions the events in question only in passing, either not realizing or downplaying their significance. The crucial paragraph is this:

it had been a U.S. Open qualifier in 1993 that had jump-started his career. He had qualified in Chicago and finished as co-medalist to get into his first Open. He went on to make the cut at Baltusrol, which convinced him he was good enough to play with the big boys. That had led to his solid summer in Canada, which had gotten him an exemption into the Canadian Open. Totally unknown at the time, he led the Canadian for two rounds and ended up finishing fourth. Then he made it through all three stages of Q-School to get his PGA Tour card.

The story this paragraph tells is, at least on the surface, a heartwarming one: the story of a Midwestern kid made, suddenly, good. It makes for excellent copy, and it reminds us of all those other archetypal American stories; one of those stories, though, also reminds us why we ought not to shut off our critical ears when listening to them.

That story is The Great Gatsby, Midwesterner-who-made-good F. Scott Fitzgerald’s answer to the “Midwesterner-who-made-good” story. As you’ll recall from high school English, Gatsby is the story of how poor Jimmy Gatz becomes rich Jay Gatsby, and of how, no matter how much wealth he piles up, the powers-that-be never let him into the inner circle of power, which always escapes down another corridor, through another side-door. Still, that depressing narrative isn’t really why Fitzgerald’s novel matters here: the consequential point, so it seems to me, comes in a single sentence in Chapter One, before the story has properly begun.

“If personality is an unbroken series of successful gestures,” wrote Fitzgerald about Gatsby, “then there was something gorgeous about him, some heightened sensitivity to the promises of life, as if he were related to one of those intricate machines that register earthquakes ten thousand miles away.” It’s a sentence with its own beauty, to be sure: it begins with an obscure generalization before rushing down to that indelible seismograph simile. But the crucial part of the sentence, for my purposes here, is that first phrase, about the “unbroken series.”

To know why requires reference to yet another book, one I’ve referred to before: Fooled by Randomness, by one Nassim Taleb. In that book, Taleb writes of what he calls the “lucky fool”—a category that, if you aren’t one yourself, ought to be fairly self-explanatory. “It has been shown,” Taleb says (though he doesn’t cite his sources, unfortunately), “that monkeys injected with serotonin”—a neurotransmitter that appears to play a large role in our moods and dispositions—“will rise in the pecking order”—monkeys being hierarchical creatures—“which in turn causes an increase of the serotonin level in their blood—until the virtuous cycle breaks and starts a vicious one.” The monkey references aside, it’s difficult to think of a more concise description of Steve Stricker’s summer of 1993.

“‘I went from nowhere going into that Open qualifier in ’93 to being on the tour in six months,’” Feinstein reports Stricker saying. It’s a heartwarming tale, speaking to the hope that golf, and perhaps sport in general, can represent. But it also represents something darker: a threat, as I said, to golf itself. “When you have large numbers of teenagers who are successful major league pitchers, isn’t that persuasive evidence that the quality of play is not the same?” wrote the sabermetrician—baseball stat-head—Bill James about the difference between nineteenth-century and twentieth-century baseball. James’ point is that a sport whose most successful practitioners are men in their primes, rather than the extraordinarily young or other kinds of outliers, gives us a sign that the activity in question is actually a sport: a game of skill, not a game of chance.

Stricker’s run to the PGA Tour threatens the notion that golf is a sport because it suggests that golf really is what a lot of amateurs say it is: a “head game,” a game whose major determining factor is psychological. As Tom Weiskopf once said about Jack Nicklaus: “Jack knew he was going to beat you. You knew Jack was going to beat you. And Jack knew that you knew that he was going to beat you.” To some, of course, such conditions are the essence of sport: we’re accustomed to the usual athletic blather, typically spouted by football coaches, about the importance of will in sports, and all the rest of it.

The reality, though, is that a “sport” whose determining factor was the athletes’ respective “willpowers” would be ridiculous. What a combination of Taleb’s suggestion and Weiskopf’s observation about Nicklaus might create is a picture of a “sport” played by players who had happened—not through their own merit, but simply because somebody has to win every contest—to win enough, at the right times, to build the serotonin levels sufficient to defeat most others most of the time. (This is not even to speak of the way golf is structured to reward veteran players at the expense of newcomers.) Golf would be, so to speak, a kind of biochemical aristocracy: entry would be determined, essentially, by lottery, not by effort.
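
A quick way to see the worry is to simulate a tour of identically skilled players and notice that somebody always ends up looking like a proven winner anyway. A minimal sketch, with all parameters invented:

    import random

    PLAYERS, EVENTS = 150, 40            # invented field size and season length

    wins = [0] * PLAYERS
    for _ in range(EVENTS):
        # every player is identical: the winner of each event is pure chance
        wins[random.randrange(PLAYERS)] += 1

    # one player usually racks up 2 or 3 wins by chance alone -- a "hot streak"
    # that looks, from the outside, exactly like superior merit
    print(max(wins))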

There is only one way to counter an allegation like that: to allow the players to display their skill as often as possible, or in other words to make the sample sizes as large as possible. It’s that point that the PGA Tour has addressed by changing the structure of the professional game in a way that will allow Johnson’s win at the first tournament of the 2013 season to make him the defending champ at the sixth tournament of 2014—without changing any dates.

The Frys.com Open, at CordeValle on October 10th, will kick off a kind of counterpart to the FedEx Cup: instead of playing for a ten-million-dollar bonus, as the top-ranked players will be, this series will be contested by the bottom dwellers of the PGA Tour. Those low-ranked players from the big tour will play the high-ranked players from the Web.com Tour (the farm system for the big tour, formerly known as the Nationwide Tour) in a battle for access to the big paydays of the PGA Tour.

That method will replace the old Q-School, whose finals—a six-day tournament usually played somewhere like the tough PGA West Stadium course—used to give away PGA Tour cards. But for some years the Web.com Tour has been overtaking Q-School as the means of becoming a PGA Tour player: slowly but surely, the number of cards available to Q-School grads has fallen while the number for Web.com grads has risen. The reason has been to address just that potential criticism: players from the developmental tour have, presumably, had more opportunity to prove their talents, and thus their success is more likely to be due to their own merit than to a lucky draw.

The trouble, however, is that it isn’t clear that increasing those sample sizes has really done anything to reward actual talent as opposed to luck. “For four years from 2007 through 2010, 34 of 106 (32.1%) players who made it to the PGA Tour via Q-school retained their cards that year,” as Gary Van Sickle pointed out on Golf.com last March, “while 31 of 100 players (31%) who reached the PGA Tour via the Nationwide retained their cards.” In 2011 those numbers remained about the same. In other words, the difference in sample sizes—a whole season versus one week—does not appear to have much effect on determining who does or does not stick on the big tour. That is, to put it mildly, a bit troubling.
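
Van Sickle’s two retention rates are close enough that a standard two-proportion test, sketched below, cannot tell them apart; the counts are his, the test is my addition, not his.

    from math import sqrt, erf

    # Q-School grads vs. Web.com (Nationwide) grads who kept their cards, per Van Sickle
    kept_q, total_q = 34, 106      # 32.1%
    kept_w, total_w = 31, 100      # 31.0%

    p_pool = (kept_q + kept_w) / (total_q + total_w)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_q + 1 / total_w))
    z = (kept_q / total_q - kept_w / total_w) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided

    print(round(z, 2), round(p_value, 2))   # z is tiny; the difference is statistical noise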

Steve Stricker has earned roughly $35 million on the PGA Tour, the highest figure for anyone who has never won a major championship. By contrast, the career money leader on the Web.com Tour is Darron Stiles, who has won just over $1.8 million. It’s an indication of just how skewed the pay structure is between the two tours: roughly speaking, the total purse at a PGA Tour event is ten times that of a comparable event on the other tour. Yet, as mentioned, it can be difficult to distinguish between the two tours’ players on merit. If so, that could mean that the difference in pay isn’t due to what the players put out on the playing field. Huge differences in pay that can’t easily be explained are, of course, cause for concern: one reason why Steve Stricker, resident of a nation where a CEO can be compensated hundreds of times more than workers on the lowest rung of the ladder and congressmen can be elected for decades to districts made safe by gerrymandering, might be a threat to graver matters than golf.

The World Forgetting

In August was the Jackal born;
The Rains came in September;
‘Now such a fearful flood as this,’
Says he, ‘I can’t remember!’
—Rudyard Kipling.
The Second Jungle Book. 1895.


“In the beginning,” wrote Pat Ward-Thomas, whose career as golf writer for the Guardian began in 1950, “it knew no architect but nature; it came into being by evolution rather than design, and on no other course is the hand of man less evident.” He was, obviously, speaking of the Old Course at St. Andrews: the place where many say the game began and where, it seems from the hysteria overtaking certain sectors of the golf world, it is about to end. “I was horrified,” the golf architect Tom Doak—who is supervising the renovation of Medinah’s Course #1—recently wrote to the presidents of the American, Australian, and European societies of golf course architects, “to read of the changes proposed to the Old Course at St. Andrews.” The R & A is aiming to beef up the course once again, and Doak, for one, objects, on the grounds suggested by Ward-Thomas. But while Doak may be right to object, the reasons he gives for objecting are wrong.

Before getting to that, though, it needs to be established that there is some kind of hysteria. Luckily, Ian Poulter is involved. “I know lets draw a Moustache on the Mona Lisa” reads one of Poulter’s ungrammatical tweets (which is how you know it’s really from him). Another reads “if they make changes to the Old Course St Andrews they are insane.” I can’t reproduce the image here, but it’s worth remembering the look on Poulter’s face at Medinah late on that Saturday afternoon. (Try here: http://www.cbssports.com/golf/blog/eye-on-golf/20408062/usa-10-europe-6-ian-poulter-goes-absolutely-crazy-to-give-europe-a-chance).

Instead of dwelling on Poulter’s look, however, let’s consider the changes a bit more dispassionately. The R & A’s architect, Martin Hawtree, plans to work this winter on the second, seventh, eleventh, and seventeenth holes, and next winter on the third, fourth, sixth, ninth, and fifteenth holes. The headline event seems to be the widening of the Road Hole bunker—the infamous “Sands of Nakajima”—but most of the other work appears relatively innocuous: bringing the greenside bunkers a bit closer in on the second hole, for instance, or lowering part of the eleventh green to create a few more pin positions. According to the R & A, in short, all this amounts to so many nips and tucks.

The reasons for the steps taken by the Royal and Ancient Golf Club of St. Andrews, the body responsible for the Old Course, are clear: Stephen Gallacher, for instance, who won the Dunhill Links Championship at St. Andrews in 2004, told the Scotsman, “I take it they don’t want 59 shot on it.” The increasing distances hit by the professionals require, as they have worldwide, longer and tougher courses, and the Old Course is no longer judged invulnerable to the modern power game. Most of the changes appear, without seeing a detailed map, designed to force professionals to be a bit more precise, whether off the tee or approaching the green.

Doak, however, views all this as, quite literally, sacrilege: “I have felt,” he says in his letter, “for many years that the Old Course was sacred ground to golf architects.” He appeals to history: “It [the Old Course] has been untouched architecturally since 1920, and I believe that it should remain so.” In so saying, Doak casts his lot with Ward-Thomas’ view of the Old Course as the world’s only “natural” course: built, as they say, by sheep and the winds blowing off the North Sea. In this, Doak is not merely off in some technical sense but spectacularly wrong. The Old Course has the “hand of man” all over it.

“We do not know exactly when or how the current layout of the Old Course at St. Andrews developed,” writes the anonymous author of Scottish Golf History at the eponymous website, but as it happens this is not quite true, as the author somewhat uneasily relays within the same sentence: “by 1764 St. Andrews consisted of twelve holes, ten of which were played twice, making a round of twenty-two holes in all.” It was in that year that the Royal & Ancient (not yet known by that name) decided that the first four holes, “which were also the last four holes,” were too short, and turned them into two holes instead. But this was only one of a long line of changes.

These days the Old Course is played counter-clockwise: the nine “out” holes lie closest to the North Sea, to the town’s east, and the nine “in” holes lie just inland. But prior to the nineteenth century the Old Course played clockwise: since there were no separate tee boxes then, play proceeded from the eighteenth green to what is now the seventeenth green, and so on. That created, as might be imagined, some issues: “Because the middle holes … were played in both directions, it meant that golfers might often be waiting, not just for the group in front to clear the green, as today, but also for a party playing in the opposite direction to do the same.” One can only suppose there were occasional disagreements.

The Old Course, as it stands today, is the handiwork of one man: “Old” Tom Morris, the legendary four-time winner of the Open Championship (the British Open to us on the left-hand side of the Atlantic) and father of another four-time winner (“Young” Tom Morris). “Old” Tom seemingly had a hand in half the courses built in the British Isles at the end of the nineteenth century, and from his shop issued virtually all of the great players and designers of the following generation or so. It was Old Tom who decreed that the Old Course should be played counter-clockwise (or widdershins). It was he who built the first and eighteenth greens. And, maybe most interestingly at this time of year, he introduced the concept of mowing to golf. (“Golf was a winter game until the middle of the nineteenth century,” says Scottish Golf History, “when mechanical grass cutters allowed play in the summer as well.”)

In any case, any serious investigation will demonstrate not only that the Old Course wasn’t designed by “Nature” but that, long after Old Tom had been buried in the town cemetery, the course was still undergoing changes. New bunkers, for instance, were constructed in 1949, which is one reason why Peter Dawson, chief executive of the R & A, said in the press release announcing the changes only that the course has been “largely” unaltered over its history: Dawson, knowing the real history of the course, knows it has been tweaked many times.

Doak and Poulter’s stance, in other words, is historically inaccurate. That isn’t really, though, what’s so bothersome about their position. It isn’t in the facts, but rather in their logic, that their argument is ultimately faulty. But to understand why requires knowing something about a human activity whose origins also lie in Scotland; more specifically, just south of the Grampian Mountains.

That’s where Charles Lyell was born in 1797, within sight of the Highlands. He grew up to become a lawyer, but it is for his book The Principles of Geology that he is best known today. And the reason he is known for that book is that it expounded Lyell’s contention that “the present is the key to the past”: what Lyell argued was that it is by examining what happens today, not by consulting religious books for signs of supernatural intervention, that geologists can learn what happened to the earth ages ago.

What Lyell taught, in other words, is that in order to investigate the past the researcher should presume that processes existing today also existed then; that there was no sharp break between the present and the past. But Doak and Poulter’s argument necessarily implies such a break: if we know so much about the changes made to the Old Course since the nineteenth century, why should we presume that, prior to the intervention of “Old” Tom, the course, as Ward-Thomas put it, “knew no architect but nature”?

What Doak and Poulter’s argument rests on, in other words, isn’t an assertion about the superiority of God and/or Nature over Man, but an assertion about the superiority of “Old” Tom Morris over all other golf architects before or since. Which, it must be pointed out, is entirely arguable: as mentioned, at times it seems that Morris had a hand in half the golf courses in Britain. Still, there’s a considerable difference between chalking up a design to the hand of Nature (or the wanderings of sheep) and crediting it to a particular man. Doak certainly may argue that Morris’ conception of the Old Course ought to be preserved—but he’s wrong to suggest that it might be flouting the Divine Will to tinker with it.

Now and Forever

[B]ehold the … ensign of the republic … bearing for its motto, no such miserable interrogatory as “What is all this worth?” nor those other words of delusion and folly, “Liberty first and Union afterwards” …
—Daniel Webster. Second Reply to Hayne. 27 January 1830. 


The work on Medinah’s Course #1, older-but-not-as-accomplished brother to Course #3, began almost as soon as the last putt was struck during this year’s Ryder Cup. Already the ‘scape looks more moon than land, perhaps like a battlefield after the cannon have been silenced. Quite a few trees have been taken out, in keeping with Tom Doak’s philosophy of emphasizing golf’s ground (rather than aerial) game. Still, as interesting as it might be to discuss the new routing Doak is creating, the more significant point about Medinah’s renovation is that it is likely one of the few projects that Doak, or any other architect, has going on American soil right now. Yet today might be one of the best opportunities ever for American golf architecture—assuming, that is, Americans can avoid two different hazards.

The first hazard might be presented by people who’d prefer we didn’t remember our own history: in this case, the fact that golf courses were once weapons in the fight against the Great Depression. Immediately on assuming office in early 1933, Franklin Roosevelt launched the Federal Emergency Relief Administration—which, as Encyclopedia.com reminds us, had the “authority to make direct cash payments to those with no other means of support,” amazing enough in this era when even relief to previously-honored homeowners is considered impossible—and by 1935 that program had evolved into the Works Progress Administration. By 1941 the WPA had invested $11.3 billion (in 1930s dollars!) and 8 million workers in such projects as 1,634 schools, 105 airports, 3,000 tennis courts, 3,300 dams, and 5,800 mobile libraries. And last, but perhaps not least, 103 golf courses.

As per a fine website called The Living New Deal, dedicated to preserving the history of the New Deal’s contributions to American life, it’s possible to find that not only did these courses have some economic impact on their communities and the nation as a whole, but that some good courses got built—good enough to have had an impact on professional golf. The University of New Mexico’s North Course, for instance, was the first golf course in America to measure more than 7,000 yards—today the standard for professional-length courses—and was the site of a PGA Tour stop in 1947. The second 18-hole course in New Orleans’ City Park—a course built by the WPA—hosted the New Orleans Open for decades.

Great architects designed courses built by the WPA. Donald Ross designed the George Wright Golf Course in Boston, opened in 1938. A.W. Tillinghast designed the Black course at Bethpage State Park, opened in the depths of the Depression in 1936. George Wright is widely acclaimed as one of Ross’ best designs, while the Black hosted, in 2002, the first U.S. Open played at a government-owned golf course, and then an encore in 2009. Both Opens were successful: Tiger won the first, Lucas Glover the second, and six players, total, were under par in the two tournaments. In 2012, Golf Digest rated the Black #5 on its list of America’s toughest courses—public or private. (Course #3 at Medinah ranked 16th.)

Despite all that, some time ago one Raymond Keating, at the Foundation for Economic Education, wrote that “Bethpage represents what is wrong with … golf.” He also claimed that “there is no justification whatsoever for government involvement in the golf business.” But, aside from the possibility of getting another Bethpage Black, there are a number of reasons for Americans to invest right now in golf courses and other material improvements to their lives, whether high-speed rail or rebuilt bridges.

The arguments by the economists can be, and are, daunting, but one point that everyone may agree on is that it is unlikely that Americans will ever again be able to borrow money on such attractive terms: as Elias Isquith put it at the website The League of Ordinary Gentlemen, the bond market is “still setting interest rates so low it’s almost begging the US to borrow money.” The dollars that we repay these loans with, in short, will in all likelihood—through the workings of time and inflation—be worth less than the ones on offer now. That’s one reason why Paul Krugman, the Nobel Prize-winning economist, says that “the danger for next year is not that the [federal] deficit will be too large but that it will be too small, and hence plunge America back into recession.” By not taking advantage of this cheap money that is, essentially, just lying there, America is effectively leaving productive forces (like Tom Doak’s company) idle, instead of engaging them in work: the labor that grows our economy.
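
The borrowing point is simple arithmetic; the rates below are invented round numbers, purely to illustrate how inflation can make a repaid dollar worth less than a borrowed one.

    # Hypothetical terms: borrow at 2% nominal for 10 years, with 2.5% annual inflation
    principal = 1_000_000
    nominal_rate, inflation, years = 0.02, 0.025, 10

    repayment = principal * (1 + nominal_rate) ** years     # dollars eventually paid back
    real_value = repayment / (1 + inflation) ** years       # in today's purchasing power

    # real_value comes out below the principal: in real terms, the loan is nearly free
    print(round(repayment), round(real_value))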

America thus has an historic opportunity for golf: given that American companies, like Tom Doak’s or Rees Jones’ or Pete Dye’s or Ben Crenshaw and Bill Coore’s, or any number of others, are at the forefront of golf design today, it would be possible to create any number of state-of-the-art golf courses that would, first, stimulate our economy and, second, reward American citizens with some of the finest facilities on the planet at essentially little or no cost. And, it might be worth bringing up, that could even help us with that troublesome series of golf events known as the Ryder Cup: a generation of golfers weaned on fine public, instead of private, courses might understand the ethos of team spirit better than the last several ensembles fielded by our side.

Unless, that is, another faction of American citizens has its way. On the outskirts of San Francisco there is a golf course known as Sharp Park. It was originally designed by Alister MacKenzie, the architect who also designed Cypress Point and Pasatiempo, in California, public golf courses for both the University of Michigan and the Ohio State University (both thought to be among the finest college courses in the world)—and also a course for a small golf club named the Augusta National Golf Club. Sharp Park remains the only public course MacKenzie designed on the ocean, and his goal in designing it was to create “the finest municipal golf course in America”—a goal that, present-day conditioning aside, many experts would say he succeeded, or nearly succeeded, in achieving.

Unfortunately, a small number of “environmentalists,” as reported by San Francisco’s “alternative” newspaper, SF Weekly, now “want the site handed over to the National Park Service for environmental restoration.” According to a story in Golf Digest, the activists “contend it harms two endangered species, the San Francisco garter snake and California red-legged frog.” A year ago, though, a federal judge found that, contrary to the environmentalists’ accusations, “experts for both sides agree[d] that the overall Sharp Park frog population has increased during the last 20 years.” Ultimately, in May of this year, the judge found the evidence that the golf course’s existence harmed the two endangered species so weak that the court in effect dismissed the lawsuit, saying it was better that the public agencies responsible for monitoring the two species, rather than the judiciary, continue to do their job.

I bring all of this up because, in investigating the case of Sharp Park, it is hard to avoid concluding that the source of the environmentalists’ actions wasn’t so much concern for the two species—which, it must be pointed out, appear to be doing fine, at least within the boundaries of the park—as animosity towards the sport of golf itself. The “anti-Sharp Park” articles I consulted, for instance, such as the SF Weekly piece mentioned above, did not see fit to note Alister MacKenzie’s involvement in the course’s design. Omissions like that are, in my view, a serious weakness in any claim to objectivity regarding the case.

Still, regardless of the facts in this particular case, the instance of Sharp Park may illustrate how a particular form of “leftism” can be, in its own way, as defeatist and gloomy as the species of “conservatism” that would condemn us to lifetimes of serving the national debt. Had we had a mass “environmental movement” in the 1930s, in other words, how many of those golf courses—not to mention all of the other projects constructed by the WPA and other agencies—would have gotten built?

That isn’t to say, of course, that anyone is in favor of dirty air or water; far from it. It is to say, though, that for a lot of so-called leftists the problem with America is Americans, and that that isn’t too far from saying, with conservatives and Calvin Coolidge, that the “business of the American people is business.” We can choose to serve other masters, one supposes—whether they be of the future or the past—but I seem to recall that America isn’t supposed to work that way. The best articulation of the point, as it happens, may have been delivered precisely one hundred and forty-nine years ago, on the 19th of November, across a shredded landscape over which the guns had fallen quiet.

I’ll give you a hint: it included the phrase “of the people, by the people, for the people.”