Buck Dancer’s Choice

Buck Dancer’s Choice: “a tune that goes back to Saturday-night dances, when the Buck, or male partner, got to choose who his partner would be.”
—Taj Mahal. Oooh So Good ‘n’ Blues. (1973).

 

“Goddamn it,” Scott said, as I was driving down the Kennedy Expressway towards Medinah Country Club. Scott is another caddie I sometimes give rides to; he’s living in the suburbs now and has to take the train into the city every morning to get his methadone pill, after which I pick him up and take him to work. On this morning, Scott was distracting himself, as he often does, from the traffic outside by playing, on his phone, the card game known as spades—a game in which, somewhat like contract bridge, two players team up against an opposing partnership. He had been matched with a bad partner—a player who, it came to light later, had declined to beat an opposing ten of spades with the king he held, playing a three of spades instead. (In so doing, Scott’s incompetent partner wasted the three while receiving nothing in return.) Since, I agree, that sounds relentlessly boring, I wouldn’t have paid much attention to the whole complaint—until I realized that Scott’s grumble about his partner not only described the chief event of the previous night’s baseball game, but also explained why so many potential Democratic voters will likely sit out this election. After all, arguably the best Democratic candidate for the presidency this year will not be on the ballot in November.

What had happened the previous night was described on ESPN’s website as “one of the worst managerial decisions in postseason history”: in a one-game, extra-innings playoff between the Baltimore Orioles and the Toronto Blue Jays, Orioles manager Buck Showalter used six relief pitchers after starter Chris Tillman got pulled in the fifth inning. But he did not order his best reliever, Zach Britton, into the game at all. During the regular season, Britton had been one of the best relief pitchers in baseball; as ESPN observed, Britton had allowed precisely one earned run since April, and as Jonah Keri wrote for CBS Sports, over the course of the year Britton posted an Earned Run Average (.53) that was “the lowest by any pitcher in major league history with that many innings [67] pitched.” (And as Deadspin’s Barry Petchesky remarked the next day, Britton had “the best ground ball rate in baseball”—which, given that the Orioles ultimately lost on a huge, moon-shot walk-off home run by Edwin Encarnacion, seems especially pertinent.) Despite the fact that the game went 11 innings, Showalter did not put Britton on the mound even once—which is to say that the Orioles ended their season with one of their best weapons sitting on the bench.

Showalter had the king of spades in his hand—but neglected to play him when it mattered. He defended himself later by saying, essentially, that he is the manager of the Baltimore Orioles, and that everyone else was lost in hypotheticals. “That’s the way it went,” the veteran manager said in the post-game press conference—as if the “way it went” had nothing to do with Showalter’s own choices. Some journalists speculated, in turn, that Showalter’s choices were motivated by what Deadspin called “the long-held, slightly-less-long-derided philosophy that teams shouldn’t use their closers in tied road games, because if they’re going to win, they’re going to need to protect a lead anyway.” On this view, Showalter could not have known how long the game would last, and could only know that, until his team scored some runs, the game would continue. If so, then it might be possible to lose by playing your king of spades too early.

Yet, not only did Showalter deny that such was a factor in his thinking—“It [had] nothing to do with ‘philosophical,’” he said afterwards—but such a view takes things precisely backward: it’s the position that imagines the Orioles scoring some runs first that’s lost in hypothetical thinking. Indisputably, the Orioles needed to shut down the Jays in order to continue the game; the non-hypothetical problem presented to the Orioles manager was that the O’s needed outs. Showalter had the best instrument available to him to make those outs … but didn’t use him. And that is to say that it was Showalter who got lost in his imagination, not the critics. By not using his best pitcher Showalter was effectively reacting to an imagined, hypothetical scenario, instead of responding to the actual facts playing out before him.

What Showalter was flouting, in other words, was a manner of thinking that is arguably the reason for what successes there are in the present world: probability, the first principle of which is known as the Law of Large Numbers. First conceived by the Italian Gerolamo Cardano during the sixteenth century, and later formalized by the Swiss mathematician Jacob Bernoulli, whose Ars Conjectandi appeared in 1713, the Law of Large Numbers holds that, as Bernoulli put it, “the more observations … are taken into account, the less is the danger of straying.” Or, that the more observations, the less the danger of reaching wrong conclusions. What Bernoulli is saying, in other words, is that in order to demonstrate the truth of something, the investigator should look at as many instances as possible: a rule that is, largely, the basis for science itself.
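As a minimal illustration of Bernoulli's point (my own sketch, not anything found in Ars Conjectandi), here is a short Python simulation of fair coin flips: the observed frequency of heads tends to settle toward the true probability, 0.5, as the number of flips grows.

```python
import random

# Minimal Law of Large Numbers illustration: the more flips we observe, the
# closer the observed frequency of heads tends to get to the true value, 0.5.
random.seed(0)
for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: observed frequency of heads = {heads / n:.3f}")
```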

What the Law of Large Numbers says then is that, in order to determine a course of action, it should first be asked, “what is more likely to happen, over the long run?” In the case of the one-game playoff, for instance, it’s arguable that Britton, who has one of the best statistical records in baseball, would have been less likely to give up the Encarnacion home run than the pitcher who did (Ubaldo Jimenez, 2016 ERA 5.44) was. Although Jimenez, for example, was not a bad ground ball pitcher in 2015—he had a 1.85 ground ball to fly ball ratio that season, putting him 27th out of 78 pitchers, according to SportingCharts.com—his ratio was dwarfed by Britton’s: as J.J. Cooper observed just this past month for Baseball America, Britton is “quite simply the greatest ground ball pitcher we’ve seen in the modern, stat-heavy era.” (Britton faced 254 batters in 2016; only nine of them got an extra-base hit.) Who would you rather have on the mound in a situation where a home run (which is obviously a fly ball) can end not only the game, but the season?
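To see how such rates bear on the choice Showalter faced, here is a rough back-of-the-envelope sketch (mine, not ESPN's or Deadspin's analysis). Britton's nine extra-base hits allowed over 254 batters faced comes from the figures above; the rate assigned to the comparison pitcher is a placeholder assumption, not Jimenez's actual 2016 number.

```python
# Chance that at least one batter in an inning's worth of hitters gets an
# extra-base hit. Britton's 9-of-254 figure is from the text; the comparison
# rate below is hypothetical, used only for illustration.
def p_at_least_one(rate, batters=4):
    # Treats each plate appearance as independent -- a simplification.
    return 1 - (1 - rate) ** batters

britton_rate = 9 / 254         # roughly 3.5% extra-base hits per batter faced
comparison_rate = 0.09         # assumed, for illustration only

print(f"Britton: {p_at_least_one(britton_rate):.1%} per inning")
print(f"Hypothetical comparison pitcher: {p_at_least_one(comparison_rate):.1%} per inning")
```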

What Bernoulli’s (and Cardano’s) Law of Large Numbers does is define what we mean by the concept, “the odds”: that is, the outcome that is most likely to happen. Bucking the odds is, in short, precisely the crime Buck Showalter committed during the game with the Blue Jays: as Deadspin’s Petchesky wrote, “the concept that you maximize value and win expectancy by using your best pitcher in the highest-leverage situations is not ‘wisdom’—it is fact.” As Petchesky goes on to say, “the odds are the odds”—and Showalter, by putting all those other pitchers on the mound instead of Britton, ignored those odds.

As it happens, “bucking the odds” is just what the Democratic Party may be doing by adopting Hillary Clinton as their nominee instead of Bernie Sanders. As a number of articles this past spring noted, at that time many polls were saying that Sanders had better odds of beating Donald Trump than Clinton did. In May, Linda Qiu and Louis Jacobson noted in The Daily Beast that Sanders was making the argument that “he’s a better nominee for November because he polls better than Clinton in head-to-head matches against” Trump. (“Right now,” Sanders said then on the television show Meet the Press, “in every major poll … we are defeating Trump, often by big numbers, and always at a larger margin than Secretary Clinton is.”) At the time, the evidence suggested Sanders was right: “Out of eight polls,” Qiu and Jacobson wrote, “Sanders beat Trump eight times, and Clinton beat Trump seven out of eight times,” and “in each case, Sanders’s lead against Trump was larger.” (In fact, usually by double digits.) But, as everyone now knows, that argument did not help to secure the nomination for Sanders: in July, Clinton became the Democratic nominee.

To some, that ought to be the end of the story: Sanders tried, and (as Showalter said after his game), “it didn’t work out.” Many—including Sanders himself—have urged fellow Democrats to put the past behind them and work towards Clinton’s election. Yet, that’s an odd position to take regarding a campaign that, above everything, was about the importance of principle over personality. Sanders’ campaign was, if anything, about the same point enunciated by William Jennings Bryan at the 1896 Democratic National Convention, in the famous “Cross of Gold” speech: the notion that the “Democratic idea … has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.” Bryan’s idea, as ought to be clear, has certain links to Bernoulli’s Law of Large Numbers—among them, the notion that it’s what happens most often (or to the most people) that matters.

That’s why, after all, Bryan insisted that the Democratic Party “cannot serve plutocracy and at the same time defend the rights of the masses.” Similarly—as Michael Kazin of Georgetown University described the point in May for The Daily Beast—Sanders’ campaign fought for a party “that would benefit working families.” (A point that suggests, it might be noted, that the election of Sanders’ opponent, Clinton, would benefit others.) Over the course of the twentieth century, in other words, the Democratic Party stood for the majority against the depredations of the minority—or, to put it another way, for the principle that you play the odds, not hunches.

“No past candidate comes close to Clinton,” wrote FiveThirtyEight’s Harry Enten last May, “in terms of engendering strong dislike a little more than six months before the election.” It’s a reality that suggests, in the first place, that the Democratic Party is hardly attempting to maximize its win expectancy. But beyond those pragmatic concerns about her electability, Clinton’s candidacy represents—from the particulars of her policy positions, her statements to Wall Street financial types, and the existence of electoral irregularities in Iowa and elsewhere—a repudiation, not simply of Bernie Sanders the person, but of the very idea of the importance of the majority that the Democratic Party once proposed and defended. What that means is that, even were Hillary Clinton to be elected in November, the Democratic Party—and those it supposedly represents—will have lost the election.

But then, you probably don’t need any statistics to know that.

The Color of Water

No one gets lucky til luck comes along.
—Eric Clapton. “It’s In The Way That You Use It.” Theme song for The Color of Money (1986).

 

 

The greenish tint to the Olympic pool wasn’t the only thing fishy about the water in Rio last month: a “series of recent reports,” Patrick Redford of Deadspin wrote, “assert that there was a current in the pool at the Rio Olympics’ Aquatic Stadium that might have skewed the results.” Or—to make the point clear in a way the pool wasn’t—the water in the pool flowed in such a way that it gave the advantage to swimmers starting in certain lanes: as Redford writes, “swimmers in lanes 5 through 8 had a marked advantage over racers in lanes 1 through 4.” According, however, to ESPN’s Michael Wilbon—a noted African-American sportswriter—such results shouldn’t be of concern to people of color: “Advanced analytics,” Wilbon wrote this past May, “and black folks hardly ever mix.” To Wilbon, the rise of statistical analysis poses a threat to African-Americans. But Wilbon is wrong: in reality, the “hidden current” in American life holding back black Americans, and indeed all Americans, is not analytics—it’s the suspicions of supposedly “progressive” people like Michael Wilbon.

The thesis of Wilbon’s piece, “Mission Impossible: African-Americans and Analytics”—published on ESPN’s race-themed website, The Undefeated—was that black people have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” Whereas, in an earlier age, white people like Thomas Jefferson questioned black people’s literacy, nowadays, it seems, it’s ok to question their ability to understand mathematics—a “ridiculous” (according to The Guardian’s Dave Schilling, another black journalist) stereotype that Wilbon attempts to paint as, somehow, politically progressive: Wilbon, that is, excuses his absurd beliefs on the basis that analytics “seems to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” Yet, while Wilbon isn’t alone in his distrust of analytics, it’s actually just that “philosophy” that may hold the most promise for political progress—not only for African-Americans, but every American.

Wilbon’s argument, after all, depends on a thesis familiar from the classrooms of American humanities departments: when Wilbon says the “greater the dependence on the numbers, the more challenged people are to tell (or understand) the narrative without them,” he is echoing an argument deployed every semester in university seminar rooms throughout the United States. Wilbon is, in other words, repeating the contention, by now essentially an article of faith within the halls of the humanities, that without a framework—or (as it’s sometimes called) a “paradigm”—raw statistics are meaningless: the doctrine sometimes known as “social constructionism.”

That argument holds, as nearly everyone who has taken a class in the departments of the humanities in the past several generations knows, that “evidence” only points in a certain direction once certain baseline axioms are assumed. (An argument first put about, by the way, by the physician Galen in the second century AD.) As American literary critic Stanley Fish once rehearsed the argument in the pages of the New York Times, according to its terms investigators “do not survey the world in a manner free of assumptions about what it is like and then, from that (impossible) disinterested position, pick out the set of reasons that will be adequate to its description.” Instead, Fish went on, researchers “begin with the assumption (an act of faith) that the world is an object capable of being described … and they then develop procedures … that yield results, and they call those results reasons for concluding this or that.” According to both Wilbon and Fish, in other words, the answers people find depend not on the structure of reality itself, but on the baseline assumptions the researcher begins with: what matters is not the raw numbers, but the contexts within which the numbers are interpreted.

What’s important, Wilbon is saying, is the “narrative,” not the numbers: “Imagine,” Wilbon says, “something as pedestrian as home runs and runs batted in adequately explaining [Babe] Ruth’s overall impact” on the sport of baseball. Wilbon’s point is that a knowledge of Ruth’s statistics won’t tell you about the hot dogs the great baseball player ate during games, or the famous “called shot” during the 1932 World Series—what he is arguing is that statistics only point toward reality: they aren’t reality itself. Numbers, by themselves, don’t say anything about reality; they are only a tool with which to access reality, and by no means the only tool available: in one of Wilbon’s examples, Steph Curry, the great guard for the NBA’s Golden State Warriors, knew he shot better from the corners—an intuition that later statistical analysis bore out. Wilbon’s point is that both Curry’s intuition and statistical analysis told the same story, implying that there’s no fundamental reason to favor one road to truth over the other.

In a sense, to be sure, Wilbon is right: statistical analysis is merely a tool for getting at reality, not reality itself, and certainly other tools are available. Yet, it’s also true that, as statistician and science fiction author Michael F. Flynn has pointed out, astronomy—now accounted one of the “hardest” of physical sciences, because it deals with obviously real physical objects in space—was once not an observational science, but instead a mathematical one: in ancient times, Chinese astronomers were called “calendar-makers,” and a European astronomer was called a mathematicus. As Flynn says, “astronomy was not about making physical discoveries about physical bodies in the sky”—it was instead “a specialized branch of mathematics for making predictions about sky events.” Without telescopes, in other words, astronomers did not know what, exactly, say, the planet Mars was: all they could do was make predictions, based on mathematical analysis, about what part of the sky it might appear in next—predictions that, over the centuries, became perhaps-startlingly accurate. But as a proto-Wilbon might have said in (for instance) the year 1500, such astronomers had no more direct knowledge of what Mars is than a kindergartner has of the workings of the Federal Reserve.

In the same fashion, Wilbon might point out about the swimming events in Rio, there is no direct evidence of a current in the Olympic pool: the researchers who assert that there was such a current base their arguments on statistical evidence from the races, not on an examination of the conditions of the pool. Yet the evidence for the existence of a current is pretty persuasive: as the Wall Street Journal reported, fifteen of the sixteen swimmers, both men and women, who swam in the 50-meter freestyle event finals—the one event most susceptible to the influence of a current, because swimmers only swim one length of the pool in a single direction—swam in lanes 4 through 8, and swimmers who swam in outside lanes in early heats and inside lanes in later heats actually got slower. (A phenomenon virtually unheard of in top-level events like the Olympics.) Barry Revzin, of the website Swim Swam, found that a given Olympic swimmer picked up “a 0.2 percent advantage for each lane … closer to [lane] 8,” Deadspin’s Redford reported, and while that could easily seem “inconsequentially small,” Redford remarked, “it’s worth pointing out that the winner in the women’s 50 meter freestyle only beat the sixth-place finisher by 0.12 seconds.” It’s a very small advantage, in other words, which is to say that it’s very difficult to detect—except by means of the very same statistical analysis distrusted by Wilbon. But although it is a seemingly small advantage, it is enough to determine the winner of the gold medal. Wilbon, in other words, is quite right to say that statistical evidence is not a direct transcript of reality—he’s wrong, however, if he is arguing that statistical analysis ought to be ignored.
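The arithmetic behind that comparison is simple enough to sketch. The 0.2-percent-per-lane figure and the 0.12-second margin come from the reporting quoted above; the roughly 24-second winning time for a women's 50-meter freestyle final is my own assumption, used only to convert the percentage into seconds.

```python
# Back-of-the-envelope conversion of Revzin's per-lane estimate into seconds.
# The ~24-second winning time is an assumption; the 0.2% per lane and the
# 0.12-second first-to-sixth margin are the figures cited in the text.
winning_time = 24.0          # seconds (assumed for a women's 50m free final)
advantage_per_lane = 0.002   # 0.2 percent per lane closer to lane 8
lanes_apart = 4              # e.g., lane 4 versus lane 8

time_saved = winning_time * advantage_per_lane * lanes_apart
print(f"Estimated edge across {lanes_apart} lanes: about {time_saved:.2f} seconds")
print("Reported margin between first and sixth place: 0.12 seconds")
```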

To be fair, Wilbon is not arguing exactly that: “an entire group of people,” he says, “can’t simply refuse to participate in something as important as this new phenomenon.” Yet Wilbon is worried about the growth of statistical analysis because he views it as a possible means for excluding black people. If, as Wilbon writes, it’s “the emotional appeal,” rather than the “intellect[ual]” appeal, that “resonates with black people”—a statement that, had it been written by a white journalist, would immediately have caused a protest—then, Wilbon worries, in a sports future run “by white, analytics-driven executives,” black people will be even further on the outside looking in than they already are. (And that’s pretty far outside: as Wilbon notes, “Nate McMillan, an old-school, pre-analytics player/coach, who was handpicked by old-school, pre-analytics player/coach Larry Bird in Indiana, is the only black coach hired this offseason.”) Wilbon’s implied stance, in other words—implied because he nowhere explicitly says so—is that since statistical evidence cannot be taken at face value, but only through screens and filters that owe more to culture than to the nature of reality itself, the promise (and premise) of statistical analysis could be seen as a kind of ruse designed to perpetuate white dominance at the highest levels of the sport.

Yet there are at least two objections to make about Wilbon’s argument: the first being the empirical observation that in U.S. Supreme Court cases like McCleskey v. Kemp (in which the petitioner argued that, according to statistical analysis, murderers of white people in Georgia were far more likely to receive the death penalty than murderers of black people), or Teamsters v. United States (in which—according to Encyclopedia.com—the Court ruled, on the basis of statistical evidence, that the Teamsters union had “engaged in a systemwide practice of minority discrimination”), statistical analysis has been advanced to demonstrate the reality of racial bias. (A demonstration against which, by the way, time and again conservatives have countered with arguments against the reality of statistical analysis that essentially mirror Wilbon’s.) To think then that statistical analysis could be inherently biased against black people, as Wilbon appears to imply, is empirically nonsense: it’s arguable, in fact, that statistical analysis of the sort pioneered by people like sociologist Gunnar Myrdal has done at least as much as, if not more than, (say) classes on African-American literature to combat racial discrimination.

The more serious issue, however, is a logical objection: Wilbon’s two assertions are in conflict with each other. To reach his conclusions, Wilbon ignores (like others who make similar arguments) the implications of his own reasoning: statistics ought to be ignored, he says, because only “narrative” can grant meaning to otherwise meaningless numbers—but, if it is so that numbers themselves cannot “mean” without a framework to grant them meaning, then they cannot pose the threat that Wilbon says they might. In other words, if Wilbon is right that statistical analysis is biased against black people, then it means that numbers do have meaning in themselves, while conversely if numbers can only be interpreted within a framework, then they cannot be inherently biased against black people. By Wilbon’s own account, in other words, nothing about statistical analysis implies that such analysis can only be pursued by white people, nor could the numbers themselves demand only a single (oppressive) use—because if that were so, then numbers would be capable of providing their own interpretive framework. Wilbon cannot logically advance both propositions simultaneously.

That doesn’t mean, however, that Wilbon’s argument—the argument, it ought to be noted, of many who think of themselves as politically “progressive”—is not having an effect: it’s possible, I think, that the relative success of this argument is precisely what is causing Americans to ignore a “hidden current” in American life. That current could be described by an “analytical” observation made by professors Sven Steinmo and Jon Watts some two decades ago: “No other democratic system in the world requires support of 60% of legislators to pass government policy”—an observation that, in turn, may be linked to the observable reality that, as political scientists Frances E. Lee and Bruce Oppenheimer have noted, “less populous states consistently receive more federal funding than states with more people.” Understanding the impact of these two observations, and their effects on each other, would, I suspect, throw a great deal of light on the reality of American lives, white and black—yet it’s precisely the sort of reflection that the “social construction” dogma advanced by Wilbon and company appears specifically designed to avoid. While to many, even now, the arguments for “social construction” and such might appear utterly liberatory, it’s possible to tell a tale in which it is just such doctrines that are the tools of oppression today.

Such an account would be, however—I suppose Michael Wilbon or Stanley Fish might tell us—simply a story about the one that got away.

Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
—Julius Caesar. Act I, Scene 2.

 

One of Ryan’s steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his “lawyer-caddie.” Ostensibly, it’s meant as a kind of joke, although it’s not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—and Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments, Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are “jobs … hard to come by in recent years” for would-be lawyers, but the jobs that there are come in two flavors—either “something that pays in the modest five figures” (which implies that Ryan might never get out of debt), “or something that pays much better” (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has, in other words, bifurcated: something that, according to a 2010 article called “Talent Grab” by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, “salaries paid to high-level professionals—‘talent’—started to rise.” There are at least two possible explanations for that rise: Gladwell’s is that “members of the professional class” have learned “from members of the working class”—that, in other words, “Talent” has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) might be favored by campus “leftists”: to them, widening inequality might be explained by the same reason that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but in reality, Lord Cornwallis’ forbearance really is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called “Mau Mau Uprising”—also known as the “Kenya Emergency”—during the 1950s: although much of the documentation only came out recently, after a long legal battle—which is how we know about this in the detail we do now at all—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II, many Kenyans, like the inhabitants of a lot of other European colonies, demanded independence, and like a lot of other European powers, Britain would not give it to them. (A response with which Americans ought to be familiar through our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet unlike the American experience, which largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally-organized troops against each other, what makes the Kenyan episode relevant is that it was fought using the doctrines of counterinsurgency: that is, the “best practices” for the purposes of ending an armed independence movement. In Kenya, this meant “slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes,” as Mark Curtis reported in 2003’s Web of Deceit: Britain’s Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called “enclosed villages.” Those gathered were then “questioned” (i.e., tortured) in order to find those directly involved in the independence movement, and so forth. It’s a catalogue of horror, but what’s more horrifying is that the methods being used in Kenya were also being used, at precisely the same moment, half a world away, by more or less the same people: at the same time as the “Kenya Emergency,” the British Empire was also fighting in what’s called the “Malayan Emergency.”

In Malaya, from 1948 to 1960 the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model for counterinsurgency war that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also involved in counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it’s important to remember that the very term “concentration camp” was first widely used in English during the Second Boer War of 1899-1902. “Best practice” in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It’s perhaps no coincidence, in other words, that for millennia elite education in the Western world began with Julius Caesar’s Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: “Gaul is divided into three parts …”), the Gallic Wars could also be described as a kind of “how to” manual regarding “pacification” campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, “urged them to take up arms in order to win liberty for all.” In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that “if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them.” The book also describes torture in several places throughout (though, to be sure, it is usually described as the work of the Gauls, not the Romans). Hostage-taking and torture were, in other words, common stuff in elite European education—the British Army did not suddenly create these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of “counterinsurgency,” why didn’t the British Army use them during the “American Emergency” of the 1770s?

According to Pando Daily columnist “Gary Brecher” (a pseudonym for John Dolan), perhaps the “British took it very, very easy on us” during the Revolution because Americans “were white, English-speaking Protestants like them.” In fact, that leniency may have been the reason the British lost the war—at least, according to Lieutenant Colonel Paul Montanus’ (U.S.M.C.) paper for the U.S. Army War College, “A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781.” To Montanus, the British Army “needed to execute a textbook pacification program”—instead, the actions that army took “actually inflamed the [populace] and pushed them toward the rebel cause.” Montanus, in other words, essentially asks the question: why didn’t the Royal Navy sail up the Potomac and grab Martha Washington? Brecher’s point is pretty valid: there simply aren’t a lot of reasons to explain just why Lord Cornwallis or the other British commanders didn’t do that other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet, it might be pointed out that just what the British officers saw is still an open question: did they see “cultural Englishmen”—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence as a result simply of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation he describes in the professions in American life these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that “‘If we can get rid of the system as we now know it, then Bobby Bonds’s son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’” (Even then, when Barry Bonds was around ten years old, people knew that Barry was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds’ earning power: Barry Bonds “ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined” (emp. added). Barry Bonds’ success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-5 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune’s Paul Sullivan. And so on: it doesn’t take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited just to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company”—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, they could return a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book’s advance. Safire’s attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow’s attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the “acceptability clause” of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—“‘was being fraudulently exercised’” because the reason Morrow wanted to reject Safire’s book wasn’t the one Morrow gave (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow’s calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have since become (supposedly) far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they more or less negotiate with publishing houses as equals, rather than (as before) as, effectively, servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the “Talent” (Gladwell’s word) eventually triumphs over “Capital.”

As I mentioned, for a variety of reasons—in the first place, the justification for the study of “culture,” which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, “the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests”—to a lot of academic leftists these days that triumph would best be explained by the fact that, say, George Lucas and the head of Twentieth-Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that “Talent” has been rewarded by “Capital” because of a shared “culture” between the two (apparent) antagonists—just as Britain treated their American subjects differently than their Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malaysians and the Boer …). (Which was either “culture”—or money.) But there’s a problem with this analysis: it doesn’t particularly explain Ryan’s situation. After all, if this hypothesis were correct, it would appear to imply that—since Ryan shares a great deal “culturally” with the power elite that employs him on the golf course—Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a “cultural” explanation for why some people in a variety of professions have become compensated far beyond that even of their fellows within the profession. Instead, he prefers to explain what happened beginning in the 1970s as being instances of people learning how to use a tool initially widely used by organized labor: the strike.

It’s an explanation that has an initial plausibility about it, in the first place, because of Marvin Miller’s personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players’ union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the “talents” Gladwell writes about made use of either a kind of one-person strike, or the threat of it, to get their way: Lauren Hutton, for example, “decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract”; in 1975 “Hollywood agent Tom Pollock,” demanded “that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars”; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire’s book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor’s) was, for instance, telling the publishers that he would withhold the product of his client’s labor until his demands were met, just as the United Auto Workers shut down General Motors’ Flint, Michigan, assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it’s arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie Rebel Without a Cause, there’s a scene in which James Dean’s character gets involved in what the kids in his town call a “chickie run”—what some Americans know as the game of “Chicken.” In the variant played in the movie, two players each drive a car towards the edge of a cliff—the “winner” of the game is the one who exits his car closest to the edge, thus demonstrating his “courage.” (The other player is, hence, the “chicken,” or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without A Cause thus describes what happened when Moscow and Washington D.C. faced each other in October 1962, the incident later called the Cuban Missile Crisis. (“We’re eyeball to eyeball,” then-U.S. Secretary of State Dean Rusk said later about those events, “and I think the other fellow just blinked.”) The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to sequels to Star Wars to Lucas, or Revlon did when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played “Chicken”—and won.

To those committed to a “cultural” explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared “culture” is simply question begging: after all, there have been plenty of people, and unions, that have played games of “Chicken”—and lost. So by itself the game of “Chicken,” it might be argued, explains nothing about what led employers to give way. Yet, at two points, the “cultural” explanation also is lacking: in the first place, it doesn’t explain how “rebel” figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries. If it were a matter of “culture,” in other words, it’s hard to see how the same technique could work no matter what the underlying business was—or, if “culture” is the explanation, it’s difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, “culture” is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn’t why a number of people across industries revolted against their masters—just as it likely doesn’t especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually “Capital” gave in to “Talent”: “when Miller pushed back, the owners capitulated,” Gladwell says—so quickly, in fact, that even Miller was surprised. In all of these industries, “Capital” gave in so easily that it’s hard to understand why there was any dispute in the first place.

That’s precisely why the ease of that victory is grounds for being suspicious: surely, if “Capital” really felt threatened by this so-called “talent revolution” it would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II: when Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if he didn’t “want to be carried out in a wooden box.” Surely, although industries like sports or publishing are probably governed by less hard-eyed people than automakers, neither are they so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, nor even the fact that they shared a common language. On the other hand, neither does it seem likely that anyone might concede after a minor threat or two. Still, I’d say that thinking about these events using Gladwell’s terms makes a great deal more sense than the “cultural” explanation—not because of the final answer they provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry. This time, that industry is the same one where the “cultural” explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of “Talent” at the expense of “Capital”; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, “the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks.” The “Talent Revolution,” in short, has come to the academy too. Yet, if so, it’s had some curious consequences: if “Talent” were something mysterious, one might suspect that it could come from anywhere—yet academia appears to believe that it comes from the same few sources.

As Joel Warner and Aaron Clauset, an assistant professor of computer science at the University of Colorado, recently wrote in Slate, “18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors.” (In fact, when it comes to history, “the top 10 schools produce three times as many future professors as those ranked 11 through 20.”) This, one might say, is curious indeed: why should “Talent” be continually discovered in the same couple of places? It’s as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

“Institutional affiliation,” as Shea puts the point, “has come to function like inherited wealth” within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, and not their comrades at lower levels of organized baseball. “Since 1976,” Patrick Redford noted in Deadspin recently, major leaguers’ “salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent.” Minor league baseball players can, Redford says, “barely earn a living while playing baseball”—it’s not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has a piece, for instance, describing his playing days in the Jayhawk League in Kansas: “our per diem,” Hines reports, “was a measly 15 dollars per day.”) And while it might be difficult to have much sympathy for minor league baseball players—They get to play baseball!—that’s exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny that minor leaguers are subject to the Fair Labor Standards Act: as the writer known as “the Legal Blitz” wrote for Above the Law: Redline, “Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers.” In other words, because baseball is a “game” and not a business, baseball doesn’t have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale’s Peter Brooks told the New York Times when Yale’s Graduate Employees and Student Organization (GESO) went out on strike in the late 1990s) that graduate-student teachers are “among the blessed of the earth,” not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale’s administration argued “that graduate students can’t possibly be workers, since they are admitted (not hired) and receive stipends (not wages).” But if the pastoral rhetoric—a rhetoric that excludes considerations common to other pursuits, like gambling—surrounding both baseball and the academy is cut away, the position of universities is much the same as Major League Baseball’s, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries in at least one respect: as presently constituted, they’re dependent on small numbers of highly productive people—which is just why “Capital” should have tumbled so easily in the way Gladwell described in the 1970s.

Just as scholars are only very rarely productive early in their careers, in other words, so too are baseball players: as Jim Callis noted for Baseball America (as cited by the paper “Initial Public Offerings of Baseball Players,” by John D. Burger, Richard D. Grayson, and Stephen Walters), “just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star.” Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, most academic production is done by just a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as “Price’s Law,” “Lotka’s Law,” or “Bradford’s Law.” (Or, there’s the notion described as “Sturgeon’s Law”: “90% of everything is crap.”) Hence, rationally enough, universities (and baseball teams) only want to pay for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters, when with the same money you can buy just one superstar?
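That kind of concentration is easy to see in a toy simulation (a sketch of mine, not data from Callis, Price, or Lotka; the lognormal shape of the "productivity" scores is simply an assumption chosen to be skewed, and the exact percentages depend on it).

```python
import random
import math

# Toy illustration of skewed productivity, in the spirit of Price's Law.
# The lognormal distribution is an assumption, not data from any cited study.
random.seed(42)
N = 10_000
output = sorted((random.lognormvariate(0, 1.5) for _ in range(N)), reverse=True)
total = sum(output)

top_sqrt = math.isqrt(N)   # Price's Law singles out the sqrt(N) most productive
for label, k in (("sqrt(N) most productive", top_sqrt), ("top 20% of producers", N // 5)):
    share = sum(output[:k]) / total
    print(f"{label} ({k} of {N}): {share:.0%} of total output")
```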

That might explain just why it is that William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren’t persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interests to reward the “superstars” wildly, because that bought them the most production at the cheapest rate. Why pay for a ton of guys to hit your home runs, you might say—when, for much less, you can buy Barry Bonds? (In 2001, all major leaguers collectively hit over 5000 home runs, for instance—but Barry Bonds hit 73 of them, in a context in which a solid regular might hit 20.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father’s teammates did for their entire careers): given that Barry Bonds is so much more productive than his peers, it’s arguable that, despite his vast salary, he was actually underpaid.

If you assign a value to each home run, that is, Bonds got a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take all the home runs hit by all the major leaguers in a given season, and then work out the average price per home run. Although I haven’t actually done the calculation, I would bet that the average price is more than the price per home run received by Barry Bonds—which isn’t even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there’s no way they can get 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and the wildly undervalued wages paid to the (vastly more numerous) non-superstars.
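Here is what that calculation would look like in miniature. Only Harper's line is grounded in the text ($2.5 million salary and 42 home runs in 2015, which works out to roughly the $59,524 cited above); the other players are invented stand-ins for better-paid sluggers, so the resulting average illustrates only the shape of the comparison, not any real league-wide figure.

```python
# Sketch of the price-per-home-run comparison the essay describes but does not
# carry out. Only Harper's figures are grounded in the text; the others are
# hypothetical stand-ins for better-paid sluggers.
players = [
    # (name, 2015 salary in dollars, home runs)
    ("Bryce Harper", 2_500_000, 42),            # cited figures
    ("Hypothetical slugger A", 20_000_000, 40),
    ("Hypothetical slugger B", 15_000_000, 35),
]

total_salary = sum(salary for _, salary, _ in players)
total_hr = sum(hr for _, _, hr in players)

print(f"Group average: ${total_salary / total_hr:,.0f} per home run")
for name, salary, hr in players:
    print(f"{name}: ${salary / hr:,.0f} per home run")
```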

Not only that, but wildly overpaying also has a secondary benefit, as Walter Benn Michaels has observed: by paying “Talent” vastly more money, not only are the employers actually getting a bargain (because no matter what “Talent” got paid, they simply couldn’t be paid what they were really “worth”), but also “Talent’s” (seemingly vast, but in reality undervalued) salaries enable the system to be performed as “fair”—if you aren’t getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that’s because you’re not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational “institutions ranging from U.I.C. to Harvard” like to depict themselves as “meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth.” Which, as it happens, just might explain why it is that, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the “American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here.” That reality, in turn, is something that American universities, which are supposed to pay attention to events like this, have rapidly turned their heads away from: as Michaels says, “the intellectual left has responded to the increase in economic inequality”—that is, the supposed “Talent Revolution”—“by insisting on the importance of cultural identity.” In other words, “when it comes to class difference” (as Michaels says elsewhere), even though liberal professors “have understood our universities to be part of the solution, they are in fact part of the problem.” Hence, Ryan’s educational accomplishments (remember Ryan? There’s an essay about Ryan) aren’t actually helping him: in reality, they’re precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.

 

Human Events

Opposing the notion of minority rule, [Huger] argued that a majority was less likely to be wrong than a minority, and if this was not so “then republicanism must be a dangerous fallacy, and the sooner we return to the ‘divine rights’ of the kings the better.”
—Manisha Sinha. The Counterrevolution of Slavery. 2001.

Note that agreement [concordantia] is particularly required on matters of faith and the greater the agreement the more infallible the judgment.
—Nicholas of Cusa. Catholic Concordance. 1432.

 

It’s perhaps an irony, though a mild one, that on the weekend of the celebrations of American independence the most notable sporting events are the Tour de France, soccer’s European Championship, and Wimbledon—maybe all the more so now that Great Britain has voted to “Brexit,” i.e., to leave the European Union. A number of observers have explained that vote as at least somewhat analogous to the Donald Trump movement in the United States, in the first place because Trump himself called the “Brexit” decision a “great victory” at a press conference the day after the vote, and a few days later “praised the vote as a decision by British voters to ‘take back control of their economy, politics and borders,’” as The Guardian said Thursday. To the mainstream press, the similarity between the “Brexit” vote and Donald Trump’s candidacy is that—as Emmanuel Macron, France’s thirty-eight-year-old economy minister, said about “Brexit”—both are a conflict between those “content with globalization” and those “who cannot find” themselves within the new order. Both Trump and “Brexiters” are, in other words, depicted as returns of—as Andrew Solomon put it in The New Yorker on Tuesday—“the Luddite spirit that led to the presumed arson at Albion Mills, in 1791, when angry millers attacked the automation that might leave them unemployed.” “Trumpettes” and “Brexiters” are depicted as wholly out of touch and stuck in the past—yet, as a contrast between Wimbledon and the Tour de France may help illuminate, it could also be argued that it is, in fact, precisely those who make sneering references both to Trump and to “Brexiters” who represent, not a smiling future, but instead the return of the ancien régime.

Before he won the Republican nomination outright through the primary process, after all, Trump repeatedly complained that the G.O.P.’s process was “rigged”: that is, it was hopelessly stacked against an outsider candidate. And while a great deal of what Trump has said over the past year has been, at best, ridiculously exaggerated when not simply an outright lie, for that contention Trump has a great deal of evidence: as Josh Barro put it in Business Insider (not exactly a lefty rag) back in April, “the Republican nominating rules are designed to ignore the will of the voters.” Barro cites the example of Colorado’s Republican Party, which decided in 2015 “not to hold any presidential preference vote”—a decision that, as Barro rightly says, “took power away from regular voters and handed it to the sort of activists who would be likely … [to] participat[e] in party conventions.” And Colorado’s G.O.P. was hardly alone in making, quite literally, anti-democratic decisions about the presidential nominating process over the past year: North Dakota also decided against a primary or even a caucus, while Pennsylvania did hold a vote—but the delegates voters chose were uncommitted; i.e., voters did not know to whom those delegates owed allegiance.

Still, as Mother Jones—which is a lefty rag—observed, also back in April, this is an argument that can as easily be worked against Trump as for him: in New York’s primary, for instance, “Kasich and Cruz won 40 percent of the vote but only 4 percent of the delegates,” while on Super Tuesday Trump’s opponents “won 66 percent of the vote but only 57 percent of the delegates.” And so on. Other critics have similarly attacked the details of Trump’s arguments: many, as Mother Jones’ Kevin Drum says, have argued that the details of the Republican nominating process could just as easily be used as evidence for “the way the Republican establishment is so obviously in the bag for Trump.” Those critics do have a point: investigating the whole process is exceedingly difficult because the trees overwhelm any sense of the forest.

Yet such critics often use those details (about which they are right) to make an illicit turn. They have attacked, directly or indirectly, the premise of the point Trump tried to make in an op-ed piece in The Wall Street Journal this spring: that—as Nate Silver paraphrased it on FiveThirtyEight—“the candidate who gets the most votes should be the Republican nominee.” In other words, they swerve from the particulars of this year’s primary process toward attacking the very premises of democratic government itself: by disputing this or that particular they obscure the larger question of whether the will of the voters should be respected. Hence, even if Trump’s whole campaign is, at best, wholly misdirected, the point he is making—a point very similar to the one made by Bernie Sanders’ campaign—is not something to be treated lightly. But that, it seems, is something that elites are, despite their protests, skirting close to doing: which is to say that, despite the accusations that Trump is leading a fascistic movement, it is arguable that it is his supposedly “liberal” opponents who are far closer to authoritarianism than he is, because they have no respect for the sanctity of the ballot. Or, to put it another way, it is Trump’s voters—and, by extension, those who voted for “Brexit”—who have the cosmopolitan view, while it is his opponents who are, in fact, the provincialists.

The point, I think, can be seen by comparing the scoring rules of Wimbledon and the Tour de France. The Tour, as may or may not be widely known, is won by the rider who—as Patrick Redford at Deadspin put it the other day in “The Casual Observer’s Guide to the Tour de France”—has “the lowest time over all 21 stages.” Although the race takes place over nearly the whole nation of France, and several other countries besides, and covers over 2,000 miles from the cobblestone flats of Flanders to the heights of the Alps and down to the streets of Paris, still the basic premise of the race is clear even to the youngest child: ride faster and win. Explaining Wimbledon, however—like explaining the rules of the G.O.P. nominating process (or, for that matter, the Democratic nominating process)—is not so simple.

As I have noted before in this space, the rules of tennis are not like those of cycling—or even of such familiar sports as baseball or football. In baseball and most other sports, including the Tour, the “score is cumulative throughout the contest … and whoever has the most points at the end wins,” as Allen Fox once described the difference between tennis and other games in Tennis magazine. But tennis is not like that: “The basic element of tennis scoring is the point,” as mathematician G. Edgar Parker has noted, “but tennis matches are won by the player who wins two out of three (or three out of five) sets.” Sets are themselves accumulations of games, not points. During each game, points are won and lost until one player has not only won at least four points but also holds a two-point advantage over the other; games go back and forth until one player does have that advantage. Then, at the set level, a player must win at least six games (though professional tournaments vary as to whether that player also needs a two-game advantage or whether a tiebreak decides the set). Finally, a player needs to win at least two, and—as at Wimbledon—sometimes three, sets to take a match.

If the Tour de France were won like Wimbledon is won, in other words, the winner would not be determined by whoever had the lowest overall time: the winner would be, at least at first analysis, whoever won the greatest number of stages. But even that comparison would be too simple: if the Tour winner were determined by the winner of the most stages, that would imply that each stage were equal—and it is certainly not the case that all points, games, or sets in tennis are equal. “If you reach game point and win it,” as Fox writes in Tennis, “you get the entire game while your opponent gets nothing—all of the points he or she won in the game are eliminated.” The points in one game don’t carry over to the next game, and previous games don’t carry over to the next set. That means that some points, some games, and some sets are more important than others: “game point,” “set point,” and “match point” are common tennis terms that mean “the point whose winner may determine the winner of the larger category.” If tennis’ scoring system were applied to the Tour, in other words, the winner of the Tour would not be the overall fastest cyclist, nor even the cyclist who won the most stages, but the cyclist who won certain stages, say—or perhaps even certain moments within stages.
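
To make the contrast concrete, here is a minimal sketch in Python (my own illustration, using deliberately simplified rules: no tiebreaks, best of three sets, and imaginary 50/50 points rather than any real match data) of how a tennis-style scoring system can crown a winner who actually won fewer total points than the loser. Under a Tour-style cumulative count, that result is impossible by definition.

    import random

    def play_game(p):
        """Play one game; return (winner, points_a, points_b).
        p is the probability that player A wins any given point."""
        a = b = 0
        while True:
            if random.random() < p:
                a += 1
            else:
                b += 1
            # first to 4 points, but must lead by 2 (the deuce rule)
            if max(a, b) >= 4 and abs(a - b) >= 2:
                return ('A' if a > b else 'B'), a, b

    def play_set(p):
        """First to 6 games with a 2-game margin (no tiebreak -- a simplification)."""
        games = {'A': 0, 'B': 0}
        pts = {'A': 0, 'B': 0}
        while True:
            winner, a, b = play_game(p)
            games[winner] += 1
            pts['A'] += a
            pts['B'] += b
            if max(games.values()) >= 6 and abs(games['A'] - games['B']) >= 2:
                return ('A' if games['A'] > games['B'] else 'B'), pts

    def play_match(p, sets_to_win=2):
        """Best-of-three match; returns the match winner and total points won."""
        sets = {'A': 0, 'B': 0}
        total = {'A': 0, 'B': 0}
        while max(sets.values()) < sets_to_win:
            w, pts = play_set(p)
            sets[w] += 1
            total['A'] += pts['A']
            total['B'] += pts['B']
        return ('A' if sets['A'] > sets['B'] else 'B'), total

    random.seed(1)
    trials = 10_000
    inversions = 0
    for _ in range(trials):
        winner, total = play_match(p=0.5)
        loser = 'B' if winner == 'A' else 'A'
        if total[winner] < total[loser]:
            inversions += 1

    print(f"Matches won on fewer total points: {inversions / trials:.1%}")

Run it and some fraction of the simulated matches come out “inverted”: the player who won the most points is not the player who won the match. That inversion is the whole difference between Wimbledon and the Tour in miniature.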

Despite all the Sturm und Drang surrounding Donald Trump’s candidacy, then—the outright racism and sexism, the various moronic-seeming remarks concerning American foreign policy, not to mention the insistence that walls are more necessary to the American future than they even are to squash—there is one point about which he, like Bernie Sanders in the Democratic camp, is making cogent sense: the current process for selecting an American president is much more like a tennis match than it is like a bicycle race. After all, as Hendrik Hertzberg of The New Yorker once pointed out, Americans don’t elect their presidents “the same way we elect everybody else—by adding up all the voters’ votes and giving the job to the candidate who gets the most.” Instead, Americans have (as Ed Grabianowski puts it on the HowStuffWorks website) “a whole bunch of separate state elections.” And while both of these comments were directed at the presidential general election, which depends on the Electoral College, they equally, if not more so, apply to the primary process: at least in the general election in November, each state’s rules are more or less the same.

The truth, and hence power, of Trump’s critique of this process can be measured by the vitriol of the response to it. A number of people, on both sides of the political aisle, have attacked Trump (and Sanders) for drawing attention to the fashion in which the American political process works: when Trump pointed out that Colorado had refused to hold a primary, for instance, Reince Priebus, chairman of the Republican National Committee, tweeted (i.e., posted on Twitter, for those of you unfamiliar with, you know, the future) “Nomination process known for a year + beyond. It’s the responsibility of the campaigns to understand it. Complaints now? Give us all a break.” In other words, Priebus was implying that the rules were the same for all candidates, and widely known beforehand—so why the whining? Many on the Democratic side said the same about Sanders: as Albert Hunt put it in the Chicago Tribune back in April, both Trump and Sanders ought to shut up about the process: “Both [campaigns’] charges [about the process] are specious,” because “nobody’s rules have changed since the candidates entered the fray.” But as both Trump and Sanders’ campaigns have rightly pointed out, the rules of a contest do matter beyond the bare fact that they are the same for every candidate: if the Tour de France were conducted under rules similar to tennis’, it seems likely that the race would be won by very different kinds of riders—sprinters, perhaps, who could husband their stamina until just the right moment. It’s very difficult not to think that the criticisms of Trump and Sanders as “whiners” are disingenuous—an obvious attempt to protect a process that transparently benefits insiders.

Trump’s supporters, like Sanders’ and those who voted “Leave” in the “Brexit” referendum, have been labeled as “losers”—and while, to those who consider themselves “winners,” the thoughts of losers are (as the obnoxious phrase has it) like the thoughts of sheep to wolves, it seems indisputably true that the voters behind all three campaigns represent those for whom the global capitalism of the last several decades hasn’t worked so well. As Matt O’Brien noted in The Washington Post a few days ago, “the working class in rich countries have seen their real, or inflation-adjusted, incomes flatline or even fall since the Berlin Wall came down and they were forced to compete with all the Chinese, Indian, and Indonesian workers entering the global economy.” (Real economists would dispute O’Brien’s chronology here: at least in the United States, wages have not risen since the early 1970s, which far predates free trade agreements like the North American Free Trade Agreement signed by Bill Clinton in the 1990s. But O’Brien’s larger argument, as wrongheaded as it is in detail, instructively illustrates the muddleheadedness of the conventional wisdom.) In this fashion, O’Brien writes, “the West’s triumphant globalism” has “fuel[ed] a nationalist backlash”: “In the United States it’s Trump, in France it’s the National Front, in Germany it’s the Alternative for Germany, and, yes, in Britain it’s the Brexiters.” What’s astonishing about this, however, is that—despite not having, as so, so many articles decrying their horribleness have said, a middle-class sense of decorum—all of these movements stand for a principle that, you would think, the “intellectuals” of the world would applaud: the right of the people themselves to determine their own destiny.

It is they, in other words, who literally embody the principle enunciated by the opening words of the United States Constitution, “We the People,” or by the founding document of the French Revolution (which, by the by, began on a tennis court), The Declaration of the Rights of Man and the Citizen, whose first article holds that “Men are born and remain free and equal in rights.” In the world of this Declaration, in short, each person has—like every stage of the Tour de France, and unlike each point played during Wimbledon—precisely the same value. It’s a principle that Americans, especially, ought to remember this weekend of all weekends—a weekend that celebrates another Declaration, one whose most famous line reads “We hold these truths to be self-evident, that all men are created equal.” Americans, in other words, despite the success of individual Americans like John McEnroe or Pete Sampras or Chris Evert, are not tennis players, as Donald Trump (and Bernie Sanders) have rightly pointed out over the past year—tennis being a sport, as one history of the game has put it, “so clearly aligned with both The Church and Aristocracy.” Americans, as the first modern nation in the world, ought instead to be associated with a sport unknown to the ancients and unthinkable without modern technology.

We are bicycle riders.

Men of Skill

I returned, and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill …
—Ecclesiastes 9:11

 

“It was a matter of chance,” says Fitzgerald, at the beginning of Gatsby—surely, even now, the Greatest of the Great American Novels—“that I should have rented a house in one of the strangest communities in North America.” The town Fitzgerald calls West Egg is “strange” first of all because it has developed so fast that the “small eyesore” of a house of Gatsby’s middle-class narrator, Nick Carraway, still sits next-door to the “imitation of some Hotel de Ville” of the fabulously wealthy Gatsby. Fitzgerald is also careful to note that the name West Egg is derived from the shape of the town’s geography: an egg-shaped peninsula that juts out into “the great wet barnyard of Long Island Sound … like the egg in the Columbus story.” This detail, dropped seemingly so carelessly in the first chapter, appears irrelevant—what Columbus story?—but in fact it is a key to Fitzgerald’s design, for “the Columbus story” is just what Gatsby is: a story about just what is—and what is not—chance. Gatsby is, in other words, just like the new book ostensibly by Steve Williams, the New Zealander who once caddied for Tiger Woods back when the globe’s most famous black golfer did not have access to the nuclear codes.

Matters of chance are also just what this blog is about, in case you didn’t know—and apparently, through some oversight of the committee, many of you don’t know. Over the course of this autumn I’ve gotten some feedback from readers, such as there are: one, a professional writer employed by network television on a show too well-known to be mentioned here (whose comments were relayed through a third party), said that I cover too narrow a territory; the other (a member of an historically-significant golf club in Chicago—my readers may be few, and maybe don’t like me much, but they are not inconsequential) implied that I ought to stick more to golf. Juxtaposed, these comments are, to say the least, confusing: the one says I am too narrow and the other too broad. But that is not to say that each does not have a point.

I do, after all, stray from discussing golf (especially lately), yet it is also true that while I have covered a number of different subjects on this blog—from golf architecture to the Civil War to British elections—what I do discuss I view from a particular vantage point, and that view remains the same whatever the topic under discussion. In my defense, then, what I would say is that in all of these cases, whatever the ostensible subject matter, the real object is the question of chance, or luck, in human affairs, whether it be batting averages or stock market returns, elections or breaking par. The fact that I have to say this perhaps demonstrates the validity of the criticisms—which is what brings me around to the latest from Steve Williams.

Williams recently “wrote” the book Out of the Rough (golf books in general are seemingly required to be titled by pun—Williams’ title appears to owe something to John Daly’s 2007 memoir, My Life In And Out Of The Rough). The book has become a topic of conversation because of an excerpt published on the New Zealand website Stuff in which Williams complains that—aside from essentially throwing him under the bus after the notorious Thanksgiving weekend escapade through which the world learned that Tiger’s life was not so buttoned-down as it appeared—Tiger also routinely threw his clubs “in the general direction of the bag, expecting me to go over and pick it up.” Any caddie with experience knows what Williams is talking about.

Likely every golf club has a member (or two) who indulges in temper tantrums and views his (these members are always men) caddie as indistinguishable from his golf bag. (For instance, Medinah’s latest entry in this sweepstakes—the previous occupant having been tossed for his club-throwing boorishness—is a local car dealer who, as he is happy to tell anyone in earshot, worked his way up from Gatz-ian poverty, yet appears incapable of empathy for others in similar situations.) Anyone remotely familiar with caddieing, in other words, knows that throwing clubs, even at the bag, just is not done; that Tiger routinely did so is an argument Williams is aiming at his readers, who, since they are interested enough in golf to buy his book, will take Tiger’s club-throwing ways as a sign of Tiger’s jerkishness. This should not exactly be news to anyone who has ever heard of Tiger Woods.

Yet over at the website Deadspin, Williams has become the object of ridicule by both public commenters and the website’s reporter, Patrick Redford. That is because of a further comment the caddie made about Woods’ club-throwing habit: it made him, Williams wrote, “uneasy,” because “it was like I was his slave.” It’s a line that has irritated the hell out of Deadspin readers: returning to the subject of titles, for instance, one commenter remarked “Too bad Twelve Years A Slave was already taken,” while as another scoffingly put it: “You had to bend over and pick stuff up for your job? What a bummer.—[signed] Actual Slaves.” By comparing himself to a slave, in short, Williams revealed himself as, according to another commenter, “a serious asshole,” and according to yet another “a delusional asshole.” It’s precisely this point that raises the stakes of the dispute into something more than simply a matter of men chasing a ball for money—though it’s worth taking a slight detour regarding money before returning to just what irritates people about Williams—and what that has to do with Gatsby, and Columbus.

In his piece, Redford asserts Williams’ complaint is ridiculous first of all because “picking up clubs and putting them into a golf bag was Williams’ job,” and secondly because the Kiwi caddie “was paid handsomely for his services, earning at least 10% … of Woods’ earnings.” Before getting into the racial politics that is the deeper subject of Redford’s ire, it’s worth asking whether his assertions here are true. And in fact, as someone with some experience as a professional caddie, I can say that the idea that Woods paid Williams 10% of his earnings is ludicrous on its face, because, at best, tour caddies earn 10% on wins, not on week-to-week earnings; even assuming that Woods paid Williams at that rate on wins, which is questionable (it’s more likely that Williams was paid a—generously extreme—salary, not a percentage), that’s nowhere close to an overall 10% figure on earnings. To put the point in Hollywood’s terms, this is like claiming a character actor (whose credit comes after, and not before, the title) could get a percentage of a film’s gross, not net, revenue. In other words, Redford is not very knowledgeable about either golf or economics—a riposte that, to be sure, doesn’t address the substance of his criticism, but is significant in terms of what his piece is really about.
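
For what it’s worth, the arithmetic gap between the two readings is easy to see with invented numbers. The figures below are purely hypothetical (they are not Woods’ or Williams’ actual earnings, and real arrangements vary), but they show how far “10% on wins” sits from “10% of everything.”

    # Purely hypothetical figures, for illustration only -- not Woods' or
    # Williams' actual earnings.
    win_purses     = [1_200_000, 1_350_000, 1_500_000]      # first-place checks in winning weeks
    other_earnings = [450_000, 300_000, 600_000, 250_000]   # checks from non-winning weeks

    total_earnings = sum(win_purses) + sum(other_earnings)

    pay_on_wins_only  = 0.10 * sum(win_purses)      # "10% on wins"
    pay_on_everything = 0.10 * total_earnings       # "10% of all earnings"

    print(f"Player's total earnings:     ${total_earnings:,}")
    print(f"Caddie at 10% of wins only:  ${pay_on_wins_only:,.0f}")
    print(f"Caddie at 10% of everything: ${pay_on_everything:,.0f}")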

The point, I think, can be illustrated by retelling the story Fitzgerald alludes to in Gatsby: the story of Columbus’ egg. First related by the Italian Girolamo Benzoni in his 1565 bestseller, History of the New World, the story goes that after returning from the Americas, Columbus was at a dinner party when someone said “‘Sir Christopher, even if your lordship had not discovered the Indies … [someone] would have started a similar adventure with the same result.’” In reply, Columbus merely asked that an egg be brought to him, then dared anyone present to make the egg stand on its end unaided. No one could. When they finished trying, Columbus “tapped it gently on the table—breaking it slightly—and, with this, the egg stood on its end.” Benzoni draws the point of the story—nearly literal in this case—thusly: “once the feat is done, anyone knows how to do it.” Columbus was saying, in effect, “hate the game, not the player.”

Like the Spanish nobles, in other words, Steve Williams—by disliking Woods’ habit of club-throwing—was asserting his equality with his boss: just because his story did not happen to develop in precisely the same way as Woods’ story does not mean that, in principle, Williams deserves any less dignity than Tiger Woods does. But Woods’ defenders, on Deadspin, read Williams’ remarks differently: they view him as not understanding that his circumstances as a white man were, just as much if not more than Tiger’s, fortunate ones. Williams in other words was one of chance’s winners, even if—in Williams’ mind—he is a kind of Columbus, or Gatsby: a self-made man who has gotten where he is despite, not because of, chance.

It’s just here, the astute may have noticed, that matters of golf and chance intersect with politics in general, and in particular the battle between those who assert, as Penn State English professor Michael Berubé has put the point, “that class oppression [is] the most important game in town,” and those who indulge in supposedly “faddish talk of gender and race and sexuality.” Williams’ memoir, in other words, seems to implicitly take the view that (in Berubé’s words) “the real struggle” has “to do with capital and labor,” while on the other hand his detractors seem to take the position that the whole discussion—even down to the very terms of it—is simply an effect of what Berubé calls “tribal identity.” “Class oppression,” they seem to suggest, is a white people problem.

Yet, as I have said before in different ways in this blog, considerations of the role of chance will not, and cannot, go away merely because we wish them away: while it may be, as those on Berubé’s side of the aisle maintain, that factors we might consider “social” or “cultural” play a larger role than we suspect in the outcomes of our respective voyages to the New World, nevertheless there will also remain some portion of every outcome, however small, that is merely due to chance itself. Or to put it another way—as Berubé does—it is simply true “that anthropogenic climate change is real, that vaccines do not cause autism, that the Earth revolves around the Sun, and that Adam and Eve did not ride dinosaurs to church.” For some time, what’s been called the Cultural Left has been busily asserting otherwise, suggesting that what appear to be matters of chance are, somehow, actually within human control. But what that perspective fails to understand is that such is, after all, just what Columbus—and Jay Gatsby—argued. What the story of Tiger Woods tells us, however, is that time and chance happeneth to us all.

Luck of the Irish

 … I hear him mock
The luck of Caesar, which the gods give men
To excuse their after wrath.
—Antony and Cleopatra, V, ii

Stephanie Wei, the ex-Yalie golf blogger, recently got her press credentials revoked for the crime of filming tour players with a live-stream video app during a non-televised Monday practice round at the WGC-Match Play. According to her own account, the tour said that her “live-streaming of behind-the-scenes content had violated the Tour’s media regulations.” Wei has admitted that the tour did have a right to take away her credentials (it’s in her contract), but she argued in response that her work produced “fresh, interesting and different content,” and thus enhanced the value of the tour’s product. Wei’s argument, however, seductive as it might be, is a great example of someone manipulating what Thomas Frank has called “the titanic symbolic clash of hip and square” for her own ends: Wei wants to be “hip”—but her actual work is not only just as “square” as that of any old-school sportswriter who didn’t see fit to mention that Ty Cobb was one of the meanest and most racist men in America, or that Mickey Mantle was a nihilistic drunk, but in fact might be even more harmful.

As Thomas Frank was writing as long ago as the 1990s, the new digital economy has been sold as an “economic revolution,” celebrating “artists rather than commanders, wearers of ponytails and dreamers of cowboy fantasies who proudly proclaim their ignorance of ‘rep ties.’” In contrast to the old world of “conformity, oppression, bureaucracy, meaninglessness, and the disappearance of individualism”—in a word, golf—the new one would value “creativity” and “flexibility.” It’s the bright new world we live in today.

So inevitable does that narrative appear that of course Deadspin, the hipsters’ ESPN, jumped on it. “It’s not surprising,” proclaimed Samer Kalaf, “that the PGA Tour, a stuffy organization for a stuffy sport, is being truculent over something as inconsequential as this, but that doesn’t make it any less ridiculous.” The part of Judge Smails (Caddyshack’s prototypical stuffed shirt) is played in this drama by the PGA Tour’s Ty Votaw, who told Golf.com that, in the eyes of the tour, what Wei did was “stealing”: on this theory, her footage extracted value from the tour’s product.

Wei herself, to be sure, had a different theory about her actions. Wei wrote that her purpose in transmitting the “raw, alternative footage”—excellent use of buzzwords!—was to “spread fanfare.” In other words, Wei was actually doing the PGA Tour a favor by means of her hip, new kind of journalism. It’s an argument you are probably familiar with, because it is the same one made by venues that don’t pay bands, companies that tell you to take an unpaid internship, and people who tell you to “get on YouTube”: think of the exposure, man!

Yet while Wei pleads her case on the basis of her hepcat, app-using new jive journo-ing, in fact her stuff isn’t much different, if at all, from that of the bad old days of sports reporting, when writers like Grantland Rice were more interested in palling around with the athletes (and, more worryingly, the owners) than in leveling with the audience. The telling detail can be found in her coverage of Rory McIlroy’s win at the very same tournament she got busted at: the Match Play.

The Match Play, obviously, is conducted under match play rules rather than stroke play, which meant that, to win, Rory McIlroy had to win seven consecutive matches. In several of those matches, McIlroy came from behind, which prompted the following from Wei: “What I found the most interesting [what? Wei is missing a noun here] about McIlroy’s victory,” Wei wrote, “and his route to the winner’s circle was the way he found another gear when he was losing late in the match.” This McIlroy is not the same McIlroy as the one “we knew two years ago”—he is “a more mature one that knows how to dig deep.” Wei thus repeats one of the most standard sorts of sportswriting cliché.

What of it? Well, the difficulty with this particular cliché, the reason why it is not “on a par” with the sins of those jolly old-school fellows who didn’t mention that a lot of ball players took speed, or cheated on their wives, or beat them, or that the owners were chiseling everyone for pennies on the dollar while looking the other way as men’s brains were slowly battered into jello—oh wait, that still happens—is that it justifies a species of rhetoric that gets repeated in many other arenas of life. (The most important of them being, of course, the economic.) That is the rhetoric of “toughness,” the “intangibles,” and so on—you know, the ghosts that don’t exist but are awfully handy when justifying why nobody’s getting a raise.

The belief in a player’s “toughness,” or whatever word a given sportswriter can invent—the invention of such terms being largely what sportswriting is about—has been at best questionable, and at worst a knowing cynicism, ever since Gilovich, Vallone, and Tversky’s landmark 1985 paper, “The Hot Hand in Basketball: On the Misperception of Random Sequences.” The “hot hand,” the three showed, is largely a product of cognitive bias: when people are asked, for instance, to judge or produce sequences of coin tosses, they expect even short runs to come out nearly half heads and half tails, and to alternate far more regularly than genuinely random sequences actually do—which is why perfectly ordinary streaks look, to the naked eye, like something more than chance.
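
The finding is easy enough to check for yourself. What follows is a back-of-the-envelope simulation in Python (my own toy illustration, not the method of the 1985 paper) of what fair coin flips actually look like: how often twenty flips split exactly ten and ten, and how often they contain a streak of five or more identical results in a row.

    import random

    random.seed(7)
    N_FLIPS = 20        # length of each simulated sequence
    N_TRIALS = 100_000  # number of simulated sequences

    even_splits = 0
    long_streaks = 0

    for _ in range(N_TRIALS):
        flips = [random.random() < 0.5 for _ in range(N_FLIPS)]

        # Does the sequence come out exactly half heads, half tails?
        if sum(flips) == N_FLIPS // 2:
            even_splits += 1

        # Longest run of identical outcomes in the sequence
        longest = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            longest = max(longest, run)
        if longest >= 5:
            long_streaks += 1

    print(f"Exactly 10 heads in 20 flips: {even_splits / N_TRIALS:.1%}")
    print(f"A streak of 5+ in 20 flips:   {long_streaks / N_TRIALS:.1%}")

The exact even split turns out to be the exception rather than the rule, while a streak of five or more shows up in something like half of all twenty-flip sequences: precisely the sort of run that, in a broadcast booth, gets narrated as a player “finding another gear.”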

So too in sports: writers continually ask their audience to believe that an athlete has “matured,” or “dug deep,” or what have you, when the more likely explanation is just that the athlete’s inherent talent level eventually expressed itself—or, in the case of a losing effort, the other side “got lucky.” Outcomes in sports are determined by skill (and the lack of it), not by “grit” or “will.” Rory won because he is a better golfer than nearly anyone on the planet, and while that skill can be masked by chance, over time it is more likely to expose the other player’s relative lack of skill.

Rory McIlroy won his tournament because he is a good golfer, not because he has some kind of psychological strength the rest of us lack. The fact that Stephanie Wei participates in this age-old sporting charade demonstrates that, for all her pretensions to the contrary, there isn’t a great deal of difference between her “new school” approach and that of her “stuffy” opponents. There is, perhaps, even reason to cheer for the PGA Tour in this dispute: at least they, unlike many in the age of the New Economy, believe people ought to get paid.