Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
Julius Caesar. Act I, Scene 2.

 

One of Ryan’s steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his “lawyer-caddie.” Ostensibly, it’s meant as a kind of joke, although it’s not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—and Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments, Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are “jobs … hard to come by in recent years” for would-be lawyers, but the jobs that do exist come in two flavors—either “something that pays in the modest five figures” (which implies that Ryan might never get out of debt), “or something that pays much better” (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has, in other words, bifurcated: something that, according to a 2010 article called “Talent Grab” by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, “salaries paid to high-level professionals—‘talent’—started to rise.” There are at least two possible explanations for that rise: Gladwell’s is that “members of the professional class” have learned “from members of the working class”—that, in other words, “Talent” has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) the one favored by campus “leftists”: to them, widening inequality might be explained by the same thing that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but in reality, Lord Cornwallis’ forbearance is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called “Mau Mau Uprising”—also known as the “Kenya Emergency”—during the 1950s: although much of the documentation only came out recently, after a long legal battle—which is the only reason we know about it in the detail we now do—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II, many Kenyans, like the peoples of a lot of other European colonies, demanded independence, and like a lot of other European powers, Britain would not give it to them. (A response with which Americans ought to be familiar through our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet unlike the American experience, which largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally-organized troops against each other, what makes the Kenyan episode relevant is that it was fought using the doctrines of counterinsurgency: that is, the “best practices” for the purposes of ending an armed independence movement. In Kenya, this meant “slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes,” as Mark Curtis reported in 2003’s Web of Deceit: Britain’s Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called “enclosed villages.” Those gathered were then “questioned” (i.e., tortured) in order to find those directly involved in the independence movement, and so forth. It’s a catalogue of horror, but what’s more horrifying is that the same methods were being used, at precisely the same moment, half a world away, by more or less the same people: at the same time as the “Kenya Emergency,” the British Empire was also fighting what’s known as the “Malayan Emergency.”

In Malaya, from 1948 to 1960 the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model of counterinsurgency warfare that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also involved in counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it’s important to remember that the very term “concentration camp” was first widely used in English during the Second Boer War of 1899-1902. “Best practice” in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It’s perhaps no coincidence, in other words, that for centuries the basis of elite education in the Western world began with Julius Caesar’s Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: “Gaul is divided into three parts …”), the Gallic Wars could also be described as a kind of “how to” manual regarding “pacification” campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, “urged them to take up arms in order to win liberty for all.” In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that “if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them.” The book also describes torture in several places (though, to be sure, it is usually described as the work of the Gauls, not the Romans). Hostage-taking and torture were, in other words, common stuff in elite European education—the British Army did not suddenly invent these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of “counterinsurgency,” why didn’t the British Army use them during the “American Emergency” of the 1770s?

According to Pando Daily columnist “Gary Brecher” (a pseudonym for John Dolan), perhaps the “British took it very, very easy on us” during the Revolution because Americans “were white, English-speaking Protestants like them.” In fact, that leniency may have been the reason the British lost the war—at least, according to Lieutenant Colonel Paul Montanus’ (U.S.M.C.) paper for the U.S. Army War College, “A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781.” To Montanus, the British Army “needed to execute a textbook pacification program”—instead, the actions that army took “actually inflamed the [populace] and pushed them toward the rebel cause.” Montanus, in other words, essentially asks the question: why didn’t the Royal Navy sail up the Potomac and grab Martha Washington? Brecher’s point is pretty valid: there simply aren’t a lot of reasons to explain just why Lord Cornwallis or the other British commanders didn’t do that other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet, it might be pointed out that just what the British officers saw is still an open question: did they see “cultural Englishmen”—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence as a result simply of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation he describes in the American professions these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that “‘If we can get rid of the system as we now know it, then Bobby Bonds’ son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’” (Even then, when Barry Bonds was around ten years old, people knew he was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds’ earning power: the younger Bonds “ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined” (emp. added). Bonds’ success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-5 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune’s Paul Sullivan. And so on: it doesn’t take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited just to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company”—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, they could reject a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book’s advance. Safire’s attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow’s attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the “acceptability clause” of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—“‘was being fraudulently exercised’” because the reason Morrow wanted to reject Safire’s book wasn’t the one Morrow gave (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow’s calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have since become (supposedly) far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they more or less negotiate with publishing houses as equals, rather than (as before) as, effectively, servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the “Talent” (Gladwell’s word) eventually triumphs over “Capital.”

As I mentioned, for a variety of reasons—in the first place, the justification for the study of “culture,” which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, “the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests”—to a lot of academic leftists that triumph would best be explained by the fact that, say, George Lucas and the head of Twentieth-Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that “Talent” has been rewarded by “Capital” because of a shared “culture” between the two (apparent) antagonists—just as Britain treated its American subjects differently than its Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malayans and the Boers …). (Which was either “culture”—or money.) But there’s a problem with this analysis: it doesn’t particularly explain Ryan’s situation. After all, if this hypothesis were correct, it would appear to imply that—since Ryan shares a great deal “culturally” with the power elite that employs him on the golf course—Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a “cultural” explanation for why some people in a variety of professions have come to be compensated far beyond even their fellows within the same profession. Instead, he prefers to explain what happened beginning in the 1970s as instances of people learning how to wield a tool first widely used by organized labor: the strike.

It’s an explanation that has an initial plausibility about it, in the first place, because of Marvin Miller’s personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players’ union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the “talents” Gladwell writes about made use of either a kind of one-person strike, or the threat of it, to get their way: Lauren Hutton, for example, “decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract”; in 1975 “Hollywood agent Tom Pollock” demanded “that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars”; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire’s book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor’s) was, for instance, telling the publishers that he would withhold his client’s labor—the book—until his demands were met, just as the United Autoworkers shut down General Motors’ Flint, Michigan assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it’s arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie, Rebel Without a Cause, there’s a scene in which James Dean’s character gets involved in what the kids in his town call a “chickie run”—what some Americans know as the game of “Chicken.” In the variant played in the movie, two players each drive a car towards the edge of a cliff—the “winner” of the game is the one who exits his car closest to the edge, thus demonstrating his “courage.” (The other player is, hence, the “chicken,” or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without a Cause thusly describes what happened when Moscow and Washington, D.C. faced each other in October 1962, the incident later called the Cuban Missile Crisis. (“We’re eyeball to eyeball,” then-U.S. Secretary of State Dean Rusk said later about those events, “and I think the other fellow just blinked.”) The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to sequels to Star Wars to Lucas, or Revlon did when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played “Chicken”—and won.

To those committed to a “cultural” explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared “culture” is simply question begging: after all, there have been plenty of people, and unions, that have played games of “Chicken”—and lost. So by itself the game of “Chicken,” it might be argued, explains nothing about what led employers to give way. Yet the “cultural” explanation is itself lacking on at least two counts. In the first place, it doesn’t explain how “rebel” figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries: if it were a matter of “culture,” it’s hard to see how the same technique could work no matter what the underlying business was. And in the second place, if “culture” is the explanation, it’s difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, “culture” is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn’t why a number of people across industries revolted against their masters—just as it likely doesn’t especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually “Capital” gave in to “Talent”: “when Miller pushed back, the owners capitulated,” Gladwell says—so quickly, in fact, that even Miller was surprised. In all of these industries, “Capital” gave in so easily that it’s hard to understand why there was any dispute in the first place.

That’s precisely why the ease of that victory is grounds for being suspicious: surely, if “Capital” really felt threatened by this so-called “talent revolution” it would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II: when Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if he didn’t “want to be carried out in a wooden box.” Surely, although industries like sports or publishing are probably governed by less hard-eyed people than automakers, neither are they so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, nor even the fact that they shared a common language. On the other hand, however, neither does it seem likely that anyone might concede after a minor threat or two. Still, I’d say that thinking about these events using Gladwell’s terms makes a great deal more sense than the “cultural” explanation—not because of the final answer they provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry to explain. This time, that industry is the same one where the “cultural” explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of “Talent” at the expense of “Capital”; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, “the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks.” The “Talent Revolution,” in short, has come to the academy too. Yet, if so, it’s had some curious consequences: if “Talent” were something mysterious, one might suspect that it could come from anywhere—yet academia appears to think that it always comes from the same sources.

As Joel Warner of Slate and Aaron Clauset, an assistant professor of computer science at the University of Colorado wrote in Slate recently, “18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors.” (In fact, when it comes to history, “the top 10 schools produce three times as many future professors as those ranked 11 through 20.”) This, one might say, is curious indeed: why should “Talent” be continually discovered in the same couple of places? It’s as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

“Institutional affiliation,” as Shea puts the point, “has come to function like inherited wealth” within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, and not their comrades at lower levels of organized baseball. “Since 1976,” Patrick Redford noted in Deadspin recently, major leaguers’ “salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent.” Minor league baseball players can, Redford says, “barely earn a living while playing baseball”—it’s not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has a piece for instance describing his playing days in the Jayhawk League in Kansas: “our per diem,” Hines reports, “was a measly 15 dollars per day.”) And while it might be difficult to have much sympathy for minor league baseball players—they get to play baseball!—that’s exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny minor leaguers are subject to the Fair Labor Standards Act: as the author called “the Legal Blitz” wrote for Above the Law: Redline, “Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers.” In other words, because baseball is a “game” and not a business, baseball doesn’t have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale’s Peter Brooks told the New York Times when Yale’s Graduate Employees and Student Organization (GESO) went out on strike in the late 1990s) that adjunct faculty are “among the blessed of the earth,” not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale’s administration argued “that graduate students can’t possibly be workers, since they are admitted (not hired) and receive stipends (not wages).” But if the pastoral rhetoric—a rhetoric that excludes considerations common to other pursuits, like gambling—surrounding both baseball and the academy is cut away, the position of universities is much the same as Major League Baseball’s, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries at least in one respect: as presently constituted, they’re dependent on small numbers of highly productive people—which is just why “Capital” should have tumbled so easily in the way Gladwell described in the 1970s.

Just as scholars are only very rarely productive early in their careers, in other words, so too are baseball players: as Jim Callis noted for Baseball America (as cited in the paper “Initial Public Offerings of Baseball Players” by John D. Burger, Richard D. Grayson, and Stephen Walters), “just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star.” Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, so most academic production is done by a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as “Price’s Law,” “Lotka’s Law,” or “Bradford’s Law.” (Or, there’s the notion described as “Sturgeon’s Law”: “90% of everything is crap.”) Hence, rationally enough, universities (and baseball teams) want to pay only for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters, when with the same money you can buy just one superstar?
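Lotka’s Law, in its classic form, holds that the number of producers responsible for n works falls off roughly as 1/n^2. The sketch below uses a wholly hypothetical field of 10,000 authors and an arbitrary ten-paper cutoff; it is meant only to show the shape such a distribution takes, not to model any real discipline.

```python
# A toy illustration of Lotka's Law: the number of authors producing n papers
# is taken to be proportional to 1 / n**2. The field of 10,000 authors and the
# ten-paper cutoff are hypothetical; only the shape of the curve matters here.

def lotka_counts(total_authors, max_papers=10):
    """Split a hypothetical author population according to the 1/n^2 rule."""
    weights = {n: 1 / n**2 for n in range(1, max_papers + 1)}
    norm = sum(weights.values())
    return {n: round(total_authors * w / norm) for n, w in weights.items()}

counts = lotka_counts(10_000)
total_papers = sum(n * c for n, c in counts.items())
prolific_papers = sum(n * c for n, c in counts.items() if n >= 5)

for n, c in counts.items():
    print(f"{c:>5} authors wrote {n} paper(s) each")
print(f"Share of papers written by authors with 5 or more: {prolific_papers / total_papers:.0%}")
```

Even in this crude toy, a small fraction of the authors ends up producing a disproportionate share of the papers, which is all the argument here requires.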

That might explain just why it is that William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren’t persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interests to reward the “superstars” wildly, because that bought them the most production at the cheapest rate. Why pay a whole roster of guys to hit your home runs, you might say, when, for much less, you can simply buy Barry Bonds? (In 2001, for instance, all major leaguers collectively hit over 5,000 home runs—but Barry Bonds hit 73 of them, more than anyone else has ever hit in a single season.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father’s teammates did for their entire careers): given that Bonds was so much more productive than his peers, it’s arguable that, despite his vast salary, he was actually underpaid.

If you assign a value to each home run, that is, Bonds got a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take the total salaries paid to all the major leaguers in a given season, divide by all the home runs they hit, and so work out the average price per home run. Although I haven’t actually done the calculation, I would bet that the average price is more than the price per home run received by Barry Bonds—which isn’t even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there’s no way they can get 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and the (wildly lower) amounts they pay to the (vastly more numerous) non-superstars.
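The arithmetic behind that bet is easy to sketch. In the Python sketch below, the roster, salaries, and home-run totals are invented placeholders rather than real payroll data; the point is only the shape of the calculation: divide each salary by each home-run total, then compare against the group-wide average.

```python
# A sketch of the price-per-home-run calculation described above. The names,
# salaries, and home-run totals are invented for illustration; they are not
# real payroll figures.

players = [
    # (name, salary in dollars, home runs)
    ("Superstar",       15_000_000, 73),
    ("Veteran A",        8_000_000, 20),
    ("Veteran B",        6_000_000, 15),
    ("Rookie contract",    500_000, 25),
]

total_salary = sum(salary for _, salary, _ in players)
total_hr = sum(hr for _, _, hr in players)
league_average = total_salary / total_hr  # average dollars paid per home run

for name, salary, hr in players:
    price = salary / hr
    verdict = "a bargain" if price < league_average else "above the average rate"
    print(f"{name:<16} ${price:>10,.0f} per home run ({verdict})")

print(f"Average across the whole (hypothetical) group: ${league_average:,.0f} per home run")
```

Run with these made-up numbers, the superstar and the player on a rookie contract both come in under the group-wide rate, while the merely good veterans cost more per home run: the pattern the paragraph above describes.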

Not only that, but wildly overpaying also has a secondary benefit, as Walter Benn Michaels has observed: by paying “Talent” vastly more money, not only are employers actually getting a bargain (because no matter what “Talent” got paid, they simply couldn’t be paid what they were really “worth”), but also “Talent’s” (seemingly vast, but in reality undervalued) salaries enable the system to be performed as “fair”—if you aren’t getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that’s because you’re not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational “institutions ranging from U.I.C. to Harvard” like to depict themselves as “meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth.” Which, as it happens, just might explain why it is that, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the “American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here.” That reality, in turn, is something that American universities, which are supposed to pay attention to developments like this, have rapidly turned their heads away from: as Michaels says, “the intellectual left has responded to the increase in economic inequality”—that is, the supposed “Talent Revolution”—“by insisting on the importance of cultural identity.” In other words, “when it comes to class difference” (as Michaels says elsewhere), even though liberal professors “have understood our universities to be part of the solution, they are in fact part of the problem.” Hence, Ryan’s educational accomplishments (remember Ryan? There’s an essay about Ryan) aren’t actually helping him: in reality, they’re precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.

 


Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.

 

There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view within American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that many believe has long been overdue in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state chooses a slate of “electors” equal in number to that state’s representation in Congress; i.e., the number of representatives each state is entitled to by population, plus its two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives introduced a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president who did not win the most votes. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (perhaps the most exciting game in Series history—the only Game 7 that has ever ended with a walk-off home run) and thusly won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million. But Harrison not only won more states than Cleveland, he also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues that Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that the total number of popular votes does not matter, but rather how those votes are spread around the country.
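The “grouping” logic is easy to make concrete with a toy winner-take-all tally. In the sketch below, the three states, their vote totals, and their elector counts are invented for illustration; they are not the actual 1888 returns.

```python
# A toy winner-take-all tally showing how a candidate can lose the nationwide
# popular vote and still win the electoral vote. States, vote totals, and
# elector counts are invented for illustration, not the 1888 returns.

results = {
    # state: (votes for A, votes for B, electoral votes)
    "Big State":    (490_000, 510_000, 30),   # B wins narrowly
    "Middle State": (295_000, 305_000, 20),   # B wins narrowly
    "Small State":  (400_000, 100_000, 10),   # A wins by a landslide
}

popular = {"A": 0, "B": 0}
electoral = {"A": 0, "B": 0}

for state, (a_votes, b_votes, electors) in results.items():
    popular["A"] += a_votes
    popular["B"] += b_votes
    state_winner = "A" if a_votes > b_votes else "B"
    electoral[state_winner] += electors  # winner-take-all, as in most states

print("Popular vote:  ", popular)    # A leads overall
print("Electoral vote:", electoral)  # B wins where it counts
```

Candidate A runs up a landslide in one small state, much as the Yankees ran up runs in their blowout wins, while Candidate B spreads narrower victories across the states that carry the most electors.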

In this argument, then, games are to states just as runs are to votes. It’s an analogy that has an easy appeal to it: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire analogy) and so that understanding appears to transfer easily to the matter of presidential elections. Yet, while the analogy is clever, most people do not in fact understand the purpose of the World Series: although people think it is the task of the Series to identify the best baseball team in the major leagues, that is not what it is designed to do. Its purpose is not to discover the best team in baseball, but to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams had to play each other. And the Giants saw no reason to do so.

As legendary Giants manager, John McGraw, said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance to win a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” But just how much luck is involved is something that the average fan hasn’t considered—though former Caltech physicist Leonard Mlodinow has. In Mlodinow’s book, The Drunkard’s Walk: How Randomness Rules Our Lives, the scientist writes that—just by virtue of doing the math—it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: let’s say that, for every game, we roll a one-hundred-sided die to determine whether the team with the 55 percent edge wins or not. If we do that four times, there’s a good chance that the inferior team is still in the series: that is, that the superior team has not won all the games. In fact, there’s a real possibility that the inferior team might turn the tables, and instead sweep the superior team. Seven games, in short, is simply not enough to demonstrate conclusively that one team is better than another.
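Mlodinow’s figures can be checked directly with a little binomial arithmetic. The sketch below assumes, as he does, independent games and a fixed per-game win probability for the stronger team; it is a back-of-the-envelope check, not a model of real baseball.

```python
# Probability that the weaker team wins a best-of-seven series, assuming each
# game is independent and the stronger team wins any given game with
# probability p_strong. If all seven games were hypothetically played, the
# series winner is whichever team would take four or more of them.
from math import comb

def upset_probability(p_strong, series_length=7):
    """Chance that the weaker team takes the series."""
    needed = series_length // 2 + 1  # wins required to clinch
    p_weak = 1 - p_strong
    return sum(
        comb(series_length, wins) * p_weak**wins * p_strong**(series_length - wins)
        for wins in range(needed, series_length + 1)
    )

print(round(upset_probability(0.55), 2))    # ~0.39: about four upsets in ten series
print(round(upset_probability(2 / 3), 2))   # ~0.17: about one upset in five
```

The two printed values line up with the figures quoted from The Drunkard’s Walk above.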

In fact, in order to eliminate randomness as much as possible—that is, make it as likely as possible for the better team to win—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even in a case where one team has a two-thirds likelihood of winning a game, it would still take 23 games to make the chance of the weaker team winning the series less than 5 percent—and even then, there would remain a chance that the weaker team could win the series. Mathematically then, winning a seven-game series is meaningless—there have been just too few games to eliminate the potential for a lesser team to beat a better team.

Just how mathematically meaningless a seven-game series is can be demonstrated by the case of two closely matched teams: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on to say, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emp. added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which, as a matter of historical fact, is a point that true baseball professionals have always acknowledged: the World Series is not a competition, but an exhibition.
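The same tail sum can be turned around to ask how long a series would have to be before the upset chance drops to five percent or less. This is a sketch under the same independence assumption; the exact lengths it prints may differ slightly from Mlodinow’s published figures depending on how the threshold is computed.

```python
# Find the shortest best-of-n series (n odd) in which a weaker team that wins
# each game with probability p_weak takes the series no more than 5% of the
# time. Assumes independent games with a fixed per-game probability.
from math import comb

def upset_probability(p_weak, n_games):
    needed = n_games // 2 + 1  # wins required to take the series
    p_strong = 1 - p_weak
    return sum(
        comb(n_games, w) * p_weak**w * p_strong**(n_games - w)
        for w in range(needed, n_games + 1)
    )

def shortest_decisive_series(p_weak, threshold=0.05):
    n = 1
    while upset_probability(p_weak, n) > threshold:
        n += 2  # series lengths are odd
    return n

print(shortest_decisive_series(1 / 3))   # the lopsided two-thirds case: 23 games
print(shortest_decisive_series(0.45))    # the 55-45 case: hundreds of games
```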

What the New York Giants were saying in 1904, then—and Mlodinow more recently—is that establishing the real worth of something requires a lot of trials: many, many different repetitions. That’s something that all of us ought to know from experience: to learn anything, for instance, requires a lot of practice. (Even if the famous “10,000 hour rule” New Yorker writer Malcolm Gladwell concocted for his book Outliers: The Story of Success has been complicated by the researchers whose original work Gladwell drew upon.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, to use the more natural language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite reason from the one Natapoff intended. Namely, if the Electoral College is like the World Series, and the World Series is not designed to find the best team in baseball but is instead merely an exhibition, then that implies that the Electoral College is not a serious attempt to find the best president—because what the Law would appear to advise is that, in order to obtain a more reliable result, it is better to count more votes.
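A minimal simulation makes the Law’s pull visible: a fair coin, an expected value of one half, and a running frequency that wanders less and less as the flips pile up.

```python
# The Law of Large Numbers in miniature: as the number of fair-coin flips
# grows, the observed frequency of heads settles toward the true probability
# of 0.5. The seed is fixed only so the illustration is repeatable.
import random

random.seed(42)

heads = 0
flips = 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while flips < checkpoint:
        heads += random.random() < 0.5  # one fair coin flip
        flips += 1
    print(f"after {flips:>7,} flips: frequency of heads = {heads / flips:.4f}")
```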

Yet the currently fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or just things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) a vote is cast, rather than whether it was cast? To many, if not most, literary-type intellectuals, the answer clearly favors the former at the expense of the latter—but as any statistician will tell you, it’s possible for any run of luck to continue for quite a bit longer than the average person might expect. (That’s one reason why it takes a series of at least 23 games, and far more for closely matched teams, to squeeze the randomness out of a baseball matchup.) Even so, it remains difficult to believe—as it would seem that many today, both within and without the academy, do—that the umpire can continue to call every pitch a strike.

 

Par For The Course: Memorial Day, 2016

 

For you took what’s before me and what’s behind me
You took east and west when you would not mind me
Sun, moon and stars from me you have taken
And Christ likewise if I’m not mistaken.
“Dónal Óg.” Traditional.

 

None of us were sure. After two very good shots—a drive off the tee, and a three- or four-wood second—both ladies found themselves short of the green by more than forty yards. Two chips later, neither of which was close, both had made fives—scores that were either pars or bogeys. But we did not know which; that is, we didn’t know what par was on the hole, the eighth on Medinah’s Course One. That mattered because, while in normal play the difference would hardly have been worth noticing, our foursome was playing as part of a larger tournament, and the tournament was scored in what is called a “modified Stableford” format. In “modified Stableford,” points are assigned to each hole’s score relative to par: instead of adding up total strokes (as in stroke play) or holes won (as in match play), players receive zero points for a par but lose a point for a bogey. To know what the ladies had scored, then, it was important to know what the par was—and since Course One had only just reopened last year after a renovation, none of us knew if the par for ladies had changed with it. The tournament scorecard was no help—we needed a regular scorecard to check against, which we could only get when we returned towards the clubhouse after the ninth hole. When we did, we learned what we needed to know—and I learned just how much today’s women golfers still have in common with both French women, circa 1919, and the nation of France, today.
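Written out as a scoring rule, the stakes of that question become obvious. The tournament’s full point table wasn’t in front of us, so in the sketch below every value other than par (zero points) and bogey (minus one) is an illustrative assumption.

```python
# A sketch of modified-Stableford scoring for a single hole. Only two of these
# values come from the text (par = 0, bogey = -1); the rest are assumptions,
# since every tournament publishes its own point table.
POINTS = {
    -2: 5,   # eagle or better (assumed)
    -1: 2,   # birdie (assumed)
     0: 0,   # par (from the text)
     1: -1,  # bogey (from the text)
     2: -3,  # double bogey or worse (assumed)
}

def stableford_points(strokes, par):
    """Points earned on one hole, given strokes taken and the hole's par."""
    diff = max(-2, min(2, strokes - par))  # clamp to the table's range
    return POINTS[diff]

# The situation on Medinah's eighth: both ladies made five.
print(stableford_points(5, par=4))  # scored against a par four: -1 (a bogey)
print(stableford_points(5, par=5))  # scored against a par five:  0 (a par)
```

A one-stroke difference in the printed par is thus a point-per-player swing on every playing of the hole, which is exactly why the scorecard mattered.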

The eighth hole on Medinah Country Club’s Course One is, for men, a very long par four, measuring 461 yards from the back tee. For the most part it is straight, though with a slight curve from left to right along its length. Along with length, the hole is also defended with a devilish green that is highly sloped from the high side on the left to a low side on the right. It is an extremely difficult hole, ranked as the fifth-hardest hole on the golf course. And though the ladies do not play from the back tees, the eighth is still nearly 400 yards for them, which even for very good women players is quite long; it is not unusual to find ladies’ par fives at that distance. Hence, we had good reason to at least wish to question whether the tournament scorecard was printed in error.

Returning to the clubhouse, we went by the first tee where all the scorecards for Course One are kept. Picking one up, I quickly scanned it and found that, indeed, the par for the eighth hole was four for the ladies, as the tournament scorecard said. At that instant, one of the assistant pros happened by, and I asked him about it: “Well,” he said, “if the par’s the same for everyone it hardly matters—par’s just a number, anyway.” In a sense, of course, he was right: par really is, in one way, completely arbitrary. A golfer scores what she scores: whether that is “par” or not really makes little difference—par is just a name, it might be said. Except that in this case the name of the thing really did matter, because it had a direct effect on the scoring for the tournament as a whole … I could feel my brain slowly sinking into a mental abyss, as I tried to work out the possible consequences of what might appear to be merely an inconsequential name change.

What I immediately realized, at least, was that making the hole a par four greatly amplified the efforts of a long-hitting woman: being able to reach that green in two gave such a woman an even bigger advantage over her fellow competitors than she already had simply by hitting the ball farther. Making the hole a par four made her an electric guitar against everyone else’s acoustic: she would just drown everyone out. Furthermore, that advantage would multiply the more rounds the tournament played: the interest, in other words, would compound.

It was in that sense that, while researching another topic, I became interested in the fate of Frenchwomen in the year 1919—the year after the end of the Great War, or World War I. That war, as everyone knows, virtually wiped out an entire generation of young men: Britain, for example, lost nearly a million young men in battle, while France lost nearly one and a half million. (Germany, by comparison, lost nearly two million.) Yet, although the point occasionally comes up during Veterans Day observances in America—what the Europeans call “Armistice Day” is, with good reason, a much bigger deal—or during classroom discussions of the writers of the 1920s (like Fitzgerald or Hemingway, the “Lost Generation”), the fact is treated sentimentally: we are supposed to be sad about those many, many deaths. But what we do not do is think about the long-term effect of losing so many young men (and, though less so, women) in their youth.

We do not, that is, consider the fact that, as writer Fraser Cameron observed in 2014, in “1919, the year after the war was over in France, there were 15 women for every man between the ages of 18 and 30.” We do not think about, as Cameron continues, “all of the lost potential, all of the writers, artists, teachers, inventors, and leaders that were killed.” Cameron neglects to consider all of the janitors that were killed also, but his larger point is solid: the fact of the Great War has had a measurable effect on France’s destiny as a nation, because all of those missing young men would have contributed to France’s total productivity, would have paid taxes, would have paid into pensions—and perhaps above all, would have had babies who would have done the same. And those missing French (and British and German and Russian and Italian …) babies still matter—and probably will forever.

“In the past two decades,” says Malcolm Gladwell of the New Yorker, in an article from a few years ago entitled, “The Risk Pool,” “Ireland has gone from being one of the most economically backward countries in Western Europe to being one of the strongest: its growth rate has been roughly double that of the rest of Europe.” Many explanations have been advanced for that growth, Gladwell says—but the most convincing explanation, he also says, may have been advanced by two Harvard economists, David Bloom and David Canning: “In 1979, restrictions on contraception that had been in place since Ireland’s founding”—itself a consequence, by the bye, of the millions of deaths on the Western Front—“were lifted, and the birth rate began to fall.” What had been an average of nearly four children per woman in the late 1960s became, by the mid-nineteen-nineties, less than two. And so Ireland, in those years, “was suddenly free of the enormous social cost of supporting and educating and caring for a large dependent population”—which, as it happens, coincides with the years when the Irish economy exploded. Bloom and Canning argue that this is not a coincidence.

It might then be thought, were you to take a somewhat dark view, that France in 1919 was thusly handed a kind of blessing: the French children born in 1919 would be analogous to the small Irish cohorts born after 1979, easily supported by the rest of the nation. But actually, of course, the situation is rather the opposite: when the French children of 1919 came of age, there were many fewer of them to support the rest of the nation—and, as we know, Frenchmen born in 1919 were doubly the victims of fate: the year they turned twenty was the year Hitler invaded Poland. Hence, the losses first realized during the Great War were doubled down upon—not only was the 1919 generation many times smaller than it would have been had there been no general European war in the first decades of the twentieth century, but now there would be many fewer of its grandchildren, too. And so it went: if you are ever at a loss for something to do, there is always the exercise of thinking about all of those millions of missing French (and Italian and English and Russian …) people down through the decades, and the consequences of their loss.

That’s an exercise that, for the most part, people do not do: although nearly everyone in virtually every nation on earth memorializes its war dead on some holiday or another, it’s very difficult to think of the ramifying, compounding costs of those dead. In that sense, the dead of war are a kind of “hidden” cost, for although they are remembered on each nation’s version of Memorial Day or Armistice Day or Veterans Day, they are remembered sentimentally, emotionally. But while that is, to be sure, an important ritual to be performed—because rituals are performed for the living, not the dead—it seems to me also important to remember just what it is that wars really mean: they are a kind of tax on the living and on the future, a tax that represents choices that can never be made and roads that may never be traveled. The dead are a debt that can never be repaid and whose effects become greater, rather than less, with time—a compound interest of horror that goes on working like one of Blake’s “dark satanic mills” through all time.

Hidden costs, of course, are all around us, all of the time; very few of us have the luxury of wondering about how far a bullet fired during, say, the summer of 1916 or the winter of 1863 can really travel. For all of the bullets that ever found their mark, fired in all of the wars that were ever fought, are, and always will be, still in flight, onwards through the generations. Which, come to think of it, may have been what James Joyce meant at the end of what has been called “the finest short story in the English language”—a story entitled, simply, “The Dead.” It’s a story that, like the bullets of the Great War, still travels forward through history; it ends as the story’s hero, Gabriel Conroy, stands at the window during a winter’s night, having just heard from his wife—for the first time ever—the story of her youthful romance with a local boy, Michael Furey, long before she ever met Gabriel. At the window, he considers how Furey’s early death of tuberculosis affected his wife’s life, and thusly his own: “His soul swooned slowly as he heard the snow falling faintly through the universe and, faintly falling, like the descent of their last end, upon all the living and the dead.” As Joyce saw, all the snowflakes are still falling, all the bullets are still flying, and we will never, ever, really know what par is.

Closing With God in the City of Brotherly Love, or, How To Get A Head on the Pennsylvania Pike

However do senators get so close to God?
How is it that front office men never conspire?
—Nelson Algren.
“The Silver-Colored Yesterday.”
     Chicago: City on the Make (1951).

Sam Hinkie, the general manager of the Philadelphia 76ers—a basketball team in the National Basketball Association—resigned from his position this past week, citing the fact that he “no longer [had] the confidence” that he could “make good decisions on behalf of investors in the Sixers.” As writers from ESPN and many other outlets have observed, because the ownership of the Sixers had recently given him supervisors (the father-son duo of the Colangelos: Jerry and the other one), Hinkie had effectively been given a vote of no confidence. But the owners’ disapproval appears to have been more than simply a rejection of Hinkie: it appears also to have been a rejection of the theory by which Hinkie conducted operations—a theory that Hinkie called “the Process.” It’s the destiny of this theory that’s concerning: the fate of the man Hinkie is irrelevant, but the fate of his idea is one that concerns all Americans—because the theory of “the Process” is also the theory of America. At least, according to one (former) historian.

To get from basketball to the fate of nations might appear quite a leap, of course—but that “the Process” applies to more than basketball can be demonstrated first by showing that it is (or perhaps was) also more or less Tiger Woods’ theory about golf. As Tiger said, for example, in the press conferences following his wins at both the 2000 PGA Championship and the 2008 U.S. Open, the key to winning majors is “hanging around.” The “thing is to keep putting myself [in contention],” the golfer said in 2012 (as Deron Snyder reported for The Root that year); in 2000, after he won the PGA Championship, he said that “in a major championship you just need to hang around,” and that “[i]n majors, if you kind of hang around, usually good things happen.” Eight years later, after the 2008 U.S. Open (which he famously won on a broken leg), Woods said that “I was just hanging around, hanging around.” That is, Woods seems to have seen his task as a golfer as giving himself the chance to win by staying near the lead—thereby giving destiny, or luck, or chance, the opportunity to put him over the top.

That’s more or less the philosophy that guided Hinkie’s tenure at the head of the 76ers, though to understand it fully requires first understanding the intricacies of one of the cornerstones of life in the NBA: the annual player draft. Like many sports leagues, the NBA conducts a draft of new players each year, and also like many other leagues, teams select new players roughly in inverse order of their success in the previous season: i.e., the prior season’s league champion picks last. Meanwhile, teams that missed the last season’s playoffs participate in what’s become known as the “draft lottery”: all the teams that missed the playoffs are entered into the lottery, with their chances of receiving the first pick in the draft weighted by their win-loss records. (In other words, the worst team in the league has the highest chance of getting the first pick in the next season’s draft—but getting that pick is not guaranteed.) Hinkie’s “Process” was designed to take this reality of NBA life into account, along with the fact that, in today’s NBA, championships are won by “superstar” players: players, that is, who are selected at the very top—the “lottery” picks—of the draft.
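Purely as an illustration of how such a weighted lottery works (the team records and the weighting rule below are invented, and the real NBA lottery draws from a fixed, published table of odds rather than a formula), a minimal sketch in Python might look like this:

```python
import random

def run_draft_lottery(records, seed=None):
    """Return the team that wins the first pick in a weighted lottery.

    `records` maps team name to regular-season wins; fewer wins means more
    "ping-pong balls." The weighting rule here is invented for illustration;
    the actual NBA lottery uses a fixed table of published odds.
    """
    rng = random.Random(seed)
    teams = list(records)
    max_wins = max(records.values())
    weights = [max_wins - records[t] + 1 for t in teams]  # worse record, more balls
    return rng.choices(teams, weights=weights, k=1)[0]

# Hypothetical non-playoff teams and win totals.
field = {"PHI": 10, "LAL": 17, "BKN": 21}
first_picks = {team: 0 for team in field}
for trial in range(100_000):
    first_picks[run_draft_lottery(field, seed=trial)] += 1
print(first_picks)  # the worst team wins the pick most often, but is never guaranteed it
```

The only point of the sketch is that losing more games raises the odds of landing the first pick without ever guaranteeing it, which is the gamble at the heart of “the Process.”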

Although in other sports, like for instance the National Football League, very good players can fall to very low rounds of their drafts, that is not the case in the contemporary NBA. While Tom Brady of the NFL’s New England Patriots was famously not drafted until the sixth round of the 2000 draft, and has since emerged as one of that league’s best players, stories like that simply do not happen in the NBA. As a study by FiveThirtyEight’s Ian Levy has shown, for example, in the NBA “the best teams are indeed almost always driven by the best players”—an idea that seems confirmed by the fact that the NBA is, as several studies have found, the easiest American professional sports league to bet on. (As Noah Davis and Michael Lopez observed in 2015, also in FiveThirtyEight, in “hockey and baseball, even the worst teams are generally given a 1 in 4 chance of beating the best teams”—a figure nowhere near the comparable numbers in pro basketball.) In other words, in the NBA the favorite nearly always wins, a fact that would appear to correlate with the idea that NBA games are nearly always decided by the sheer talent of the players rather than by, say, such notions as “team chemistry” or the abilities of a given coach.

With those facts in mind, then, the only possible path to an NBA championship—a goal that Hinkie repeatedly said was his—is to sign a transcendent talent to a team’s roster, and since (as experience has shown) it is tremendously difficult to sign an already-established superstar away from another team, the only real path most teams have to such a talent is through the draft. But since such hugely capable players are usually available only with the first pick (though sometimes the second, and very occasionally the third—Michael Jordan, often thought of as the best player in the history of the NBA, was drafted third in 1984), that implies that the only means to a championship is first to lose a lot of games—and thus become eligible for a “lottery” draft pick. This was Sam Hinkie’s “Process”—a theory that sounded so odd to some that it was openly mocked: the website Deadspin, for instance, called Hinkie’s team a “Godless Abomination” in a headline.

Although surely the term was meant comedically, Deadspin’s headline writer in fact happened to hit upon something central to both Woods’ and Hinkie’s philosophies: each seems entirely consonant with the great American saying, attributed to the obscure writer Coleman Cox, that “I am a great believer in Luck: the harder I work, the more of it I seem to have.” Or, to put it another way, “you make your own luck.” As can be seen, all of these notions leave the idea of God or any other supernatural agency to the side: God might exist, they imply, but it’s best to operate as if he doesn’t—a sentiment that might appear contrary to the “family values” often espoused by Republican politicians, as it seems merely a step away from disbelieving in God at all. But in fact, according to Newt Gingrich, the arch-conservative former Speaker of the House and sometime presidential candidate, this philosophy simply was the idea of the United States—at least until the 1960s came and wrecked everything. In reality, however, Gingrich’s idea that until the 1960s the United States was governed by the rules “don’t work, don’t eat” and “your salvation is spiritual” is not only entirely compatible with the philosophies of both Hinkie and Woods—it is entirely opposed to the philosophy embodied by the United States Constitution.

To see that point requires seeing the difference between Philadelphia’s “76ers” and the Philadelphians who matter to Americans most today: the “87ers.” Whereas the major document produced in Philadelphia in 1776, in other words, held that “all men are created equal”—a statement that is perhaps most profitably read as a statement about probability, not in the sentimental terms with which it is often read—the major document produced in the same city over a decade later in 1787 is, as Seth Ackerman of the tiny journal Jacobin has pointed out, “a charter for plutocracy.” That is, whereas the cornerstone of the Declaration of Independence appears to be a promise in favor of the well-known principle of “one man, one vote,” the government constructed by the Constitution appears to have been designed according to an opposing principle: in the United States Senate, for instance, a single senator can hold up a bill the rest of the country demands, and “[w]hereas France can change its constitution anytime with a three-fifths vote of its Congress and Britain could recently mandate a referendum on instant runoff voting by a simple parliamentary majority,” as Ackerman says, “the U.S. Constitution requires the consent of no less than thirty-nine different legislatures comprising roughly seventy-eight separately elected chambers” [original emp.]. Pretty obviously, if it takes that much work to change the laws, that will clearly advantage those with pockets deep enough to extend to nearly every corner of the nation—a notion that cruelly ridicules the idea, first advanced in Philadelphia in 1776 and now espoused by Gingrich, Woods, and Hinkie, that with enough hard work “luck” will even out.

Current data, in fact, appear to support Ackerman’s contentions: as Edward Wolff, an economist at New York University and the author of Top Heavy: The Increasing Inequality of Wealth in America and What Can Be Done About It (a book published in 1996), noted recently at The Atlantic’s website, “average real wages peaked in 1973.” “Median net worth,” Wolff goes on to report, “plummeted by 44 percent between 2007 and 2013 for middle income families, 61 percent for lower middle income families, and by 70 percent for low income families.” This is a pattern, as many social scientists have reported, consistent with the extreme inequality found in very poor nations: nations usually also notable for their deviation from the “one man, one vote” principle. (Cf. the history of contemporary Russia, and then work backwards.) With that in mind, then, a good start for the United States might be for the entire U.S. Senate to resign—on the grounds that its members can no longer make good decisions on behalf of the investors.

All The Single Ladies

They must  … [renounce]
The faith they have in tennis, and tall stockings …
And understand again like honest men …
William Shakespeare. The History of Henry VIII (1612). 

The latest news from the world of tennis is about the recent remarks of one Raymond Moore, the 69-year-old CEO of the Indian Wells Tennis Garden. Indian Wells was the site of last week’s professional tournament, which was won (on the women’s side) by, as it happens, the chief example of tennis’ commitment to “diversity,” Serena Williams—an irony that merely highlighted Moore’s remark, prior to the final women’s match, that in his next life “I want to be someone in the WTA [Women’s Tennis Association] because they ride on the coattails of men.” Naturally, the national press conducted the usual harrumphing about Moore’s obvious affliction with old white guyness—another in the series of episodes by which, as Professor Walter Benn Michaels of the University of Illinois at Chicago might put it, the difference between the pay of men and the pay of women is fiddled with and weighed in the balance, while it goes unmentioned that the levels of pay under examination are simply light-years away from those of the average person; as Michaels put the point in The Trouble With Diversity, “making sure the women of the upper class are paid just as well as the men of the upper class” is not really a bold strike for the future of humanity. But that’s why the most interesting account of the episode, I think, can be found at The Atlantic, where—although Adam Chandler in no way refers to the fact that this is a dispute between different varieties of multimillionaires—he does have the presence of mind to refer to a story from last fall by Carl Bialik at FiveThirtyEight. That story is about the differences in structure between the men’s game and the women’s game—a story that at least makes a gesture in the direction Michaels would have us go because, unlike so many current forms of discussion in the academy and elsewhere, Bialik understands the significance of math and probability.

In his piece, “Serena Williams Is Getting A Raw Deal By Only Playing Best Of Three Sets,” Bialik takes off from a paper by “a statistician at the RAND Corp.” named Stephanie Kovalchik, and begins by observing the key structural difference between the men’s game in tennis and the women’s: the fact that, in Grand Slam events (the four biggest tournaments of the year: the Australian Open, the French Open, Wimbledon, and the U.S. Open), the men play best-of-five sets per match instead of the ladies’ best-of-three. It’s a difference that might appear trivial—after all, Bialik notes, ladies’ matches routinely take longer than the men’s matches at Grand Slam events despite the fact that men must win three sets to the ladies’ two to complete a match. Yet in reality it is a difference that makes all the difference—it goes a long way towards explaining just how the CEO of Indian Wells could view women’s tennis as “riding on the men’s coattails.”

Moore’s comments, that is, followed statements made over the past year by two high-ranking men’s players, Novak Djokovic—ranked first in the world and the winner at Indian Wells—and the Frenchman Jo-Wilfried Tsonga, statements that implied that, while professional tennis has awarded equal pay to men and women players for decades, male players ought to be paid more because, as The Atlantic’s Chandler wrote, “they attract high[er] viewership.” In turn, the reason often advanced for that higher viewership on the men’s side is that women’s tournaments have a higher volatility: that is, it is more likely for top-seeded men to reach the final matches of a tournament than it is for top-seeded women, an intuition that has been borne out by research. For example, Bialik notes that in last year’s U.S. Open, “only three of the top 10 women’s seeds reached the third round, while nine of the top 10 men’s seeds did.”

In tennis, it seems, fans would rather root for players they already know than for players they don’t—a phenomenon also witnessed in golf, where for many years television networks hoped Tiger Woods would be in the lead, or close to it, on the weekend because of the effect on ratings. Tsonga, like Djokovic, has publicly either stated outright or implied that the higher volatility of the women’s game is due to women’s “hormones”; what the paper by RAND’s Kovalchik suggests, on the contrary, is that the higher volatility of the women’s game has nothing to do with women’s bodies and everything to do with the women’s game’s three-set structure. In other words, the familiar “nature vs. culture” dichotomy so beloved by many in the American humanist academy has little ability to explain what’s going on here.

What does have explanatory power, Bialik says, is the fact that women play best-of-three sets and men play best-of-five in the Grand Slam tournaments. Because those are the only tournaments with that difference—in all other tournaments women and men play the same number of sets—the two sets of data can be compared, which is what Kovalchik has done. What she found, Bialik reports, is that “women are no less consistent than men when competing under the same format.” In other words, while upsets—matches in which the lower-seeded player bests the higher-seeded player—happen at about the same rate for men and for women in tournaments conducted under a best-of-three format, in Grand Slam tournaments (the ones with the highest interest for tennis fans) “upsets are much more common for women than for men.” Obviously, that suggests that the reason those upsets are more common is the format, not “hormones.” “Generally in sports,” as Bialik says, “the longer the contest, the greater the chance the favorite prevails”—which is to say that the relevant distinction here isn’t the one between “nature” and “culture” so precious to academics in the humanities, and that the relevant “natural law” isn’t the biological difference between men and women. Instead, what’s important about Kovalchik’s research is what scientists and mathematicians call “the law of large numbers.”

According to Wikipedia—itself perhaps an example of the very phenomenon in question—the “law of large numbers” describes how “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” As an example, the article observes that “while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins.” In short, favorites tend to prevail in Grand Slam contests—but only male ones—simply because the best-of-five format gives those favorites more opportunities to overcome chance than they get anywhere else.
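To see the law at work on the tennis question, consider a purely hypothetical favorite who wins any given set against a given opponent with probability 0.6. The sketch below is a worked example of my own rather than Kovalchik’s calculation; it computes her chance of winning a best-of-three match versus a best-of-five one.

```python
from math import comb

def p_match_win(p_set: float, best_of: int) -> float:
    """Probability the favorite wins a best-of-N match, assuming a fixed,
    independent probability p_set of winning each set.

    Winning the match is equivalent to winning a majority of N sets
    if all N sets were played, so the binomial tail gives the answer.
    """
    need = best_of // 2 + 1
    return sum(comb(best_of, k) * p_set**k * (1 - p_set)**(best_of - k)
               for k in range(need, best_of + 1))

p = 0.6  # illustrative per-set edge for the favorite
print(round(p_match_win(p, 3), 3))  # about 0.648 in a best-of-three
print(round(p_match_win(p, 5), 3))  # about 0.683 in a best-of-five
```

The same per-set edge yields roughly a 65 percent chance of winning a best-of-three match and about 68 percent in a best-of-five; lengthening the match changes no one’s ability, it only gives the favorite’s edge more chances to tell, which is just what Bialik and Kovalchik are describing.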

What that would imply, in turn, is that the suggestion that men ought to get more money because their matches receive higher ratings is an argument built on sand: it isn’t anything about women as women that makes their matches receive lower ratings, but simply the fact that the format of their game is different. On the other hand, however, it also suggests that Walter Benn Michaels may be on to something when he criticizes his colleagues in the American humanistic academy: by framing conversations about social justice in terms of the difference between something called “culture” and something called “nature,” American humanist academics may actually be retarding, not forwarding, social justice—because that very distinction is simply not relevant to at least some discussions about justice. What that in turn might mean is that the game of American political discussion is about to change radically—with a suddenness, and a severity, that may come as a surprise much greater than the one that greeted Raymond Moore.

When Time Is Broke

… how sour sweet music is,
When time is broke and no proportion kept!
William Shakespeare.
     History of Richard II (c. 1595)

 
The phrase “paradox of proportionality,” says Tom Doak—author of Streamsong Red and the Sheep Ranch and Cape Kidnappers and Stone Eagle and perhaps a dozen other highly entertaining golf courses—“was just something I said off the top of my head a few years ago in a discussion.” He invented the phrase in response to the common complaint of golfers that some architectural feature is “unfair” for one reason or another. What if, Doak asked in turn, a perfectly “fair” golf course could be invented? “What you would get,” he surmised, “is a course on the straight and narrow … for every yard offline you hit a particular shot, you’d get a proportionately harder next shot.” That is, a shot that is twenty yards offline, say, would be punished twice as much as one that is ten yards offline. Yet, while the phrase may have been invented on the spur of the moment, it also reflects a conversation about golf architecture that goes back at least to the 1920s. As Doak’s phrase and my emphasis make clear, it is the word proportionately that’s of crucial significance: around this word, as I’ll show, universes spin.
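For the sake of concreteness, here is a minimal sketch, entirely my own invention rather than anything Doak has written, of what a strictly “proportionate” penalty would look like: the cost of a miss scales linearly with how far offline the shot is.

```python
def proportionate_penalty(yards_offline: float, cost_per_yard: float = 0.05) -> float:
    """Added difficulty (in strokes) of the next shot under a strictly
    'proportionate' design: the penalty scales linearly with the miss.
    The constant is arbitrary; only the linearity matters."""
    return cost_per_yard * yards_offline

# A twenty-yard miss costs exactly twice what a ten-yard miss costs.
print(proportionate_penalty(10))  # 0.5
print(proportionate_penalty(20))  # 1.0
```

The paradox Doak describes is that this smooth, “fair” function is precisely the one that makes the game hardest for the worst players, since they are the ones who are offline most often and by the largest margins.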

What Doak means by “proportionate” is that if courses were designed in this way, it “would make the game easiest for the good player, and hardest for the bad player.” Which, on the surface, might sound just, or fair: shouldn’t things be easiest for the good player? What’s the point of being good, if not? But that’s what makes for the paradox, Doak says: in reality, “good players need to be challenged”—that is, they need an arena to demonstrate their skill—and “bad players need a way around that doesn’t cost them too much.” If the game is hardest for the very worst players, in other words, there isn’t going to be a game much longer.

That then is what Doak means by what he calls the “paradox”: “if you design a course strictly to punish bad shots proportionately, you get just the opposite” of a course that would allow bad players to survive while delighting the better player. To allow golf to survive as a sport, Doak says that it’s necessary to design golf courses that are—“paradoxically” you might say—unfair. But while to put things this way makes Doak sound like a kind of socialist—it’s difficult not to hear an echo of “from each according to his abilities, and to each according to his needs” in what Doak writes— it’s also possible to describe his position in terms that have directly the opposite political valence.

That’s how the debate Doak enters here turned at nearly the very beginning of the modern age of the sport itself: as one Bob Crosby has noted on the website Golf Club Atlas, at the zenith of the Jazz Age one Joshua Crane—an excellent player in his own right, and a critic of golf architecture—insisted that golf “is improving because the punishment for poor play is becoming universally fairer.” Golf became better, Crane argued, the more it acceded to “the demands of human nature for fair play”—the “real pleasure” of the game, he held, was in the “manipulation” of all the elements of the game (shotmaking, first of all) “in a skillful and thoughtful way, and under conditions where victory or defeat is due to superior or inferior handling, not to good or bad luck beyond either player’s control.” In Doak’s terms, Crane was in effect arguing that golf architecture ought to be “proportionate”: he was essentially claiming that golf ought to be easiest for the good player, and hardest for the bad player.

Opposing Crane, and thus championing the position eventually occupied by Doak, was the golf architect Max Behr—“Yale’s first graduate to design golf courses,” according to a website maintained by the university. (I once saw Robert Duvall in the parking lot at Lakeside Golf Club in Burbank, California—one of Behr’s best-known designs.) Contrary to Crane, Behr believed that what he called “the moral dimension”—i.e., what Crane thought of as “fair play”—had no place in golf architecture. Quite the contrary, in fact, according to one of his foremost interpreters: Bob Crosby of Golf Club Atlas writes that Behr thought it is “the threat of inequitable, devastating hazards that accounts for the highest drama in the game,” which for Behr was “the whole point of good golf architecture.” Indeed, so long as “the player is given the option to play away from hazards,” Behr thought the “architect owes the player nothing in terms of equity.” Hole-wrecking, even round-wrecking, hazards were entirely part of the game—perhaps even the point of the game—to Behr, while to Crane they were anathema.

All through the late summer and autumn of 1926, and into 1927, the combat between the two men raged in the pages of the arrestingly-named Country Club & Pacific Golf and Motor Magazine—an apparently fine publication that would, like so many other literary efforts, not survive the crash of Wall Street less than three years in the future. They argued the point in several fashions; one way to describe the difference between the two men is to put it in terms of ideal scores. As Crane saw golf, an ideal scorecard would be a series of fives, say, or—for the better player—a series of fours. Crane’s notion of a “good score” would be, in other words, something like “5-5-5-4-5-4-5-5-5” for nine holes. But Behr’s “ideal” scorecard (and, I suspect, Doak’s) would look quite different: that scorecard might read “4-6-2-5-9-2-4-5-6.” If I’ve done the math right, both cards come in at 43—but one suspects that the Behr scorecard contains, as Behr wished, a great deal more drama. To Behr, that was the whole point. To Behr, it wasn’t what you shot, but how you shot it, that mattered; to Crane, the reverse.
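The math does check out, and the difference in “drama” between the two cards can even be given a number. The short sketch below, my own illustration rather than anything Behr or Crane computed, totals each card and compares how widely the hole-by-hole scores spread.

```python
from statistics import pvariance

crane = [5, 5, 5, 4, 5, 4, 5, 5, 5]  # Crane's steady card
behr = [4, 6, 2, 5, 9, 2, 4, 5, 6]   # Behr's volatile card

print(sum(crane), sum(behr))  # both total 43
print(round(pvariance(crane), 2), round(pvariance(behr), 2))  # about 0.17 vs about 4.17
```

Identical totals, then, but the spread of the Behr card is more than twenty times that of the Crane card, which is one way of putting a number on what Behr meant by drama.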

Put in this way, it is Behr who might sound something like those conservative voices who, for example, argued against the federal bailouts of the large banks on the grounds that such bailouts encourage “moral hazard”—the notion that, as Andrew Beattie at Investopedia has put it, “a party that is protected in some way from risk will act differently than if they didn’t have that protection.” By protecting banks against the risk of catastrophic losses, you have encouraged them to behave in ways that risk catastrophic losses.

Just so, Behr could be imagined as saying, by in effect protecting golfers from the risk of huge scores, you are encouraging them to play lackadaisically; that is, without the sort of strategic planning Behr thought was essential to playing golf well. Behr thought of this, according to Bob Crosby, as “strategic freedom”; the “primary pay-off” of which, Crosby says, “is the drama created by a player’s fore-knowledge that his failure to pull off an aggressive … strategy might indeed have consequences that are devastating, disproportionate and ‘unfair.’” Behr in short thought that golfers ought to have a choice about which path to take to the hole: a risky choice and a less-risky one. But that freedom, he argued, ought to be backed by some pretty severe consequences of failure.

In that sense, then, it’s possible to read Behr’s idea as arch-capitalist rather than, as I said earlier of Doak, socialistic. (For the record, both Behr and Crane were strongly conservative, while Doak’s politics are entirely unknown to me.) For example, it’s possible to hear an echo of Behr’s philosophy of golf courses in the recent debate over health care—particularly, as Malcolm Gladwell pointed out in the New Yorker some years ago, over the question of “moral hazard.” Apparently, in 1968 “the economist Mark Pauly,” Gladwell tells us, “argued that moral hazard played an enormous role in medicine”—the idea being that making “you responsible for a share of the costs” of your medical bills “will reduce moral hazard.” That is, you will be—as Behr argued about golfers—a more careful person, and more diligent about your health needs.

Yet, whereas that may be true in a game like golf, Gladwell’s informants argue that it is an absurd line of thinking when applied to medical care. One of them, the economist Uwe Reinhardt of Princeton University, says flatly that “[m]oral hazard is overblown” when it comes to medicine, because nobody goes to the doctor blithely. “We go to the doctor,” as Gladwell remarks, “grudgingly, only because we’re sick.” In the notional, Behr-like world implied by the pre-Obamacare system, people would supposedly spend their off hours trekking to the doctor’s office were it not for the costs; people like Reinhardt simply observed that, in reality, nobody goes to the doctor cheerfully. Or as Reinhardt put it, do people “check into the hospital instead of playing golf?”

The answer, of course, is “no”—an answer that also suggests just why it is so dangerous to mix arguments over games with arguments over politics. I would, for example, much rather play a golf course designed by Doak or Behr than one designed by Crane. Conversely, however, I would much rather get my medical care from a system designed by Crane than I would one designed by Behr. Does this mean that one philosophy is better than the other in all situations? No; it just means that what we want from our entertainment is different than what we want—or should want—from the systems that support our lives.

Once—or so I understand—this was known as “having a sense of proportion.”

Arbitrating Arbitrariness

MACBETH: If chance will have me king, why, chance may crown me,
Without my stir.
The Tragedy of Macbeth.

Justice Antonin Scalia died this past week, and while his judicial opinions will be alternately celebrated and denounced according to political sensibilities, Scalia is perhaps best known to golfers for his dissent in PGA Tour, Inc. v. Martin, the case that pitted Casey Martin, Stanford teammate of Tiger Woods and victim of a birth defect in his right leg, against the Tour over whether Martin could use a golf cart while playing tournaments. Excepting the fact that Casey Martin is and always has been an extremely polite individual, the case embodied the “snobs vs. slobs” trope that has motivated nearly every golf story for the mass market at least since the premiere of Caddyshack, and Justice Scalia did not disappoint from that angle: in a performance reminiscent of Judge Smails recollecting to the Danny character how, while he had not wished to sentence “boys younger than you to the gas chamber,” he felt he “owed it to them,” Scalia poured a rain of sarcasm on the majority of the court (who sided with Martin). Yet, while Scalia’s opinion is entertaining, what is perhaps most interesting about it from an intellectual perspective is that in his dissent Scalia lays out a theory of games about as “postmodern” as that of any Continental philosopher or theory-addled Brown semiotician: “in all games,” Scalia wrote, the rules are “entirely arbitrary”—an assertion hardly distinguishable from hero-of-the-poststructuralist-left Ferdinand de Saussure’s claim, about language itself, that “the link between signal and signification is arbitrary.” But are these claims about arbitrariness true? And what does it mean that, in this connection, Scalia’s position appears hardly distinguishable from some of the more outré claims of the contemporary humanistic academy? I’d suggest that there is indeed a subterranean connection between the two—a connection that may in turn explain just how it is that Bill James, the scholar of baseball, is working for the Boston Red Sox and not, say, the University of Missouri.

James, after all, is perhaps best known in baseball circles—aside from being the man who nearly singlehandedly brought Enlightenment principles to the sport—for inventing what’s become known as “Pythagorean expectation”: a formula by which a given team’s win-loss record can be accurately forecast from the runs the team scores versus the runs it allows. (It’s called “Pythagorean” because of the formula’s superficial similarity to Pythagoras’s famous theorem.) By combing through the records, baseball scholars have found that teams’ win-loss records generally do mirror the difference between the runs they score and the runs they allow, and also that teams whose records differ greatly from their expectation can usually be shown to have benefitted (or been harmed) by some sort of chanciness: like, for instance, the 1974 San Diego Padres, who had a phenomenal record of 31-16 in one-run games while going 29-86 in all the other games.

Hence, as Baseball Reference points out, “while winning as many games as possible is still the ultimate goal of a baseball team, a team’s run differential … provides a better idea of how well a team is actually playing.” Pythagorean expectation, in short, is a way of eliminating arbitrariness from a team’s record by taking what could be called a more granular view: rather than viewing a team from the skybox level of its win-loss record, it’s better to look from the basepath level—at how well or poorly a team does at the game’s essential acts of scoring and preventing runs.
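The formula itself is compact: expected winning percentage is runs scored squared divided by the sum of runs scored squared and runs allowed squared. The season totals in the sketch below are invented, but the calculation is the standard one.

```python
def pythagorean_expectation(runs_scored: float, runs_allowed: float,
                            exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: the share of its games a team
    "should" win given its runs scored and allowed. James used an exponent
    of 2; later analysts often prefer a slightly smaller exponent."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# Invented season totals: 700 runs scored, 650 allowed, over 162 games.
expected_wins = 162 * pythagorean_expectation(700, 650)
print(round(expected_wins))  # about 87 wins
```

A team whose actual win total sits far above or below that expectation has usually been unusually lucky or unlucky in close games, which is exactly the pattern of the 1974 Padres described above.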

To Scalia, however, it seems that there is no such thing as an act “essential” to a given game: “since it is the very nature of a game to have no object except amusement,” the justice wrote in Martin, “it is quite impossible to say that any of a game’s arbitrary rules is ‘essential.’” Similarly, postmodern intellectuals like to claim, as literary critic Jonathan Culler has, that “there is no natural or inevitable link between the signifier and the signified.” Such arguments take off from de Saussure’s work on language a century ago, by which the Swiss linguist was led to argue that, for instance, “There is no internal connection, for example, between the idea ‘sister’ and the French sequence of sounds s—ö—r which acts as its signal.” In that sense, literary intellectuals often like to speak, as philosopher Ludwig Wittgenstein did, of “language games”: in this way, as has been said, the “rules of language are analogous to the rules of games; thus saying something in a language is analogous to making a move in a game.” Conversely then, no one “sign” can be considered to be “essential” to a language, just as no one act can be considered to be essential to a game. In that sense, it seems that while Scalia and hyper-left-wing scholars of the humanities were political opponents in many different arenas, they can usefully be said to oppose James’ notion that, in fact, there are essential acts that are definitional to a game—and that those acts can be used to determine value.

In that way, then, contemporary literary intellectuals and Scalia can be said to be united in their opposition to a position first enunciated long before Bill James ever walked the earth—a position with far more political import than the game of golf. So far as I know, that principle was first announced by the German philosopher, theologian, jurist, and astronomer Nicholas of Cusa in the fifteenth century, in his work De concordantia catholica (or, The Catholic Concordance). “It is,” Nicholas wrote there, “a general principle that the greater the agreement to a proposal, the more reason there is to think it correct and divinely inspired.” Or, as the Marquis de Condorcet would put it some centuries later in his Essay on the Application of Analysis to the Probability of Majority Decisions: “If … each voter is more likely to vote correctly … then adding more voters increases the probability that the majority decision is correct.” In other words, what these learned Europeans were arguing centuries before Bill James is that by looking at the acts of scoring—in this case, voting—that are essential to the game of elections, it is possible to find real value, and not simply mirages.
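Condorcet’s claim is easy to check numerically: if each voter independently gets the answer right with probability just better than a coin flip, the chance that a simple majority is right climbs toward certainty as the electorate grows. The probabilities in the sketch below follow directly from the binomial distribution; the 0.55 figure is, of course, only an illustration.

```python
from math import comb

def p_majority_correct(n_voters: int, p_correct: float) -> float:
    """Chance that a simple majority of n (odd) independent voters is right,
    each voter being right with probability p_correct: Condorcet's setting."""
    need = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p_correct**k * (1 - p_correct)**(n_voters - k)
               for k in range(need, n_voters + 1))

for n in (1, 11, 101, 1001):
    print(n, round(p_majority_correct(n, 0.55), 3))
# climbs from 0.55 toward 1: roughly 0.63 at 11 voters, 0.84 at 101, 0.999 at 1001
```

Counting the votes actually cast, in other words, is to elections what counting runs is to baseball: a way of measuring the thing that is essential to the game.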

Today, arguments like Scalia’s in the Martin case, or those of postmodern literary intellectuals, can be found advanced by, for instance, Hillary Clinton’s campaign. Her supporters sometimes say—as they do—that the supporters of her opponent Bernie Sanders should know that, while Sanders nearly tied Clinton in Iowa (just how nearly is under dispute, because the Iowa Democratic Party refuses to release the actual vote totals) and outright won New Hampshire, those figures should be overlooked because Clinton has an overwhelming lead in what are known as “superdelegates”: delegates of the party who will attend this summer’s national convention and vote on a nominee, but who were not elected within any state’s primary process. Such arguments like to point out that, while Sanders holds a 36-32 lead among elected delegates thus far, Clinton is crushing Sanders 362-8 among party insiders. The link between the party’s nominee and the primary process, these Clinton arguments suggest, is arbitrary—and thus Sanders’ supporters should give up their insurgency and, so to speak, return to the Clinton fold.

As can be seen, then, the arguments of “arbitrariness” are not particular to a certain political bent, but are instead markers of a certain kind of social position: fans of Hillary Clinton, in sum, are likely to share these assumptions with fans of Antonin Scalia. It’s not arbitrary, in other words, that the only demographic group Clinton won in New Hampshire was voters making over $200,000 per year. Both fans of Scalia and fans of Clinton are likely to reject Nicholas of Cusa’s and the Marquis de Condorcet’s assertion that value can be found in the opinion of the majority. Which, one supposes, is an opinion they are entitled to have. What’s perhaps surprising, however, is to suppose that the majority of Americans—golf fans or not—should ever agree to it.