Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
—Joseph Heller. Catch-22. (1961).

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore through (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the comprehensive documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as the famous sabermetrician Bill James, founder of the statistical study of baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the civil war through 1958,” James observes, no team ever managed such a thing. Why? Well, it seems readily apparent that scoring more runs than one’s opponent is a key component of winning baseball games, and that winning baseball games is a key component of winning championships, so it ought to be obvious that there shouldn’t be many winning teams that failed to outscore their opponents. On the other hand, it also seems possible to imagine a particular sort of team winning a lot of one-run games while occasionally suffering blow-out losses—and yet, as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is seemingly made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly-hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were really good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams. The ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball: its cavernous power alleys were where home runs went to die, while its enormous foul ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “dead ball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. The 1964 Cardinals, finally, had Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him)—and they played in the first year after major league baseball’s Rules Committee redefined the strike zone to be just slightly larger, a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, if only by a small margin. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs, 967 to 556—a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.
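The standard way of turning a run differential into an expected record is Bill James’s own “Pythagorean expectation,” which estimates a team’s winning percentage from its runs scored and allowed. A minimal sketch in Python, using the Royals’ figures quoted above (the exponent of 2 is James’s original choice; later refinements such as an exponent of 1.83 exist, so treat the output as an approximation):

```python
def pythagorean_expectation(runs_scored: float, runs_allowed: float,
                            exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: estimated winning
    percentage from a season's runs scored and runs allowed."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# The 1984 Kansas City Royals: 673 runs scored, 684 allowed.
print(f"{pythagorean_expectation(673, 684):.3f}")  # ~0.492

# An expected winning percentage below .500 -- yet the team won its
# division: exactly the "lumpiness" that run differential smooths out.
```

Run over whole seasons, the estimate tracks actual records closely; it is the single-season divergences, like the Royals’, that mark a team as lucky rather than good.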

That’s also, as it happens, just what the Law of Large Numbers does: first noted by the mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle at the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations the more likely it is that all those bets will converge toward the expected result. In other words, a single coin toss might come up heads or tails—but the more times the coin is thrown, the closer the proportion of heads will come to one-half.
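The convergence is easy to see for oneself. A minimal Python sketch (the number of flips and the seed are arbitrary choices of mine):

```python
import random

random.seed(42)  # arbitrary seed, so the run is reproducible

heads = 0
for flips in range(1, 100_001):
    heads += random.random() < 0.5  # one fair coin flip
    if flips in (10, 100, 1_000, 10_000, 100_000):
        print(f"{flips:>7,} flips: proportion of heads = {heads / flips:.4f}")

# The proportion wanders widely over the first handful of flips, then
# settles ever closer to 0.5: Bernoulli's "less danger of straying."
```

Note what converges: not the raw count of heads, which drifts, but the proportion of them, which is why a casino, or an insurer, wants volume.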

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but by winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” As bizarre as that sort of result is, however, according to some it’s just what makes the Electoral College worth keeping.

Hively was covering a story from the pages of the journal Public Choice, where that year the mathematician Alan Natapoff argued that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff noticed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff concluded, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the constitutional convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet, recall Stephen Jay Gould’s remark about the panda’s thumb: we can examine just how well a given object fulfills its purpose. Natapoff is arguing, in effect, that because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. His argument presumes, in other words, that baseball’s playoff system does identify the best team in baseball—and hence that an analogous structure ought to identify the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is to eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (cf. “Striking Out,” https://djlane.wordpress.com/2016/07/31/striking-out/). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that the winning team really was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. It may even be argued that baseball, by increasingly relying on a playoff system instead of the regular-season standings, is increasing, not decreasing, the role of chance in its championship process: whereas prior to 1969, the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’s list of sixteen championship-winning teams whose runs scored exceeded their runs allowed by a ratio of less than 1.100 to 1, all of the teams except the ones I have already mentioned are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
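Mlodinow’s figures can be recovered from elementary probability: the chance that the better team wins a majority of an odd-length series is a binomial tail sum. A minimal Python sketch follows; the single-game win probabilities (a two-thirds chance standing in for “significantly better,” a 55-percent chance for “evenly matched”) and the 95-percent confidence level are my illustrative assumptions, chosen because they land close to Mlodinow’s 23- and 269-game figures:

```python
from math import comb

def series_win_probability(p: float, games: int) -> float:
    """Probability that a team with single-game win probability p
    takes the majority of an odd-length series of the given length."""
    needed = games // 2 + 1
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(needed, games + 1))

# A best-of-seven series barely filters out chance:
print(f"{series_win_probability(0.55, 7):.3f}")    # ~0.61
print(f"{series_win_probability(2 / 3, 7):.3f}")   # ~0.83

# Reaching ~95% confidence takes far, far longer:
print(f"{series_win_probability(2 / 3, 23):.3f}")  # ~0.95
print(f"{series_win_probability(0.55, 269):.3f}")  # ~0.95
```

The sobering line is the first: even the better team loses a best-of-seven series roughly two times in five when the matchup is close.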

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. Recall Gould’s argument about the panda’s thumb: a lack of “fit” between the optimal solution to a problem (the human thumb) and the actual solution (the panda’s thumb) implies the presence of “history.” By that logic, the Electoral College is either the result of a lack of understanding of the mathematics of chance with regard to elections—or the American system for electing presidents was not designed for the purpose that it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these, some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician, the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reaffirmed one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, the marquis had—even before meeting Jefferson, when the Virginian came to France to be the American minister—already been a close friend, for years, of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. Not all of the convention attendees, in short, may have been aware of the relationship between probability and elections, but at least some were—and arguably they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at least, then, and probably Madison as well, could not but have been aware of the possible mathematical dangers an Electoral College could create: they must have known that the least chancy way of selecting a leader—that is, the product of the design of an infallible engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought a direct popular vote the best way to elect a president.
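Condorcet’s result is easy to check numerically: the probability that a majority of independent voters is correct is, again, a binomial tail sum. A minimal Python sketch (the 51-percent individual competence and the electorate sizes are my illustrative assumptions; the sum is computed in log space only so that large electorates don’t overflow):

```python
from math import exp, lgamma, log

def majority_correct_probability(p: float, voters: int) -> float:
    """Condorcet's Jury Theorem: probability that a majority of an odd
    number of independent voters, each correct with probability p,
    reaches the correct decision."""
    def log_pmf(k: int) -> float:  # log of the binomial probability mass
        return (lgamma(voters + 1) - lgamma(k + 1) - lgamma(voters - k + 1)
                + k * log(p) + (voters - k) * log(1 - p))
    needed = voters // 2 + 1
    return sum(exp(log_pmf(k)) for k in range(needed, voters + 1))

# Voters individually barely better than a coin flip (p = 0.51):
for n in (101, 1_001, 10_001):
    print(f"{n:>6,} voters: {majority_correct_probability(0.51, n):.3f}")
# ~0.58, ~0.74, ~0.98: the majority grows ever more reliable as the
# electorate grows -- and ever less reliable if p were below one-half.
```

The theorem cuts both ways, as the last comment notes: the same arithmetic that rewards enlarging an electorate punishes slicing it into smaller pools.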

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents, for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, his proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement virtually identical to Condorcet’s mathematically informed argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote is how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would run at least 23, and possibly as many as 269, games in order to rule out chance, the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to the ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.


Lex Majoris

The first principle of republicanism is that the lex majoris partis is the fundamental law of every society of individuals of equal rights; to consider the will of the society enounced by the majority of a single vote, as sacred as if unanimous, is the first of all lessons in importance, yet the last which is thoroughly learnt. This law once disregarded, there is no other but that of force, which ends necessarily in military despotism.
—Thomas Jefferson. Letter to Baron von Humboldt. 13 June 1817.

Since Hillary Clinton lost the 2016 American presidential election, many of her supporters have been quick to cry “racism” on the part of voters for her opponent, Donald Trump. According to Vox’s Jenée Desmond-Harris, for instance, Trump won the election “not despite but because he expressed unfiltered disdain toward racial and religious minorities in the country.” Aside from being the easier interpretation, because it allows Clinton voters to ignore the role their own economic choices may have played in the broad support Trump received throughout the country, such accusations are counterproductive even on their own terms because—only seemingly paradoxically—they reinforce many of the supports racism still receives in the United States: above all, because they weaken the intellectual argument for a national direct election for the presidency. By shouting “racism,” in other words, Hillary Clinton’s supporters may end up helping to continue racism’s institutional support.

That institutional support begins with the method by which Americans elect their president: the Electoral College—a method that, as many have noted, is not used in any other industrialized democracy. Although many scholars and others have advanced arguments for the existence of the college through the centuries, most of these “explanations” are, in fact, intellectually incoherent: the most common of them concerns the differences between the “large states” and the “small”—yet in the actual United States, as James Madison, known as the “Father of the Constitution,” noted at the time, there had not then been, nor has there ever been since, a conflict that pitted the larger-population states against the smaller-population ones. Meanwhile, the other “explanations” for the Electoral College do not even rise to this level of incoherence.

In reality there is only one explanation for the existence of the college, and that explanation has been most forcefully and clearly made by law professor Paul Finkelman, now serving as a Senior Fellow at the University of Pennsylvania after spending much of his career at obscure law schools like the University of Tulsa College of Law, the Cleveland-Marshall College of Law, and the Albany Law School. As Finkelman has been arguing for decades (his first papers on the subject were written in the 1980s), the Electoral College was originally invented by the delegates to the Constitutional Convention of 1787 in order to protect slavery. That such was the purpose of the College can be known, most obviously, because the delegates to the convention said so.

It’s important to remember that, when the means of electing a president were first debated, the convention had already decided, for the purposes of representation in the newly-created House of Representatives, to count black slaves by means of the infamous three-fifths ratio. That ratio, in turn, had its effect when discussing the means of electing a president: delegates like James Madison argued, as Finkelman notes, that the existence of such a college—whose composition would be based on each state’s representation in the House of Representatives—would “guarantee that the nonvoting slaves could nevertheless influence the presidential election.” Or as Hugh Williamson, a delegate from North Carolina, observed during the convention, if American presidents were elected by direct national vote the South would be shut out of electing a national executive because “her slaves will have no suffrage”—that is, because in a direct vote all that would matter is the number of voters, the Southern states would lose the advantage the three-fifths ratio gave them in the House. The existence of the Electoral College is thus directly tied to the prior decision to grant Southern slave states an advantage in Congress: it is another in a string of institutional decisions made by convention delegates to protect domestic slavery.

Yet, assuming that Finkelman’s case for the racism of the Electoral College is sound, how can decrying the racism of the American voter inflict harm on the case for abolishing that college? The answer goes back to the very justification of, not only presidential elections, but elections in general—the gradual discovery, during the eighteenth-century Enlightenment, of what is today known as the Law of Large Numbers.

Putting the law in capital letters, I admit, tends to mystify it, but anyone who buys insurance already understands the substance of the concept. As the New Yorker’s Malcolm Gladwell once explained, “the safest and most efficient way to provide insurance” is “to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, the more people participating in an insurance plan, the greater the possibility that the plan’s members will be protected. The Law of Large Numbers explains why that is.

That reason is the same as the reason that, as Peter Bernstein remarks in Against the Gods: The Remarkable Story of Risk, tossing a coin more times “will correspondingly increase the probability that the ratio of heads thrown to total throws” strays from one-half by less than any given amount. Or, the reason that—as physicist Leonard Mlodinow has pointed out—in order really to tell which of two baseball teams is better, a World Series would have to be at least 23 games long (if one team were much better than the other), and possibly as long as 269 games (between two closely matched opponents). Only by playing so many games can random chance be confidently excluded: as Carl Bialik of FiveThirtyEight once pointed out, usually “in sports, the longer the contest, the greater the chance that the favorite prevails.” Or, as Israeli psychologists Daniel Kahneman and Amos Tversky put the point in 1971, “the law of large numbers guarantees that very large samples will indeed be representative”: it’s what scientists rely upon to know that, if they have performed enough experiments or pored over enough data, they know enough to exclude idiosyncratic results. The Law of Large Numbers asserts, in short, that the more times we repeat something, the closer we will approach its true value.

It’s for just that reason that many have noted the connection between science and democratic government: “Science and democracy are powerful partners,” as the website for the Union of Concerned Scientists has put it. What makes the two such “powerful” partners is that the Law of Large Numbers underlies the act of holding elections: as James Surowiecki put the point in his book The Wisdom of Crowds, the theory of democracy is that “the larger the group, the more reliable its judgment will be.” Just as scientists think that, by replicating an experiment, they can more readily trust its results, so too does a democratic government implicitly assume that, by including more people in the decision-making process, it can more readily arrive at the “correct” solution: as James Madison put it in The Federalist No. 10, if you “take in a greater variety of parties and interests,” then “you make it less probable that a majority of the whole will have a common motive for invading the rights of other citizens.” Without such a belief, after all, there would be no reason not to trust, say, a ruling caste to make decisions for society—or even a single, perhaps orange-toned, individual. Without some concept of the Law of Large Numbers—some belief that increasing the number of trials, or the number of inputs, will make for better results—there is no reason for democratic government at all.

That’s why, when people criticize the Electoral College, they are implicitly invoking the Law of Large Numbers. The Electoral College divides the pool of American voters into fifty smaller pools, whereas a national popular vote would collect all Americans into a single one—a point that some defenders of the College seek to make into a virtue, instead of the vice it is. In the wake of the 2000 election, for example, Senator Mitch McConnell wrote that the “Electoral College served to center the post-election battles in Florida,” preventing the “vote recounts and court battles in nearly every state of the Union” that, McConnell assures us, would have occurred in the college’s absence. But as Timothy Noah pointed out in The New Republic in 2012, what McConnell’s argument “fails to realize is that when you’re assembling one big count rather than a lot of little ones it’s a lot less clear what’s to be gained from rigging any of the little ones.” If what matters is the popular vote, what happens in any one location matters far less: stealing votes in downstate Illinois won’t hand you the entire state—just as, with enough samples or experiments run, the fact that the lab assistant was drowsy when she recorded one set of results won’t matter much. It’s also why deliberately losing a single game in July hardly matters, while tanking a game of the World Series decides everything.

Put in such a way, it’s hard to see how anyone without a vested stake in the construction of the present system could defend the Electoral College—yet, as I suspect we are about to see, the very people now ascribing Donald Trump’s victory to the racism of the American voter will soon be doing just that. The reason will be precisely the same reason that such advocates want to blame racism, rather than the ongoing thievery of economic elites, for the rejection of Clinton: because racism is a “cultural” phenomenon, and most left-wing critics of the United States now obtain credentials in “cultural,” rather than scientific, disciplines.

If, in other words, Donald Trump’s victory was due to a complex series of renegotiations of the global contract between capital and labor, then explaining it would require experts in economics and similar disciplines; if his victory was due to racism, however—racism being considered a cultural phenomenon—then it will call forth experts in “cultural” fields. Because those with “liberal” or “leftist” political leanings now tend to gather in “cultural” fields, they will (indeed, must) attempt to shift the battleground toward their areas of expertise. That shift, I would wager, will in turn set those who argue for “cultural” explanations of the rise of Trump against arguments for the elimination of the Electoral College.

The reason is not difficult to understand: it isn’t too much to say, in fact, that one way to define the study of the humanities is to say it comprises the disciplines that largely ignore, or even oppose, the Law of Large Numbers both as a practical matter and as a philosophic one. As literary scholar Franco Moretti, now of Stanford, observed in his Atlas of the European Novel, 1800-1900, just as “silver fork novels”—a genre published in England between the 1820s and the 1840s—do not “show ‘London,’ but only a small, monochrome portion of it,” so too does the average student of literature not really study her ostensible subject matter. “I work on west European narrative between 1790 and 1930, and already feel like a charlatan outside of Britain and France,” Moretti confesses in an essay entitled “Distant Reading”—and even then, he only works “on its canonical fraction, which is not even 1 percent of published literature.” As Joshua Rothman put the point in a New Yorker profile of Moretti a few years ago, Moretti instead insists that “if you really want to understand literature, you can’t just read a few books or poems over and over,” but instead “you have to work with hundreds or even thousands of texts at a time”—that is, he insists on the significance of the Law of Large Numbers in his field, an insistence whose very novelty demonstrates how literary study is a field that has historically resisted precisely that recognition.

In order to proceed, in other words, disciplines like literary study or art history—or even history itself—must argue for the representativeness of a given body of work: usually termed, at least in literary study, “the Canon.” Such disciplines are already, simply by their very nature, committed to the idea that it is not necessary to read all of what Moretti says is the “thirty thousand nineteenth-century British novels out there” in order to arrive at conclusions about the nineteenth-century British novel: in the first place, “no one really knows” how many there really are (there could easily be twice as many), and in the second, “no one has read them [all], [and] no one ever will.” In order to get off the ground, such disciplines must deny the Law of Large Numbers: as Moretti says, “you invest so much in individual texts only if you think that very few of them really matter”—a belief with an obvious political corollary. Rejection of the Law of Large Numbers is thus, as Moretti also observes, “an unconscious and invisible premiss” for most who study such fields—which is to say that although students of the humanities often make claims for the political utility of their work, they sometimes forget that the enabling presuppositions of their fields are inherently those of the pre-Enlightenment ancien régime.

Perhaps that’s why—as Joe Pinsker observed in a fascinating, but short, article for The Atlantic several years ago—studies of college students find that those “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” while students “whose parents make more money flock to history, English, and the performing arts”: the baseline assumptions of those disciplines are, no matter the particular predilections of a given instructor, essentially aristocratic, not democratic. To put it most baldly, the disciplines of the humanities must reject the premise of the Law of Large Numbers—that the more examples are added, the closer we approach the truth—a rejection that can be directly witnessed when, for instance, English professor Michael Bérubé of Pennsylvania State University observes that the “humanists at [his] end of the [academic] hallway roundly dismissed” Harvard biologist E.O. Wilson’s book Consilience: The Unity of Knowledge for arguing that “all human knowledge can and eventually will be unified under the rubric of the natural sciences.” Rejecting the Law of Large Numbers is foundational to the very operation of the humanities: without making that rejection, they cannot exist.

In recent decades, of course, Franco Moretti has presumably not been the only professor of the humanities to realize that these disciplines stand on a collision course with the Law of Large Numbers—which may explain why disciplines like literature have, for years, been actively recruiting among members of minority groups. The institutional motivations of such hiring ought to be readily apparent: by making such hires, departments of the humanities could insulate themselves from charges from the political left—while at the same time continuing the practices that, without such cover, might have appeared increasingly anachronistic in a democratic age. Minority hiring, that is, may not be so politically “progressive” as its defenders sometimes argue: it may, in fact, have prevented the intellectual reforms within the humanities urged by people like Franco Moretti for a generation or more. Of course, by joining such departments, members of minority groups also may have, consciously or not, tied their own fortunes to a philosophic rejection of concepts like the Law of Large Numbers—as African-American sportswriter Michael Wilbon, of ESPN fame, wrote this past May, black people supposedly have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” I suspect, then, that many who claim to be on the political left will soon come out to defend the Electoral College. If that happens, then in one last cruel historical irony, the final defenders of American slavery may end up being precisely those whom slavery was meant to oppress.

Buck Dancer’s Choice

Buck Dancer’s Choice: “a tune that goes back to Saturday-night dances, when the Buck, or male partner, got to choose who his partner would be.”
—Taj Mahal. Oooh So Good ‘n’ Blues. (1973).

 

“Goddamn it,” Scott said, as I was driving down the Kennedy Expressway towards Medinah Country Club. Scott is another caddie I sometimes give rides to; he’s living in the suburbs now and has to take the train into the city every morning to get his methadone pill, after which I pick him up and take him to work. On this morning, Scott was distracting himself, as he often does, from the traffic outside by playing, on his phone, the card game known as spades—a game in which, somewhat like contract bridge, two players team up against an opposing partnership. He had been matched with a bad partner—a player who, it came to light later, had declined to cover an opposing ten of spades with the king he held, playing a three of spades instead. (In so doing, Scott’s incompetent partner negated the value of the king while receiving nothing in return.) Since, as I agree, that sounds relentlessly boring, I wouldn’t have paid much attention to the whole complaint—until I realized that Scott’s grumble about his partner not only described the chief event of the previous night’s baseball game, but also suggested why so many potential Democratic voters will likely sit out this election. After all, arguably the best Democratic candidate for the presidency this year will not be on the ballot in November.

What had happened the previous night was described on ESPN’s website as “one of the worst managerial decisions in postseason history”: in a one-game, extra-innings playoff between the Baltimore Orioles and the Toronto Blue Jays, Orioles manager Buck Showalter used six relief pitchers after starter Chris Tillman got pulled in the fifth inning—but he never ordered his best reliever, Zach Britton, into the game at all. During the regular season, Britton had been one of the best relief pitchers in baseball: as ESPN observed, Britton had allowed precisely one earned run since April, and as Jonah Keri wrote for CBS Sports, over the course of the year Britton posted an Earned Run Average (0.53) that was “the lowest by any pitcher in major league history with that many innings [67] pitched.” (And as Deadspin’s Barry Petchesky remarked the next day, Britton had “the best ground ball rate in baseball”—which, given that the Orioles ultimately lost on a huge, moon-shot walk-off home run by Edwin Encarnacion, seems especially pertinent.) Despite the fact that the game went 11 innings, Showalter did not put Britton on the mound even once—which is to say that the Orioles ended their season with one of their best weapons sitting on the bench.

Showalter had the king of spades in his hand—but neglected to play him when it mattered. He defended himself later by saying, essentially, that he is the manager of the Baltimore Orioles, and that everyone else was lost in hypotheticals. “That’s the way it went,” the veteran manager said in the post-game press conference—as if the “way it went” had nothing to do with Showalter’s own choices. Some journalists speculated, in turn, that Showalter’s choices were motivated by what Deadspin called “the long-held, slightly-less-long-derided philosophy that teams shouldn’t use their closers in tied road games, because if they’re going to win, they’re going to need to protect a lead anyway.” On this view, Showalter could not have known how long the game would last, only that, until his team scored some runs, the game would continue. If so, then it might be possible to lose by playing your ace of spades too early.

Yet, not only did Showalter deny that such was a factor in his thinking—“It [had] nothing to do with ‘philosophical,’” he said afterwards—but such a view takes things precisely backward: it’s the position that imagines the Orioles scoring some runs first that’s lost in hypothetical thinking. Indisputably, the Orioles needed to shut down the Jays in order to continue the game; the non-hypothetical problem presented to the Orioles manager was that the O’s needed outs. Showalter had the best instrument available to him to make those outs … but didn’t use him. Which is to say that it was Showalter who got lost in his imagination, not the critics: by not using his best pitcher, Showalter was reacting to an imagined hypothetical scenario instead of responding to the actual facts playing out before him.

What Showalter was flouting, in other words, was the manner of thinking that is arguably responsible for what successes there are in the present world: probability, the first principle of which is the Law of Large Numbers. First conceived by the Italian Gerolamo Cardano, the earliest man known to have devised the idea, during the sixteenth century, and later proved and publicized by the Swiss mathematician Jacob Bernoulli, the Law of Large Numbers holds that, as Bernoulli put it in his Ars Conjectandi of 1713, “the more observations … are taken into account, the less is the danger of straying.” Or: the more observations, the less the danger of reaching wrong conclusions. What Bernoulli is saying, in other words, is that in order to demonstrate the truth of something, the investigator should look at as many instances as possible—a rule that is, largely, the basis of science itself.

What the Law of Large Numbers says, then, is that in order to determine a course of action, one should first ask: what is more likely to happen over the long run? In the case of the one-game playoff, for instance, it’s arguable that Britton, who has one of the best statistical records in baseball, would have been less likely to give up the Encarnacion home run than the pitcher who did give it up (Ubaldo Jimenez, 2016 ERA: 5.44). Although Jimenez was not a bad ground ball pitcher in 2015—he had a 1.85 ground ball to fly ball ratio that season, putting him 27th out of 78 pitchers, according to SportingCharts.com—his ratio was dwarfed by Britton’s: as J.J. Cooper observed just this past month for Baseball America, Britton is “quite simply the greatest ground ball pitcher we’ve seen in the modern, stat-heavy era.” (Britton faced 254 batters in 2016; only nine of them got an extra-base hit.) Who would you rather have on the mound in a situation where a home run—which is, obviously, a fly ball—can end not only the game but the season?
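The gap between the two pitchers can be put in rough numbers using only the ERA figures quoted above. A back-of-the-envelope Python sketch follows; ERA is a crude proxy here, since it ignores inherited runners, opponent quality, and reliever usage patterns, so treat the output as an order-of-magnitude comparison rather than a projection:

```python
# ERA is earned runs allowed per nine innings, so ERA / 9 gives a crude
# expected-earned-runs-per-inning rate for each pitcher.
pitchers = {"Britton": 0.53, "Jimenez": 5.44}  # 2016 season ERAs quoted above

for name, era in pitchers.items():
    print(f"{name}: {era / 9:.3f} expected earned runs per inning")

# Britton: ~0.059 runs per inning; Jimenez: ~0.604 -- roughly a tenfold
# difference in expected damage. Those are the odds Showalter bucked.
```

Over a single inning, of course, anything can happen to either pitcher; the point of the Law of Large Numbers is that decisions should be made on the rates anyway.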

What Bernoulli’s (and Cardano’s) Law of Large Numbers does is define what we mean by “the odds”: that is, the outcome that is most likely to happen. Bucking the odds is, in short, precisely the crime Buck Showalter committed during the game with the Blue Jays: as Deadspin’s Petchesky wrote, “the concept that you maximize value and win expectancy by using your best pitcher in the highest-leverage situations is not ‘wisdom’—it is fact.” As Petchesky goes on to say, “the odds are the odds”—and Showalter, by putting all those other pitchers on the mound instead of Britton, ignored those odds.

As it happens, “bucking the odds” is just what the Democratic Party may be doing by adopting Hillary Clinton as its nominee instead of Bernie Sanders. As a number of articles this past spring noted, at that time many polls were saying that Sanders had better odds of beating Donald Trump than Clinton did. In May, Linda Qiu and Louis Jacobson noted in The Daily Beast that Sanders was making the argument that “he’s a better nominee for November because he polls better than Clinton in head-to-head matches against” Trump. (“Right now,” Sanders said then on the television show Meet the Press, “in every major poll … we are defeating Trump, often by big numbers, and always at a larger margin than Secretary Clinton is.”) At the time, the evidence suggested Sanders was right: “Out of eight polls,” Qiu and Jacobson wrote, “Sanders beat Trump eight times, and Clinton beat Trump seven out of eight times,” and “in each case, Sanders’s lead against Trump was larger.” (In fact, usually by double digits.) But, as everyone now knows, that argument did not secure the nomination for Sanders: in July, Clinton became the Democratic nominee.

To some, that ought to be the end of the story: Sanders tried, and (as Showalter said after his game) “it didn’t work out.” Many—including Sanders himself—have urged fellow Democrats to put the past behind them and work towards Clinton’s election. Yet that’s an odd position to take regarding a campaign that, above everything, was about the importance of principle over personality. Sanders’ campaign was, if anything, about the same point enunciated by William Jennings Bryan at the 1896 Democratic National Convention, in the famous “Cross of Gold” speech: the notion that the “Democratic idea … has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.” Bryan’s idea, as ought to be clear, has certain links to Bernoulli’s Law of Large Numbers—among them, the notion that it’s what happens most often (or to the most people) that matters.

That’s why, after all, Bryan insisted that the Democratic Party “cannot serve plutocracy and at the same time defend the rights of the masses.” Similarly—as Michael Kazin of Georgetown University described the point in May for The Daily Beast—Sanders’ campaign fought for a party “that would benefit working families.” (A point that suggests, it might be noted, that the election of Sanders’ opponent, Clinton, would benefit others.) Over the course of the twentieth century, in other words, the Democratic Party stood for the majority against the depredations of the minority—or, to put it another way, for the principle that you play the odds, not hunches.

“No past candidate comes close to Clinton,” wrote FiveThirtyEight’s Harry Enten last May, “in terms of engendering strong dislike a little more than six months before the election.” It’s a reality that suggests, in the first place, that the Democratic Party is hardly attempting to maximize its win expectancy. Beyond those pragmatic concerns regarding her electability, however, Clinton’s candidacy represents—from the particulars of her policy positions, to her statements to Wall Street financial types, to the electoral irregularities in Iowa and elsewhere—a repudiation, not simply of Bernie Sanders the person, but of the very idea of the importance of the majority that the Democratic Party once proposed and defended. What that means is that, even were Hillary Clinton to be elected in November, the Democratic Party—and those it supposedly represents—will have lost the election.

But then, you probably don’t need any statistics to know that.