A Part of the Main

We may be confident that the Great American Poem will not be written, no matter what genius attempts it, until democracy, the idea of our day and nation and race, has agonized and conquered through centuries, and made its work secure.

But the Great American Novel—the picture of the ordinary emotions and manners of American existence … will, we suppose, be possible earlier.
—John William De Forest. “The Great American Novel.” The Nation, 9 January 1868.

Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.” 1853.

 

“It was,” begins Chapter Seven of The Great Gatsby, “when curiosity about Gatsby was at its highest that the lights in his house failed to go on one Saturday night—and, as obscurely as it began, his career as Trimalchio was over.” Trimalchio is a character in the ancient Roman novel The Satyricon who, like Gatsby, throws enormous and extravagant parties; there’s a lot that could be said about the two novels compared, and some of it has been said by scholars. The problem with comparing the two novels, however, is that, unlike Gatsby, The Satyricon is “unfinished”: all we have today are the 141 not-always-contiguous chapters collated by 17th-century editors from two medieval manuscript copies, which are clearly not the entire book. Hence, comparing The Satyricon to Gatsby, or to any other novel, is always handicapped by the fact that, as the Wikipedia page continues, “its true length cannot be known.” Yet is it really true that estimating a message’s total length from only a part of the whole is impossible? Contrary to the collective wisdom of classical scholars and Wikipedia contributors, it isn’t, which we know thanks to techniques developed at the behest of a megalomaniac Trimalchio convinced that Shakespeare was not Shakespeare: work that eventually became the foundation of the National Security Agency.

Before getting to the history of those techniques, however, it might be best to describe first what they are. Essentially, the problem of figuring out the actual length of The Satyricon is a problem of sampling: that is, of estimating whether you have, like Christopher Columbus, run up on an island, or, like John Cabot, smacked into a continent. In biology, for instance, a researcher might count the number of organisms in a small area, then extrapolate to the entire habitat. Another biological technique is to capture and tag some animals in an area, then capture a second sample in the same area some time later: the proportion of recaptured animals that carry tags provides a ratio useful for estimating the true size of the population. (The fewer previously-tagged animals in the second sample, the larger the total population.) Or consider the problem the baseball writer Bill James took up earlier this year on his website (in “Red Hot Start,” from 16 April): forecasting the final record of a baseball team from its start, in this case the “true underlying win percentage” of the Boston Red Sox, given that the team’s record in its first fifteen games was 13-2. The way James did it is, perhaps, instructive about possible methods for determining the length of The Satyricon.

James begins by noting that because the “probability that a .500 team would go 13-2 or better in a stretch of 15 games is  … one in 312,” while the “probability that a .600 team would go 13-2 in a stretch of 15 games is … one in 46,” it is therefore “much more likely that they are a .600 team than that they are a .500 team”—though with the caveat that, because “there are many more .500 teams than .600 teams,” this is not “EXACTLY true” (emp. James). Next, James computes the standard deviation of actual team records: the amount by which those records spread themselves around the .500 mark of 81-81. For teams in the years 2000-2015 he finds this number to be .070, a low figure, meaning that most team records in that era bunched closely around .500. (By comparison, the standard deviation for “all [major league] teams in baseball history” is .102, meaning that there used to be a wider spread between first-place teams and last-place teams than there is now.) Finally, James arranges the possible records of baseball teams according to what mathematicians call the “Gaussian,” or “normal,” distribution: that is, how team records would look were they to follow the “bell-shaped” curve familiar from introductory statistics courses, in which most teams have .500 records and very few have either 100 wins or 100 losses.
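James’s two starting probabilities are straightforward binomial calculations, and can be checked directly. A minimal sketch in Python (my own reconstruction, not James’s code):

```python
from math import comb

def p_record(wins: int, games: int, p: float) -> float:
    """Binomial probability of exactly `wins` wins in `games` games
    for a team whose true winning percentage is `p`."""
    return comb(games, wins) * p**wins * (1 - p)**(games - wins)

# A true .500 team going exactly 13-2 in a 15-game stretch:
p500 = p_record(13, 15, 0.500)
# A true .600 team doing the same:
p600 = p_record(13, 15, 0.600)

print(f".500 team: 1 in {1 / p500:.0f}")   # 1 in 312
print(f".600 team: 1 in {1 / p600:.0f}")   # 1 in 46
```

The two figures James quotes fall out exactly, which suggests he is computing the chance of a 13-2 record precisely, rather than of 13-2 or better.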

If the records of actual baseball teams follow such a distribution, James finds that “in a population of 1,000 teams with a standard deviation of .070,” there should be 2 teams above .700, 4 teams with percentages from .675 to .700, 10 teams from .650 to .675, 21 teams from .625 to .650, and so on, down to 141 teams from .500 to .525. (These numbers are mirrored, in turn, by teams with losing records.) Obviously, teams with better final records have better chances of starting 13-2—but at the same time, there are far fewer teams with final records of .700 than teams at .600. As James writes, it is “much more likely that a 13-2 team is actually a .650 to .675 team than that they are actually a .675 to .700 team—just because there are so many more teams” (i.e., 10 teams as compared to 4). So the expected number of 13-2 starts produced at each level of the distribution actually grows as we approach .500—until, James says, we reach a winning percentage of .550 to .575, where the sheer number of teams finally gets outweighed by their lesser quality. Whereas the 66 teams (out of a thousand) expected to finish with winning percentages of .575 to .600 should produce a bit more than one 13-2 start among them (1.171341, to be precise), the 97 teams at .550 to .575 should produce only 1.100297. Doing a bit more mathematics, which I won’t bore you with, James eventually concludes that the 2018 Boston Red Sox are most likely to finish the season with a .585 winning percentage, which falls between a 95-67 season and a 94-68 season.
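The band counts, and the expected number of 13-2 starts each band produces, can be approximated by combining the normal distribution with the binomial calculation above. A sketch (the results come out close to, though not always identical with, James’s rounded figures, since his exact method of binning is not given):

```python
from math import comb, erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def p_13_2(p: float) -> float:
    """Chance a team with true winning percentage p starts exactly 13-2."""
    return comb(15, 13) * p**13 * (1 - p)**2

SD = 0.070  # James's 2000-2015 spread of team records around .500
edges = [.500, .525, .550, .575, .600, .625, .650, .675, .700]
bands = {}
for lo, hi in zip(edges, edges[1:]):
    # Teams per 1,000 whose true percentage falls in [lo, hi) ...
    n_teams = 1000 * (phi((hi - .5) / SD) - phi((lo - .5) / SD))
    # ... times each band's (midpoint) chance of a 13-2 start.
    bands[lo] = (n_teams, n_teams * p_13_2((lo + hi) / 2))

for lo, (n_teams, starts) in sorted(bands.items(), reverse=True):
    print(f".{round(lo * 1000)}s: {n_teams:5.1f} teams, "
          f"{starts:.3f} expected 13-2 starts")
```

Running this reproduces the shape of James’s argument: roughly 10 teams in the .650 band and 4 in the .675 band, with the expected number of 13-2 starts peaking in the .575-.600 band before falling off toward .500.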

What, however, does all of this have to do with The Satyricon, much less with the National Security Agency? In the specific case of the Roman novel, James provides a model for how to go about estimating the total length of the now-lost complete work: a model that begins by figuring out what league Petronius is playing in, so to speak. In other words, we would have to know something about the distribution of the lengths of fictional works: do they converge strongly—i.e., with a low standard deviation—on some average length, the way that baseball records converge around 81-81? Or do they wander far afield, so that the standard deviation is high? The author(s) of the Wikipedia article appear to believe that even asking is hopeless; as the Stanford literary scholar Franco Moretti notes, although he works “on West European narrative between 1790 and 1930,” he “already feel[s] like a charlatan” because he only works “on its canonical fraction, which is not even one percent of published literature.” There are, Moretti observes, “thirty thousand nineteenth-century British novels out there”—or are there forty, or fifty, or sixty thousand? “[N]o one really knows,” he concludes—and that is without even considering the “French novels, Chinese, Argentinian, [or] American” ones. But to compare The Satyricon to all novels would be to accept a high standard deviation, and hence a fairly wide range of possible lengths.

Alternately, The Satyricon could be compared only to its ancient comrades and competitors: the five ancient Greek novels that survive complete from antiquity, for example, along with the only Roman novel to survive complete, Apuleius’ The Metamorphoses. Were The Satyricon to be compared only to ancient novels (and of those, only the complete ones), the standard deviation would likely be lower, meaning that the lengths would cluster more tightly around the mean. That would imply a tighter range of possible lengths—at the risk of a greater error in the estimate, since the six ancient novels could all differ in length from The Satyricon far more than the class of all novels ever written likely would. The choice of which comparison set to use (all novels, or ancient novels only) is thereby the choice between a higher chance of being accurate and a higher chance of being precise. Either way, Wikipedia’s claim that the length “cannot be known” is only true if the words “with absolute certainty” are added. Our best guess can either be nearly certain to contain the true length within its range, or be nearly certain—if it is accurate at all—to lie very close to the true length. Which is to say that it is entirely possible for us to know the true length of The Satyricon, even if we could not be certain that we did in fact know it.
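To make the procedure concrete: given word counts for a comparison set, the estimate is nothing more than a mean and a spread. The figures below are purely hypothetical placeholders chosen for illustration, not the actual lengths of the six complete ancient novels:

```python
import statistics

# Hypothetical word counts for the six complete ancient novels;
# placeholders to illustrate the method, NOT real figures.
ancient_lengths = [31_000, 34_000, 42_000, 55_000, 68_000, 90_000]

mean = statistics.mean(ancient_lengths)
sd = statistics.stdev(ancient_lengths)

# A rough two-standard-deviation interval for a "typical" ancient novel:
print(f"estimated length: {mean:,.0f} words, "
      f"plausible range {mean - 2*sd:,.0f} to {mean + 2*sd:,.0f}")
```

Swapping in the larger, noisier set of all novels would widen the interval: a higher standard deviation buys accuracy (the true length is almost surely inside the range) at the cost of precision.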

That, then, answers the question of how we could know the length of The Satyricon—but when I began this story I promised that I would (eventually) relate it to the foundations of the National Security Agency. Those, I mentioned, began with an eccentric millionaire convinced that William Shakespeare did not write the plays that now bear his name. The millionaire’s name was George Fabyan; in the early 20th century he brought together a number of researchers in the new field of cryptography in order to “prove” his pet theory that Francis Bacon was the true author of the Bard’s work. Bacon was known as the inventor of the cipher system that bears his name, and Fabyan accordingly subscribed to the proposition that Bacon had concealed the fact of his authorship by means of coded messages within the plays themselves. The first professional American codebreakers thereby found themselves employed on Fabyan’s 350-acre estate (“Riverbank”) on the Fox River just south of Geneva, Illinois, which is still there today, and where American military minds found them upon the United States’ entry into World War One in 1917.

Specifically, they found Elizebeth Smith and William Friedman (who would later marry). During the war the couple helped train several federal employees in the art of codebreaking. By 1921 they had been hired away by the War Department, and they spent the 1920s in the service of the Coast Guard, breaking the codes of gangsters smuggling liquor into the dry United States. During World War Two, Elizebeth was employed in breaking one of the Enigma codes used by the German Navy; meanwhile, her husband William had founded the Army’s Signal Intelligence Service—the outfit that broke “Purple,” the cipher machine used by Japan’s Foreign Office, and the direct predecessor of the National Security Agency. William had also written the scientific papers that underlay their work; he had, in fact, coined the word cryptanalysis itself.

Central to Friedman’s work was something now called the “Friedman test,” but then called the “kappa test.” This test, like Bill James’ work, compares two probabilities: the first being the chance that two randomly selected letters from a random stream of text will turn out to be the same letter, which for a 26-letter alphabet is one in 26, or 0.0385. The second is the chance that two randomly selected letters from genuine English text will match, which is known to be about 0.067, since English favors some letters heavily over others. Knowing those two numbers, plus the length of an intercepted coded message, allows the cryptographer to estimate the length of the key—the translation parameter that determines the output—just as James can calculate the likely final record of a team that starts 13-2 using two different probabilities. Figuring out the length of The Satyricon, then, might not be quite the Herculean task it’s been represented to be—which raises the question: why has it been represented that way?
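A minimal sketch of the coincidence-counting idea behind the kappa test. This is a simplification of Friedman’s actual procedure: the 0.067 figure and the key-length formula below are the standard textbook versions, in which a polyalphabetic cipher with a longer key pushes the observed coincidence rate down toward the random baseline:

```python
from collections import Counter

KAPPA_RANDOM = 1 / 26     # ~0.0385: chance two random letters match
KAPPA_ENGLISH = 0.067     # chance two letters of English text match

def index_of_coincidence(text: str) -> float:
    """Observed probability that two letters drawn at random
    (without replacement) from `text` are the same letter."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

def estimate_key_length(ic: float) -> float:
    """Friedman-style estimate: a cipher with key length L yields an
    IC of roughly kappa_random + (kappa_english - kappa_random) / L."""
    return (KAPPA_ENGLISH - KAPPA_RANDOM) / (ic - KAPPA_RANDOM)

# A ciphertext whose IC sits a fifth of the way from random toward
# English suggests a five-letter key:
ic_observed = KAPPA_RANDOM + (KAPPA_ENGLISH - KAPPA_RANDOM) / 5
print(round(estimate_key_length(ic_observed)))  # 5
```

The parallel to the Satyricon problem is the structure of the inference: two known reference probabilities plus one observed statistic yield an estimate of a hidden quantity.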

The answer to that question, it seems to me, has something to do with the status of the “humanities” themselves: using statistical techniques to estimate the length of The Satyricon would damage the “firewall” that preserves disciplines like Classics, or literary study generally, from the grubby no-account hands of the sciences—a firewall, we are eternally reminded, necessary in order to foster what Geoffrey Harpham, former director of the National Humanities Center, has called “the capacity to sympathize, empathize, or otherwise inhabit the experience of others,” a capacity “clearly essential to democratic citizenship.” That may be so—but it is also true that maintaining that firewall allows law schools, as Sanford Levinson of the University of Texas remarked some time ago, to continue to emphasize “traditional, classical legal skills” at the expense of “‘finding out how the empirical world operates.’” That habit, in turn, has allowed the U.S. Supreme Court the luxury of considering (in Gill v. Whitford) whether to ignore a statistical measure of gerrymandering. And since it is quite certain that the disciplines known as the humanities collect students from wealthy backgrounds at a disproportionate rate, it perhaps ought to be wondered precisely in what way those disciplines are “essential to democratic citizenship”—or rather, what idea of “democracy” is really being preserved here. If so, then, perhaps using what Fitzgerald called “the dark fields of the republic,” the final record of the United States can quite easily be predicted.


Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo of 1913. Yet, why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment has put the point—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it?” The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote in the inaugural issue of Vox, in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To such ears, blaming German professors for the triumph of the Nazi Party sounds paradoxical: it sounds like blaming an increase in rats on a radio station. On that view, the Nazis must have succeeded because the German people were too poorly educated to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries from Germany, France, England, and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much higher percentage of the Germans profiled in such books (82%) had been exposed to university classes than their counterparts in other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question: did the work of German academics help the Nazis? To be sure, a number of German academics, like the philosopher Martin Heidegger and the legal theorist Carl Schmitt, not only joined the party but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of academics who directly supported the Nazis—by that standard, the culpability of elevator repairmen could just as well be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work and such, after all, is about just how terrible intellectuals have it—how many first novels, after all, are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required by the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to get off the ground, they may also carry costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Harpham, former director of the National Humanities Center—sometimes go even further and claim, as Harpham did in his 2011 book The Humanities and the Dream of America, that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make nonsense of history: traditionally, after all, it has been the sciences that were “clearly essential to democratic society,” not “the humanities.” Indeed, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by the mathematician Jacob Bernoulli, but easily illustrated in the words of the journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
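Lewis’s coin-flip observation is easy to verify by simulation. A small sketch: repeat the ten-flip and thousand-flip experiments many times and compare how far the observed proportion of heads strays from the true one-half in each case:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

def mean_heads(flips: int) -> float:
    """Proportion of heads in `flips` fair-coin tosses."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

# Run each experiment 1,000 times; measure the spread of the results.
spread_10 = statistics.pstdev(mean_heads(10) for _ in range(1000))
spread_1000 = statistics.pstdev(mean_heads(1000) for _ in range(1000))

print(f"10 flips:   heads proportion varies by about ±{spread_10:.3f}")
print(f"1000 flips: heads proportion varies by about ±{spread_1000:.3f}")
```

With a thousand flips the spread shrinks by roughly a factor of ten (the theoretical standard error falls as the square root of the sample size: 0.5/√10 ≈ 0.158 versus 0.5/√1000 ≈ 0.016).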

For example, the Law of Large Numbers is why, as the statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when the investor John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos set limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it is better to amass large quantities of results than to depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters, an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—who called the result “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes how Condorcet’s process works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as a “betting line” marks the boundary of opinion between gamblers, Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet thereby proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.
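Condorcet’s claim can be computed exactly for independent voters who are each right with the same probability. A sketch (the 0.6 accuracy figure is an illustrative assumption of mine, not Condorcet’s):

```python
from math import comb

def p_majority_correct(n_voters: int, p: float) -> float:
    """Probability that a simple majority of n (odd) independent voters,
    each correct with probability p, picks the right alternative."""
    majority = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p**k * (1 - p)**(n_voters - k)
               for k in range(majority, n_voters + 1))

# Voters who are individually right only 60% of the time:
for n in (1, 11, 101, 1001):
    print(f"{n:5d} voters: majority correct with "
          f"probability {p_majority_correct(n, 0.6):.4f}")
```

A single such voter is right 60% of the time, but as the electorate grows the majority becomes correct almost surely—which is exactly the Austen-Smith and Banks formulation quoted below: the probability approaches 1 as n goes to infinity.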

Key to the functioning of Condorcet’s machine, in turn, is a large number of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than the (extremely small) pool of voters consisting of the state legislature alone.

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science,” such that they had “an unhealthy faith in the validity of statistical evidence,” to a people holding “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically skeptical 1970s (and beyond) needs to reckon with the increasingly large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and, given the increasingly documented decline in social mobility in American life, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlay them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth and early twentieth centuries constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by such “academic superstar” professors of the humanities as Judith Butler, Maxine Elliott Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.

Size Matters

That men would die was a matter of necessity; which men would die, though, was a matter of circumstance, and Yossarian was willing to be the victim of anything but circumstance.
—Joseph Heller. Catch-22.
I do not pretend to understand the moral universe; the arc is a long one, my eye reaches but little ways; I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. And from what I see I am sure it bends towards justice.
Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.”

 

The Casino at Monte Carlo

Once, wrote the baseball statistician Bill James, there was “a time when Americans” were such “an honest, trusting people” that they actually had “an unhealthy faith in the validity of statistical evidence”–but by the time James wrote in 1985, things had gone so far the other way that “the intellectually lazy [had] adopted the position that so long as something was stated as a statistic it was probably false.” Today, in no small part because of James’ work, that is likely no longer as true as it once was, but nevertheless the news has not spread to many portions of academia: as University of Virginia historian Sophia Rosenfeld remarked in 2012, in many departments it’s still fairly common to hear it asserted—for example—that all “universal notions are actually forms of ideology,” and that “there is no such thing as universal common sense.” Usually such assertions are followed by a claim for their political utility—but in reality widespread ignorance of statistical effects is what allowed Donald Trump to be elected, because although the media spent much of the presidential campaign focused on questions like the size of Donald Trump’s … hands, the size that actually mattered in determining the election was a statistical concept called sample size.

First formulated by the mathematician Jacob Bernoulli in his 1713 book Ars Conjectandi, sample size is the idea that “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” Admittedly, it might not seem like much of an observation: as Bernoulli himself acknowledged, even “the most stupid person, all by himself and without any preliminary instruction,” knows that “the more such observations are taken into account, the less is the danger of straying from the goal.” But Bernoulli’s remark is the very basis of science: as an article in the journal Nature put the point in 2013, “a study with low statistical power”—that is, one with few observations—“has a reduced chance of detecting a true effect.” Sample sizes need to be large enough to eliminate chance as a possible explanation.

If that isn’t understood, it’s possible to go seriously astray: consider an example drawn from the work of the Israeli psychologists Amos Tversky (MacArthur “genius” grant winner) and Daniel Kahneman (Nobel Prize winner)—a study “of two toys infants will prefer.” Let’s say that in the course of research our investigator finds that, of “the first five infants studied, four have shown a preference for the same toy.” To most psychologists, the two say, this would be enough for the researcher to conclude that she’s on to something—but in fact, they write, a “quick computation” shows that “the probability of a result as extreme as the one obtained” being due simply to chance “is as high as 3/8.” The scientist might be inclined to think, in other words, that she has learned something—but in fact her result has a 37.5 percent chance of being due to nothing at all.
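The “quick computation” Tversky and Kahneman have in mind is a two-sided binomial tail: if each infant actually picks at random, what is the chance that at least four of five land on the same toy, whichever toy that is? A sketch:

```python
from math import comb

def p_as_extreme(n: int, k: int) -> float:
    """Two-sided probability that at least k of n infants pick the
    same toy when each actually chooses at random (p = 0.5)."""
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    return 2 * one_tail   # either toy could be the favored one

print(p_as_extreme(5, 4))   # 0.375, i.e. 3/8
```

Six of the thirty-two equally likely outcomes put four or more infants on a given toy, and doubling for the two toys gives 12/32 = 3/8 exactly.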

Yet when we turn from science to politics, what we find is that an American presidential election is like a study that draws grand conclusions from five babies. Instead of being one big sample—as a direct popular national election would be—presidential elections are broken up into fifty state-level elections: the Electoral College system. What that means is that American presidential elections maximize the role of chance, not minimize it.

The laws of statistics, in other words, predict that chance will play a large role in presidential elections—and as it happens, Tim Meko, Denise Lu and Lazaro Gamio reported for The Washington Post three days after the election that “Trump won the presidency with razor-thin margins in swing states.” “This election was effectively decided,” the trio went on to say, “by 107,000 people”—in an election in which more than 120 million votes were cast, that means that election was decided by less than a tenth of one percent of the total votes. Trump won Pennsylvania by less than 70,000 votes of nearly 6 million, Wisconsin by less than 30,000 of just less than three million, and finally Michigan by less than 11,000 out of 4.5 million: the first two by just more than one percent of the total vote each—and Michigan by a whopping .2 percent! Just to give you an idea of how insignificant these numbers are by comparison with the total vote cast, according to the Michigan Department of Transportation it’s possible that a thousand people in the five largest counties were involved in car crashes—which isn’t even to mention people who just decided to stay home because they couldn’t find a babysitter.
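The back-of-the-envelope arithmetic here is easy to verify; the figures below are the rounded totals quoted in the paragraph above, not official canvass numbers.

```python
# Rounded figures from the Washington Post's reporting, as quoted above.
deciding_votes = 107_000
total_votes = 120_000_000

share = deciding_votes / total_votes
print(f"{share:.4%}")  # under a tenth of one percent of all votes cast

# Per-state margins as fractions of each state's (approximate) vote:
margins = {
    "Pennsylvania": (70_000, 6_000_000),
    "Wisconsin": (30_000, 3_000_000),
    "Michigan": (11_000, 4_500_000),
}
for state, (margin, cast) in margins.items():
    print(state, f"{margin / cast:.2%}")
```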

Trump owes his election, in short, to a system that is vulnerable to chance because it is constructed to turn a large sample (the total number of American voters) into small samples (the fifty states). Science tells us that small sample sizes increase the risk that random chance plays a role; American presidential elections use a smaller sample size than they could; and, like several other presidential elections, the 2016 election did not go as predicted. Donald Trump could, in other words, be called “His Accidency” with even greater justice than John Tyler—the first vice-president to be promoted due to the death of his boss in office—ever was. Yet why isn’t that point being made more publicly?

According to John Cassidy of The New Yorker, it’s because Americans haven’t “been schooled in how to think in probabilistic terms.” But just why that’s true—and he’s essentially making the same point Bill James did in 1985, though more delicately—is, I think, highly damaging to many of Clinton’s biggest fans: the answer is that they’ve made it that way. It’s the disciplines where many of Clinton’s most vocal supporters make their home, in other words, that are most directly opposed to the type of probabilistic thinking that’s required to see the flaws in the Electoral College system.

As Stanford literary scholar Franco Moretti once observed, the “United States is the country of close reading”: the disciplines dealing with matters of politics, history, and the law within the American system have, in fact, more or less been explicitly constructed to prevent importing knowledge of the laws of chance into them. Law schools, for example, use what’s called the “case method,” in which a single case is used to stand in for an entire body of law: a point indicated by the first textbook to use this method, Christopher Langdell’s A Selection of Cases on the Law of Contracts. Other disciplines, such as history, are similar: as Emory University’s Mark Bauerlein has written, many such disciplines depend for their very livelihood upon “affirming that an incisive reading of a single text or event is sufficient to illustrate a theoretical or historical generality.” In other words, it’s the very basis of the humanities to reject the concept of sample size.

What’s particularly disturbing about this point is that, as Joe Pinsker documented in The Atlantic last year, the humanities attract a wealthier student pool than other disciplines—which is to say that the humanities tend to be populated by students and faculty with a direct interest in maintaining obscurity around the interaction between the laws of chance and the Electoral College. That is not to say that there must be a connection between the architecture of presidential elections and the fact that—as Geoffrey Harpham, former president and director of the National Humanities Center, has observed—“the modern concept of the humanities” (that is, as a set of disciplines distinct from the sciences) “is truly native only to the United States, where the term acquired a meaning and a peculiar cultural force that it does not have elsewhere.” But it does perhaps explain just why many in the national media have been silent regarding that design in the month after the election.

Still, as many in the humanities like to say, it is possible to think that the current American university and political structure is “socially constructed”—or, in other words, could be constructed differently. The American division between the sciences and the humanities is not the only way to organize knowledge: as the editors of the massive volumes of The Literary and Cultural Reception of Darwin in Europe pointed out in 2014, “one has to bear in mind that the opposition of natural sciences … and humanities … does not apply to the nineteenth century.” If the opposition we today find so omnipresent did not exist then, it might not be necessary now. Hence, if the choice before the American people is between getting a real say in the affairs of government (and there’s very good reason to think they currently don’t have one), or watching a bunch of rich yahoos spend their early twenties getting drunk, reading The Great Gatsby, and talking about their terrible childhoods … well, I know which side I’m on. But perhaps more significantly, although I would not expect it to happen tomorrow, still, given the laws of sample size and the prospect of eternity, I know how I’d bet.

Or, as another sharp operator who’d read his Bernoulli once put the point:

The arc of the moral universe is long, but it bends towards justice.

 

Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
—Joseph Heller. Catch-22. 1961.

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore through (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the very complete documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as famed sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the Civil War through 1958,” James observes, no team ever managed such a thing. Why? Well, it does seem readily apparent that scoring more runs than one’s opponent is a key component of winning baseball games, and winning baseball games is a key component of winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to score more runs than their opponents. Yet on the other hand, it also seems possible to imagine a particular sort of baseball team winning a lot of one-run games, but occasionally giving up blow-out losses—and yet, as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is seemingly made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly-hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were really good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball; its cavernous power alleys were where home runs went to die, while its enormous foul ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “deadball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. The 1964 Cardinals, finally, first had Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him), and second played in the first year after major league baseball’s Rules Committee redefined the strike zone to be just slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs: 967 to 556, a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.

That’s also, as it happens, just what the Law of Large Numbers does: first noted by mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle that is the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations the more likely it is that all those bets will converge toward the expected result. In other words, one coin toss might be heads or might be tails—but the more times the coin is tossed, the more closely the proportion of heads will approach one-half.
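Bernoulli’s law is easy to see in simulation: as the number of tosses of a fair coin grows, the proportion of heads settles ever closer to one-half. (A minimal sketch; the seed and the sample sizes are arbitrary choices, not anything from Bernoulli.)

```python
import random

random.seed(1713)  # arbitrary seed; 1713 is Ars Conjectandi's publication year

for n in (10, 1_000, 100_000):
    # Toss a fair coin n times and report the observed fraction of heads.
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)  # the fraction tightens around 0.5 as n grows
```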

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason that Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but rather through winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” As bizarre as that sort of result is, however, according to some that’s just what makes the Electoral College worth keeping.

Hively’s 1996 Discover story reported how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff argued that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff noticed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the constitutional convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet, recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that the task of baseball’s playoff system is to identify the best team in baseball, and hence it ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is that it should eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (Cf. “Striking Out” https://djlane.wordpress.com/2016/07/31/striking-out/.). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that one team was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. It may even be argued that baseball, by increasingly relying on a playoff system instead of the regular season standings, is increasing, not decreasing, the role of chance in the outcome of its championship process: whereas prior to 1969, the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams that had a run-differential ratio of less than 1.100 : 1, all of the teams, except the ones I have already mentioned, are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
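The underlying point, that a seven-game series is far too short to reliably identify the better team, can be illustrated with a direct binomial calculation. The 55 percent single-game edge below is an assumed figure chosen for illustration, not a number from Mlodinow’s book.

```python
from math import comb

def series_win_prob(p, wins_needed=4):
    """Probability that the better team (per-game win probability p)
    takes a best-of-(2 * wins_needed - 1) series."""
    # The better team clinches on game (wins_needed + k) after the
    # opponent has won exactly k games, for k = 0 .. wins_needed - 1.
    return sum(
        comb(wins_needed - 1 + k, k) * p**wins_needed * (1 - p)**k
        for k in range(wins_needed)
    )

# A team that wins 55% of individual games (a sizable edge over a full
# season) takes a best-of-7 series only about 61% of the time.
print(round(series_win_prob(0.55), 3))  # 0.608
```

In other words, a best-of-seven series lets the weaker team walk away with the championship almost two times in five, which is exactly the sense in which the playoffs are a poor instrument for measuring quality.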

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In the light of Gould’s argument about the panda’s thumb, which suggests that a lack of “fit” between the optimal solution (the human thumb) to a problem and the actual solution (the panda’s thumb) implies the presence of “history,” that lack of fit would then intimate that the Electoral College is either the result of a lack of understanding of the mathematics of chance with regards to elections—or that the American system for electing presidents was not designed for the purpose that it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these, some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician, the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reaffirmed one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, before even meeting Jefferson when the Virginian came to France to be the American minister, the marquis had already been for years a close friend of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. Although not all of the convention attendees, in short, may have been aware of the relationship between probability and elections, at least some were—and arguably they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at least, then, and quite possibly Madison as well, could not but have been aware of the possible mathematical dangers an Electoral College could create: they must have known that the least-chancy way of selecting a leader—that is, the product of the design of an ideal engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought that a direct popular vote was the best way to elect a president.
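Condorcet’s theorem can be reproduced with the same binomial machinery. If each voter is independently correct with probability 0.51, the probability that a majority of n voters chooses correctly climbs steadily toward certainty as n grows. (A sketch only: independence and a uniform 51 percent competence are the theorem’s idealizing assumptions, and the particular electorate sizes below are arbitrary.)

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, chooses the correct outcome.
    (n is assumed odd, so there are no ties.)"""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A lone voter is right 51% of the time; larger electorates do better.
for n in (1, 101, 1_001):
    print(n, round(majority_correct(n, 0.51), 3))
```

With a thousand and one such voters the majority is already right roughly three times out of four, which is the mathematical sense in which, for Condorcet, a direct popular vote beats any smaller intermediary body.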

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents, for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in his The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, Wilson’s proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of the principle of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement that is virtually identical to Condorcet’s mathematically-influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote was how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23, and possibly as long as 269, games—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.

Noble Lie

With a crew and good captain well seasoned,
They left fully loaded for Cleveland.
—Gordon Lightfoot. “The Wreck of the Edmund Fitzgerald.” 1976.

The comedian Bill Maher began the “panel” part of his show Real Time the other day—the last episode before the election—by noting that virtually every political expert had dismissed Donald Trump’s candidacy at every stage of the past year’s campaign. When Trump announced he was running, Maher observed, the pundits said “oh, he’s just saying that … because he just wants to promote his brand.” They said Trump wouldn’t win any voters, Maher noted—“then he won votes.” And then, Maher went on, they said he wouldn’t win any primaries—“then he won primaries.” And so on, until Trump became the Republican nominee. So much we know, but what was of interest about the show was the response of one of Maher’s guests: David Frum, a Canadian who despite his immigrant origins became a speechwriter for George W. Bush, invented the phrase “axis of evil,” and has since joined the staff of the supposedly liberal magazine, The Atlantic. The interest of Frum’s response was not only how marvelously inane it was—but also how it had already been decisively refuted only hours earlier, by men playing a boy’s game on the Lake Erie shore.

Maybe I’m being cruel, however: like most television shows, Real Time with Bill Maher is shot before it is aired, and this episode was released last Friday. Frum, then, may not have been aware, when he said what he said, that the Chicago Cubs had won the World Series on Wednesday—and if he is like most people, Frum is furthermore unaware of the significance of that event, which goes (as I will demonstrate) far beyond matters of baseball. Still, surely Frum must have been aware of how ridiculous what he said was, given that the conversation began with Maher reciting the failures of the pundit class—and Frum admitted to belonging to that class. “I was one of those pundits that you made fun of,” Frum confessed to Maher—yet despite that admission, Frum went on to make a breathtakingly pro-pundit argument.

Trump’s candidacy, Frum said, demonstrated the importance of the gatekeepers of the public interest—the editors of the national newspapers, for instance, or the anchors of the network news shows, or the mandarins of the political parties. Retailing an argument similar to one made by, among others, Salon’s Bob Cesca—who contended in early October that “social media is the trough from which Trump feeds”—Frum proceeded to make the case that the Trump phenomenon was only possible once apps like Facebook and Twitter enabled presidential candidates to bypass the traditional centers of power. To Frum, in other words, the proper response to the complete failure of the establishment (to defeat Trump) was to prop up the establishment (so as to defeat future Trumps). To protect against the failure of experts, Frum earnestly argued—with no apparent sense of irony—that we ought to give more power to experts.

There is, I admit, a certain schadenfreude in witnessing a veteran of the Bush Administration tout the importance of experts, given that George W.’s regime was notable for, among other things, “systematically chang[ing] and suppress[ing] … scientific reports about global warming” (according to the British Broadcasting Corporation)—to say nothing of how Bush cadres torpedoed the advice of the professionals of the CIA vis-à-vis the weapons-buying habits of a certain Middle Eastern tyrant. But the larger issue is that the very importance of “expert” knowledge has been undergoing a deep interrogation for decades now—and the victory of the Chicago Cubs in this year’s World Series has brought much of that critique to the mainstream.

What I mean can be demonstrated by a story told by the physicist Freeman Dyson—a man who never won a Nobel Prize, nor even received a doctorate, but nevertheless was awarded a place at Princeton’s Institute for Advanced Study at the ripe age of thirty by none other than Robert Oppenheimer (the man in charge of the Manhattan Project) himself. Although Dyson has had a lot to say during his long life—and a lot worth listening to—on a wide range of subjects, from interstellar travel to Chinese domestic politics, of interest to me in connection with Frum’s remarks on Donald Trump is an article Dyson published in The New York Review of Books in 2011, about a man who did win the Nobel Prize: the Israeli psychologist Daniel Kahneman, who won the prize for economics in 2002. In that article, Dyson told a story about himself: specifically, about what he did during World War II—an experience, it turns out, that leads by a circuitous path over the course of seven decades to the epic clash resolved by the shores of Lake Erie in the wee hours of 3 November.

In that article, entitled “How to Dispel Your Illusions,” Dyson tells the story of being a young statistician with the Royal Air Force’s Bomber Command in the spring of 1944—a force that suffered, according to the United Kingdom’s Bomber Command Museum, “a loss rate comparable only to the worst slaughter of the First World War trenches.” To combat this horror, Dyson was charged with discovering the common denominator between the bomber crews that survived until the end of their thirty-mission tour of duty (about 25% of all air crews). Since they were succeeding where three out of four of their comrades were failing, Dyson’s superiors assumed that those successful crews were doing something that their less-successful colleagues (who were mostly so much less successful that they were no longer among the living) were not.

Bomber Command, that is, had a theory about why some survived and some died: “As [an air crew] became more skillful and more closely bonded,” Dyson writes that everyone at Bomber Command thought, “their chances of survival would improve.” So Dyson, in order to discover what that something was, plunged in among the data of all the bombing missions the United Kingdom had run over Germany since the beginning of the war. If he could find it, maybe it could be taught to the others—and the war brought that much closer to an end. But despite all his searching, Dyson never found that magic ingredient.

It wasn’t that Dyson didn’t look hard enough for it: according to Dyson, he “did a careful analysis of the correlation between the experience of the crews and their loss rates, subdividing the data into many small packages so as to eliminate effects of weather and geography.” Yet, no matter how many different ways he looked at the data, he could not find evidence that the air crews that survived were any different than the ones shot down over Berlin or lost in the North Sea: “There was no effect of experience,” Dyson’s work found, “on loss rate.” Who lived and who died while attempting to burn Dresden or blow up Hamburg was not a matter of experience: “whether a crew lived or died,” Dyson writes, “was purely a matter of chance.” The surviving crews possessed no magical ingredient. They couldn’t—perhaps because there wasn’t one.
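Dyson’s null result is easy to reproduce in miniature. The following Python sketch uses invented data (nothing here comes from Bomber Command’s actual records): each crew’s fate is drawn by pure chance, with a flat loss probability regardless of experience, which is the conclusion Dyson’s analysis supported. The measured correlation between experience and loss then hovers near zero.

```python
import random

random.seed(0)

# Invented data: 5,000 crews, each with a number of missions flown
# ("experience") and a fate drawn purely by chance -- a flat 5% loss
# probability, independent of experience.
crews = [(random.randint(1, 30), 1 if random.random() < 0.05 else 0)
         for _ in range(5000)]

def pearson(pairs):
    """Pearson correlation coefficient between paired observations."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    var_x = sum((x - mean_x) ** 2 for x, _ in pairs)
    var_y = sum((y - mean_y) ** 2 for _, y in pairs)
    return cov / (var_x * var_y) ** 0.5

r = pearson(crews)  # hovers near zero: experience tells you nothing about survival
```

If survival really is chance, no amount of subdividing the data into “small packages” will conjure a signal out of it, which is precisely what Dyson found.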

Still, despite the conclusiveness of Dyson’s results, his studies had no effect on the operations of Bomber Command: “The crews continued to die, experienced and inexperienced alike, until Germany was overrun and the war finally ended.” While Dyson’s research suggested that dying in the stratosphere over Lübeck had no relation to skill, no one at the highest levels wanted to admit that the survivors weren’t experts—that they were instead just lucky. Perhaps, had the war continued, Dyson’s argument might eventually have won out—but the war ended, fortunately (or not) for the air crews of the Royal Air Force, before Bomber Command had to admit he was right.

All of that, of course, might appear to have little to do with the Chicago Cubs—until it’s recognized that the end of their century-long championship drought had everything to do with the eventual success of Dyson’s argument. Unlike Bomber Command, the Cubs have been at the forefront of what The Ringer’s Rany Jazayerli calls baseball’s “Great Analytics War”—and unlike the contest between Dyson and his superiors, that war has had a definite conclusion. The battle between what Jazayerli calls an “objective, data-driven view” and an older vision of baseball “ended at 48 minutes after midnight on November 3”—when the Cubs (led by a general manager who, like Dyson, trusted to statistical analysis) recorded the final out of the 2016 season.

That general manager is Theo Epstein—a man who was converted to Dyson’s “faith” at an early age. According to ESPN, Epstein, “when he was 12 … got his first Bill James historical abstract”—and as many now recognize, James pioneered applying the same basic approach Dyson used to think about how to bomb Frankfurt to winning baseball games. After graduating from the University of Kansas in obscurity, James took a job as a night security guard at the Stokely-Van Camp pork and beans cannery in Kansas City—and while isolated in what one imagines were the sultry (or wintry) Kansas City evenings of the 1970s, James had plenty of time to think about what interested him. That turned out to be somewhat like the problem Dyson had faced a generation earlier: where Dyson was concerned with how to win World War II, James was interested in what appeared to be the much less portentous question of how to win the American League. James thereby invented an entire field—what’s now known as sabermetrics, or the statistical study of baseball—and the tools he invented have since become the keys to baseball’s kingdom. After all, Epstein—employed by a team owner who hired James as a consultant in 2003—not only used James’ work to end the Cubs’ errand in baseball’s wilderness but also, as all the world knows, constructed the Boston Red Sox championship teams of 2004 and 2007.

What James had done, of course, was show how the supposed baseball “experts”—the ex-players and cronies that dominated front offices at the time—in fact knew very little about the game: they did not know, for example, that the most valuable single thing a batter can do is get on base, or that stolen bases are, for the most part, a waste of time. (The risk of making an out, as David Smith shows in “Maury Wills and the Value of a Stolen Base,” is more significant than the benefit of gaining a base.) James’ insights had not merely furnished the weaponry used by Epstein; during the early 2000s another baseball team, the Oakland A’s, and their general manager Billy Beane had used James-inspired work to reach the playoffs four consecutive years (from 2000 to 2003) and win twenty consecutive games in 2002—a run famously chronicled by journalist Michael Lewis’ book, Moneyball: The Art of Winning an Unfair Game, which later became a Hollywood movie starring Brad Pitt. What isn’t much known, however, is that Lewis has noted the intellectual connection between this work in the sport of baseball—and the work Dyson thought of as similar to his own work as a statistician for Bomber Command: the work of the psychologist Kahneman and his now-deceased colleague, Amos Tversky.
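Smith’s point about the stolen base can be made concrete with a back-of-the-envelope run-expectancy calculation. The values below are illustrative league-average figures, not Smith’s own numbers, and they shift from era to era; a sketch in Python:

```python
# Illustrative run-expectancy values (expected runs scored in the remainder
# of an inning, from a rough league-average table; actual values vary by era):
RE_FIRST_NONE_OUT  = 0.86   # runner on first, nobody out
RE_SECOND_NONE_OUT = 1.10   # runner on second, nobody out
RE_EMPTY_ONE_OUT   = 0.27   # bases empty, one out (i.e., caught stealing)

def breakeven_success_rate(before, after_success, after_failure):
    """Success rate at which a steal attempt neither gains nor loses expected runs."""
    return (before - after_failure) / (after_success - after_failure)

p = breakeven_success_rate(RE_FIRST_NONE_OUT, RE_SECOND_NONE_OUT, RE_EMPTY_ONE_OUT)
# p comes out around 0.71: a runner must succeed roughly seven times in ten
# merely to break even, which is why most attempts are a waste.
```

The asymmetry is the whole argument: the out lost by a failed steal costs far more than the base gained by a successful one.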

The connection between James, Kahneman, and Tversky—an excellent name for a law firm—was first noticed, Lewis says, in a review of his Moneyball book by University of Chicago professors Cass Sunstein, of the law school, and Richard Thaler, an economist. When Lewis described the failures of the “old baseball men,” and conversely Beane’s success, the two professors observed that “Lewis is actually speaking here of a central finding in cognitive psychology”: the finding upon which Kahneman and Tversky based their careers. Whereas Billy Beane’s enemies on other baseball teams tended “to rely on simple rules of thumb, on traditions, on habits, on what other experts seem to believe,” Sunstein and Thaler pointed out that Beane relied on the same principle that Dyson found when examining the relative success of bomber pilots: “Statistics and simple arithmetic tell us more about ourselves than expert intuition.” While Bomber Command in other words relied on the word of their “expert” pilots, who perhaps might have said they survived a run over a ball-bearing plant because of some maneuver or other, baseball front offices relied for decades on ex-players who thought they had won some long-ago game on the basis of some clever piece of baserunning. Tversky and Kahneman’s work, however—like that of Beane and Dyson—suggested that much of what passes as “expert” judgment can be, for decades if not centuries, an edifice erected on sand.

That work has, as Lewis found after investigating the point when his attention was drawn to it by Sunstein and Thaler’s article, been replicated in several fields: in the work of the physician Atul Gawande, for instance, who, Lewis says, “has shown the dangers of doctors who place too much faith in their intuition.” The University of California, Berkeley finance professor Terry Odean “examined 10,000 individual brokerage accounts to see if stocks the brokers bought outperformed stocks they sold and found that the reverse was true.” And another doctor, Toronto’s Donald Redelmeier—who studied under Tversky—found “that an applicant was less likely to be admitted to medical school if he was interviewed on a rainy day.” In all of these cases (and this is not even to bring up the subject of, say, the financial crisis of 2007-08, a crisis arguably brought on precisely by the advice of “experts”), investigation has shown that “expert” opinion may not be what it is cracked up to be. It may in fact actually be worse than the judgment of laypeople.

If so, then David Frum’s “expert” suggestion about what to do to avoid a replay of the Trump candidacy—reinforce the rule of experts, a proposition that itself makes several questionable assumptions about the nature of the events of the past two years, if not decades—stops appearing to be a reasonable proposition. It begins, in fact, to appear rather more sinister: an attempt by those in Frum’s position in life—what we might call Eastern, Ivy League types—to will themselves into believing that Trump’s candidacy is fueled by a redneck resistance to “reason,” along with good old-fashioned American racism and sexism. But what the Cubs’ victory might suggest is that what could actually be powering Trump is the recognition by the American people that many of the “cures” dispensed by the American political class are nothing more than snake oil proffered by cynical tools like David Frum. That snake oil doubles down on exactly the same “expert” policies (like freeing capital to wander the world, while increasingly shackling labor) that, debatably, led to the rise of Trump in the first place—a message that, presumably, must be welcome to Frum’s superiors at whatever the contemporary equivalent of Bomber Command is.

Still, despite the fact that the David Frums of the world continue to peddle their nonsense in polite society, even this descendant of South Side White Sox fans must allow that Theo Epstein’s victory has given cause for hope down here at the street level of a Midwestern city that has, for more years than the Cubs have been in existence, been the plaything of Eastern-elite labor and trade policies. It’s a hope that, it seems, now has a Ground Zero.

You can see it at the intersection of Clark and Addison.

Our Game

Pick-up truck with Confederate battle flag and bumper stickers.

 

[Baseball] is our game: the American game … [it] belongs as much to our institutions, fits into them as significantly, as our constitutions, laws: is just as important in the sum total of our historic life.
—Walt Whitman. April, 1889.

The 2015 Chicago Cubs are now a memory, yet while they lived nearly all of Chicago was enthralled—not least because of the supposed prophecy of a movie starring a noted Canadian. For this White Sox fan, the enterprise reeked of the phony nostalgia that has enveloped baseball, of the sort sportswriters like to invoke whenever they, for instance, quote Walt Whitman’s remark that baseball “is our game: the American game.” Yet even while, to their fans, this year’s Cubs were a time machine to what many envisioned as a simpler, and perhaps better, America—much as the truck pictured may be such a kind of DeLorean to its driver—in point of fact the team’s success was built upon precisely the kind of hatred of tradition that was the reason Whitman thought baseball was “America’s game”: baseball, Whitman said, had “the snap, go, fling of the American character.” It’s for that reason, perhaps, that the 2015 Chicago Cubs may yet prove a watershed edition of the Lovable Losers: they might mark not only the return of the Cubs to the elite of the National League, but also the resurgence of a type of thinking that was of the vanguard in Whitman’s time and—like World Series appearances for the North Siders—of rare vintage since. It’s a resurgence that may, in a year of Donald Trump, prove far more important than the victories of baseball teams, no matter how lovable.

That, to say the least, is an ambitious thesis: the rise of the Cubs signifies little but that their new owners possess a lot of money, some might reply. But the Cubs’ return to importance was undoubtedly caused by the team’s adherence, led by former Boston general manager Theo Epstein, to the principles of what’s been called the “analytical revolution.” It’s a distinction that was made clear during the divisional series against the hated St. Louis Cardinals: whereas, for example, St. Louis manager Mike Matheny asserted, regarding how baseball managers ought to handle their pitching staff, that managers “first and foremost have to trust our gut,” the Cubs’ Joe Maddon (as I wrote about in a previous post) spent his entire season doing such things as batting his pitcher eighth, on the grounds that statistical analysis showed that by doing so his team gained a nearly infinitesimal edge. (Cf. “Why Joe Maddon bats the pitcher eighth,” ESPN.com)

Since the Cubs hired Epstein, few franchises in baseball have been as devoted to what is known as the “sabermetric” approach. When the Cubs hired him, Epstein was well-known for “using statistical evidence”—as the New Yorker’s Ben McGrath put it a year before Epstein’s previous team, the Boston Red Sox, overcame their own near-century of futility in 2004—rather than relying upon what Epstein’s hero, the storied Bill James, has called “baseball’s Kilimanjaro of repeated legend and legerdemain”—the sort embodied by Matheny’s apparent reliance on seat-of-the-pants judgment.

Yet, while Bill James’ sort of thinking may be astonishingly new to baseball’s old guard, it would have been old hat to Whitman, who had the example of another Bill James directly in front of him. To follow the sabermetric approach after all requires believing (as the American philosopher William James did according to the Internet Encyclopedia of Philosophy), “that every event is caused and that the world as a whole is rationally intelligible”—an approach that not only would Whitman have understood, but applauded.

Such at least was the argument of the late American philosopher Richard Rorty, whose lifework was devoted to preserving the legacy of late nineteenth and early twentieth century writers like Whitman and James. To Rorty, both of those earlier men subscribed to a kind of belief in America rarely seen today: both implicitly believed in what James’ follower John Dewey would call “the philosophy of democracy,” in which “both pragmatism and America are expressions of a hopeful, melioristic, experimental frame of mind.” It’s in that sense, Rorty argued, that William James’ famous assertion that “the true is only the expedient in our way of thinking” ought to be understood: what James meant by lines like this was that what we call “truth” ought to be tested against reality in the same way that scientists test their ideas about the world via experiments instead of relying upon “guts.”

Such a frame of mind, however, has long been out of fashion in academia, Rorty often noted: as early as the 1940s, Robert Hutchins and Mortimer Adler of the University of Chicago were reviling the philosophy of Dewey and James as “vulgar, ‘relativistic,’ and self-refuting.” To say, as James did say, “that truth is what works” was—according to thinkers like Hutchins and Adler—“to reduce the quest for truth to the quest for power.” To put it another way, Hutchins and Adler provided the Ur-example of what’s become known as Godwin’s Law: the idea that, sooner or later, every debater will eventually claim that the opponent’s position logically ends up at Nazism.

Such thinking is by no means extinct in academia: indeed, in many ways Rorty’s work at the end of his life was involved in demonstrating how the sorts of arguments Hutchins and Adler enlisted for their conservative politics had become the very lifeblood of those supposedly opposed to the conservative position. That’s why, to those whom Rorty called the “Unpatriotic Academy,” the above picture—taken at a gas station just over the Ohio River in southern Indiana—will be confirmation of the view of the United States held by those who “find pride in American citizenship impossible,” and “associate American patriotism with an endorsement of atrocities”: to such people, America and science are more or less the same thing as the kind of nearly-explicit racism demonstrated in the photograph of the truck.

The problem with those sorts of arguments, Rorty wanted to claim in return, is that they are all too willing to take the views of some conservative Americans at face value: the view, for instance, that “America is a Christian country.” That sentence is remarkable precisely because it is not taken from the rantings of some Southern fundamentalist preacher or Republican candidate, but rather is the opening sentence of an article by the novelist and essayist Marilynne Robinson in, of all places, the New York Review of Books. That it could appear so, I think Rorty would have said, shows just how much today’s academia really shares the views of its supposed opponents.

Yet, as Rorty was always arguing, the ideas held by the pragmatists are not so easily reduced to the mere American jingoism that the many critics of Dewey and James and the rest would like to portray—nor is “America” so easily conflated with simple racism. That is because the arguments of the American pragmatists were (arguably) simply a restatement of a set of ideas held by a man who lived long before North America was even added to the world’s geography: a man known to history as Ibn Khaldun, who was born in Tunis on Africa’s Mediterranean coastline in the year 1332 of the Western calendar.

Khaldun’s views of history, as set out in his book the Muqaddimah (“Introduction,” often known by its Greek title, Prolegomena), can be seen as forerunners of the ideas of John Dewey and William James, as well as the ideas of Bill James and the front office of the Chicago Cubs. According to a short one-page biography of the Arab thinker by one “Dr. A. Zahoor,” for example, Khaldun believed that writing history required such things as “relating events to each other through cause and effect”—much as both men named William James believe[d] that baseball events are not inexplicable. As Khaldun himself wrote:

The rule for distinguishing what is true from what is false in history is based on its possibility or impossibility: That is to say, we must examine human society and discriminate between the characteristics which are essential and inherent in its nature and those which are accidental and need not be taken into account, recognizing further those which cannot possibly belong to it. If we do this, we have a rule for separating historical truth from error by means of demonstrative methods that admits of no doubt.

This statement is, I think, hardly distinguishable from what the pragmatists or the sabermetricians are after: the discovery of what Khaldun calls “those phenomena [that] were not the outcome of chance, but were controlled by laws of their own.” In just the same way that Bill James and his followers wish to discover things like when, if ever, it is permissible or even advisable to attempt to steal a base, or lay down a bunt (both, he says, are more often inadvisable strategies, precisely on the grounds that employing them leaves too much to chance), Khaldun wishes to discover ways to identify ideal strategies in a wider realm.

If Dewey and James were right to claim that such ideas ought to be one and the same as the idea of “America,” then we could say that Ibn Khaldun, if not the first, was certainly one of the first Americans—that is, one of the first to believe in those ideas we would later come to call “America.” That Khaldun was entirely ignorant of such places as southern Indiana should, by these lights, no more count against his Americanness than Donald Trump’s ignorance of more than geography ought to count against his. Indeed, conducted according to this scale, it should be no contest as to which—between Donald Trump, Marilynne Robinson, and Ibn Khaldun—is the more likely to be a baseball fan. Nor, need it be added, which the better American.

The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of sports at all, it is either something extremely obscure and obscenely technical that does not involve a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has just such a picture in mind—probably you know at least a few such people, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both of those men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by the American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents), the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as a purely amateur endeavor—Bill James spent decades putting out his baseball work without any remuneration, until finally being hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by Charles Lyell, the Englishman whose Principles of Geology largely founded that science. There Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things are likely to be like those already around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which was said to signify the player’s own confidence in his ability.) Sabermetricians disregard such “methods” of analysis in favor of examination of the acts performed by the player as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or for that matter, why would it lead in a political direction at all? The answer to the latter question is, I suspect, inevitable: sabermetrics, after all, is a discipline well-suited for the purpose of discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems inevitable then that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations towards some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led towards positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective certain key features of the American system will nearly instantly jump out.

The first of those features will be that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—the ratio of a team’s runs scored and runs allowed versus the team’s actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would suggest, then that suggests that that team is either tremendously unlucky or there is some hidden factor preventing success. Maybe, for instance, that team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters the team has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
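James’s rule of thumb is simple enough to state in a few lines of code. The sketch below, in Python, uses the classic exponent of 2 (James and later researchers have proposed refinements, such as 1.83) and invented run totals for illustration:

```python
def pythagorean_expectation(runs_scored, runs_allowed, exponent=2.0):
    """Estimated 'true' winning percentage from runs scored and allowed."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

def wins_above_expectation(wins, losses, runs_scored, runs_allowed):
    """Games won beyond (or, if negative, short of) the Pythagorean estimate."""
    expected = (wins + losses) * pythagorean_expectation(runs_scored, runs_allowed)
    return wins - expected

# A team that outscores its opponents 800-700 "should" win about 56.6% of
# its games; if it has gone only 81-81 over 162 games, it is roughly eleven
# wins short of its estimate -- the kind of gap that compels investigation.
luck = wins_above_expectation(81, 81, 800, 700)
```

Nothing here is specific to baseball: any won-lost enterprise with a scored-versus-allowed ledger can be audited the same way.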

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, that would set off alerts for a sabermetrician, no such alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many of the features of the U.S. constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren said was “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss if it were to ignore a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of each state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” or the like in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.
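Cohn’s observation is, at bottom, simple arithmetic. Using rough 2010-era census populations (the figures below are approximations, included only for illustration), the disparity per voter can be computed directly:

```python
# Rough 2010-era census populations, in millions (approximations for illustration).
POPULATIONS = {
    "Wyoming": 0.564,
    "North Dakota": 0.673,
    "California": 37.254,
    "New York": 19.378,
}

def residents_per_senator(pop_millions, senators=2):
    """Residents represented by each of a state's two senators."""
    return pop_millions * 1e6 / senators

# How many times more Senate 'weight' a Wyoming voter has than a Californian:
ratio = (residents_per_senator(POPULATIONS["California"])
         / residents_per_senator(POPULATIONS["Wyoming"]))
```

By this arithmetic a Wyoming voter carries roughly sixty-some times the Senate weight of a Californian, the sort of runs-to-wins disparity that would send a sabermetrician hunting for the hidden factor.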

Some will defend that disconnect, as Justice Harlan did over fifty years ago, in terms familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet, attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as the Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that, if you see any American revolutionaries around you right now, he or she is likely wearing not a poncho or a black turtleneck, but an Oakland A’s hat.