A Part of the Main

We may be confident that the Great American Poem will not be written, no matter what genius attempts it, until democracy, the idea of our day and nation and race, has agonized and conquered through centuries, and made its work secure.

But the Great American Novel—the picture of the ordinary emotions and manners of American existence … will, we suppose, be possible earlier.
—John William De Forest. “The Great American Novel.” The Nation 9 January 1868.

Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.” 1853.

 

“It was,” begins Chapter Seven of The Great Gatsby, “when curiosity about Gatsby was at its highest that the lights in his house failed to go on one Saturday night—and, as obscurely as it began, his career as Trimalchio was over.” Trimalchio is a character in the ancient Roman novel The Satyricon who, like Gatsby, throws enormous and extravagant parties; there’s a lot that could be said about the two novels compared, and some of it has been said by scholars. The problem with comparing the two novels, however, is that, unlike Gatsby, The Satyricon is “unfinished”: all we have today are the 141 not-always-contiguous chapters collated by 17th-century editors from two medieval manuscript copies, which are clearly not the entire book. Hence, comparing The Satyricon to Gatsby, or to any other novel, is always handicapped by the fact that, as the novel’s Wikipedia page puts it, “its true length cannot be known.” Yet, is it really true that estimating a message’s total length based only on a part of the whole is impossible? Contrary to the collective wisdom of classical scholars and Wikipedia contributors, it isn’t—which we know thanks to techniques developed at the behest of a megalomaniac Trimalchio convinced Shakespeare was not Shakespeare: work that eventually became the foundation of the National Security Agency.

Before getting to the history of those techniques, however, it might be best to describe first what they are. Essentially, the problem of figuring out the actual length of The Satyricon is a problem of sampling: that is, of estimating whether you have, like Christopher Columbus, run up on an island—or, like John Cabot, smacked into a continent. In biology, for instance, a researcher might count the number of organisms in a small plot, then extrapolate for the entire area. Another biological technique is to capture and tag some animals in an area, then capture a second sample from the same area some time later—the proportion of that second sample already carrying tags provides a ratio useful for estimating the true size of the population. (The fewer tagged animals recaptured, the larger the total population.) Or, as the baseball writer Bill James did earlier this year on his website (in “Red Hot Start,” from 16 April), one can forecast the final record of a baseball team based upon its start: in this case, the “true underlying win percentage” of the Boston Red Sox, given that the team’s record in its first fifteen games was 13-2. The way that James did it is, perhaps, instructive about possible methods for determining the length of The Satyricon.
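
(The tagging technique, incidentally, reduces to a one-line estimate—the standard Lincoln–Petersen calculation. Here is a minimal sketch, with hypothetical numbers chosen only for illustration.)

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Estimate a total population size from a mark-recapture survey.

    marked_first: animals captured, tagged, and released in the first pass
    caught_second: animals captured in the second pass
    recaptured: animals in the second pass that already carry a tag
    """
    # The tagged fraction of the second sample is assumed to mirror the
    # tagged fraction of the whole population.
    return marked_first * caught_second / recaptured

# Hypothetical numbers: tag 100 fish, later net 80, and find 20 already tagged.
print(lincoln_petersen(100, 80, 20))   # 400.0
print(lincoln_petersen(100, 80, 5))    # 1600.0 — fewer recaptures, larger population
```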

James begins by noting that because the “probability that a .500 team would go 13-2 or better in a stretch of 15 games is  … one in 312,” while the “probability that a .600 team would go 13-2 in a stretch of 15 games is … one in 46,” it is therefore “much more likely that they are a .600 team than that they are a .500 team”—though with the caveat that, because “there are many more .500 teams than .600 teams,” this is not “EXACTLY true” (emp. James). Next, James computes a standard statistical measure, the standard deviation: that is, the amount by which actual team records spread themselves around the .500 mark of 81-81. James finds this number for teams in the years 2000-2015 to be .070, a low number, meaning that most team records in that era bunched closely around .500. (By comparison, the historical standard deviation for “all [major league] teams in baseball history” is .102, meaning that there used to be a wider spread between first-place teams and last-place teams than there is now.) Finally, James arranges the possible records of baseball teams according to what mathematicians call the “Gaussian,” or “normal,” distribution: that is, how team records would look were they to follow the “bell-shaped” curve familiar from introductory statistics courses, in which most teams had .500 records and very few teams had either 100 wins—or 100 losses.
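
(The two probabilities James starts from are ordinary binomial calculations. As a quick check—and as it happens, the quoted one-in-312 and one-in-46 figures work out to the chance of going exactly 13-2; a 13-2-or-better start is slightly more likely—a minimal sketch looks like this.)

```python
from math import comb

def prob_exact_record(wins, games, true_pct):
    """Probability a team of the given true strength goes exactly wins-(games-wins)."""
    losses = games - wins
    return comb(games, wins) * true_pct**wins * (1 - true_pct)**losses

for pct in (0.500, 0.600):
    p = prob_exact_record(13, 15, pct)
    print(f"true {pct:.3f} team: P(13-2 start) = {p:.5f}  (about 1 in {round(1 / p)})")
# true 0.500 team: about 1 in 312
# true 0.600 team: about 1 in 46
```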

If the records of actual baseball teams follow such a distribution, James finds that “in a population of 1,000 teams with a standard deviation of .070,” there should be 2 teams above .700, 4 teams with percentages from .675 to .700, 10 teams from .650 to .675, 21 teams from .625 to .650, and so on, down to 141 teams from .500 to .525. (These numbers are mirrored, in turn, by teams with losing records.) Obviously, teams with better final records have better chances of starting 13-2—but at the same time, there are a lot fewer teams with final records of .700 than there are teams going .600. As James writes, it is “much more likely that a 13-2 team is actually a .650 to .675 team than that they are actually a .675 to .700 team—just because there are so many more teams” (i.e., 10 teams as compared to 4). So the chances of each level of the distribution producing a 13-2 team actually grow as we approach .500—until, James says, we reach winning percentages of .550 to .575, where the growing number of teams is finally outweighed by their declining quality. Whereas among a thousand teams there are 66 that might be expected to have winning percentages of .575 to .600, making it likely that a bit more than one of them would start 13-2 (1.171341 to be precise), the expected number of 13-2 starters among the 97 teams from .550 to .575 is only 1.100297. Doing a bit more mathematics, which I won’t bore you with, James eventually concludes that the 2018 Boston Red Sox are most likely to finish the season with a .585 winning percentage, which is between a 95-67 season and a 94-68 season.
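
(Putting the two pieces together—the bell curve of true team strengths and each strength’s chance of a 13-2 start—is a small weighted-average calculation. The sketch below is not James’s own arithmetic, but under his stated assumptions, a normal distribution of true winning percentages centered on .500 with a standard deviation of .070, it lands in the neighborhood of his .585 answer.)

```python
from math import comb, erf, sqrt

def normal_cdf(x, mu=0.500, sigma=0.070):
    """Fraction of teams whose true winning percentage falls below x."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def p_13_2(true_pct):
    """Chance a team of the given true strength starts exactly 13-2."""
    return comb(15, 13) * true_pct**13 * (1 - true_pct)**2

# Bands of true winning percentage, .025 wide, echoing James's table.
bands = [(lo / 1000, lo / 1000 + 0.025) for lo in range(250, 750, 25)]
contributions = []
for lo, hi in bands:
    teams = 1000 * (normal_cdf(hi) - normal_cdf(lo))   # teams per 1,000 in this band
    mid = (lo + hi) / 2
    contributions.append((mid, teams * p_13_2(mid)))   # expected 13-2 starters from the band

total = sum(w for _, w in contributions)
estimate = sum(mid * w for mid, w in contributions) / total
print(f"expected true strength of a 13-2 starter: {estimate:.3f}")   # roughly .58-.59
```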

What, however, does all of this have to do with The Satyricon, much less with the National Security Agency? In the specific case of the Roman novel, James provides a model for how to go about estimating the total length of the now-lost complete work: a model that begins by figuring out what league Petronius is playing in, so to speak. In other words, we would have to know something about the distribution of the lengths of fictional works: do they tend to converge—i.e., have a low standard deviation—strongly on some average length, the way that baseball teams tend to converge around 81-81? Or, do they wander far afield, so that the standard deviation is high? The author(s) of the Wikipedia article appear to believe that this is impossible, or nearly so; as the Stanford literary scholar Franco Moretti notes, when he says that he works “on West European narrative between 1790 and 1930,” he “already feel[s] like a charlatan” because he only works “on its canonical fraction, which is not even one percent of published literature.” There are, Moretti observes for instance, “thirty thousand nineteenth-century British novels out there”—or are there forty, or fifty, or sixty? “[N]o one really knows,” he concludes—which is not even to consider the “French novels, Chinese, Argentinian, [or] American” ones. But to compare The Satyricon to all novels would be to accept a high standard deviation—and hence a fairly wide range of possible lengths. 

Alternately, The Satyricon could be compared only to its ancient comrades and competitors: the five ancient Greek novels that survive complete from antiquity, for example, along with the only Roman novel to survive complete—Apuleius’ The Metamorphoses. Obviously, were The Satyricon compared only to ancient novels (and of those, only the complete ones), the standard deviation would likely be lower, meaning that the lengths would cluster more tightly around the mean. That would imply a tighter range of possible lengths—though at the risk of a greater error in the estimate, since the six surviving ancient novels could all differ in length from The Satyricon far more than the whole universe of novels likely would. The choice of which set to use (all novels, or only the ancient ones) is therefore a choice between a higher chance of being accurate and a higher chance of being precise. Either way, Wikipedia’s claim that the length “cannot be known” is only so if the words “with absolute certainty” are added. The best guess we can make can either be nearly certain to contain the true length within it, or be nearly certain—if it is accurate at all—to be very close to the true length; which is to say that it is entirely possible that we could know what the true length of The Satyricon was, even if we were not certain that we did in fact know it.
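
(In code, the whole trade-off is just a choice of comparison set and a crude interval around its mean. The word counts below are placeholders, not real measurements of any novel—the point is only the shape of the calculation.)

```python
import statistics

def length_interval(lengths, k=2):
    """Crude estimate for an unknown length: mean plus or minus k standard deviations."""
    mu = statistics.mean(lengths)
    sd = statistics.stdev(lengths)
    return max(0, round(mu - k * sd)), round(mu + k * sd)   # a length cannot be negative

# Hypothetical word counts, for illustration only: a wide "all novels" set
# versus a narrow "complete ancient novels" set.
all_novels = [45_000, 60_000, 90_000, 120_000, 180_000, 250_000, 350_000]
ancient_novels = [30_000, 40_000, 45_000, 55_000, 60_000, 75_000]

print(length_interval(all_novels))      # wide interval: more likely accurate, less precise
print(length_interval(ancient_novels))  # tight interval: more precise, but riskier
```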

That then answers the question of how we could know the length of The Satyricon—but when I began this story I promised that I would (eventually) relate it to the foundations of the National Security Agency. Those, I mentioned, began with an eccentric millionaire convinced that William Shakespeare did not write the plays that now bear his name. The millionaire’s name was George Fabyan; in the early 20th century he brought together a number of researchers in the new field of cryptography in order to “prove” his pet theory that Francis Bacon was the true author of the Bard’s work. Bacon was known as the inventor of the cipher system that bears his name, and Fabyan accordingly subscribed to the proposition that Bacon had concealed the fact of his authorship by means of coded messages within the plays themselves. The first professional American codebreakers thereby found themselves employed on Fabyan’s 350-acre estate (“Riverbank”) on the Fox River just south of Geneva, Illinois—still there today—where American military minds found them upon the country’s entry into World War One in 1917.

Specifically, they found Elizebeth Smith and William Friedman (who would later marry). During the war the couple helped to train several federal employees in the art of codebreaking. By 1921, they had been hired away by the War Department, which then led to a decade spent, in the service of the Coast Guard, breaking the codes of gangsters smuggling liquor into the dry United States. During World War Two, Elizebeth would be employed in breaking one of the Enigma codes used by the German Navy; meanwhile, her husband William had founded the Army’s Signal Intelligence Service—the outfit that broke the Japanese diplomatic cipher known as “Purple,” and the direct predecessor to the National Security Agency. William had also written the scientific papers that underlay their work; he had, in fact, even coined the word cryptanalysis itself.

Central to Friedman’s work was something now called the “Friedman test,” but then called the “kappa test.” This test, like Bill James’ work, compares two probabilities: the first is the chance that two randomly chosen letters will match if every letter is equally likely, which for a 26-letter alphabet is one in 26, or 0.0385. The second, however, is not so obvious: the chance that two randomly selected letters from ordinary English text will turn out to be the same letter, which is known to be about 0.067. Knowing those two figures, plus how long the intercepted coded message is, allows the cryptographer to estimate the length of the key—the translation parameter that determines the output—just as James can calculate the likely final record of a team that starts 13-2 using two different probabilities. Figuring out the length of The Satyricon, then, might not be quite the Herculean task it’s been represented to be—which raises the question: why has it been represented that way?
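
(Before returning to that question, an aside for the curious: here is a minimal sketch of the kappa-style key-length estimate as it is usually presented for a Vigenère-type cipher. The two constants are the commonly quoted coincidence rates mentioned above; a working cryptanalyst would refine both the constants and the method.)

```python
from collections import Counter

KAPPA_PLAIN, KAPPA_RANDOM = 0.067, 0.0385   # English text vs. uniformly random letters

def index_of_coincidence(text):
    """Chance that two letters drawn at random from the text are the same letter."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

def friedman_key_length(ciphertext):
    """Friedman (kappa-test) estimate of a polyalphabetic cipher's key length."""
    n = len([c for c in ciphertext.upper() if c.isalpha()])
    ic = index_of_coincidence(ciphertext)
    return ((KAPPA_PLAIN - KAPPA_RANDOM) * n) / (
        (n - 1) * ic - KAPPA_RANDOM * n + KAPPA_PLAIN
    )

# A short key leaves the ciphertext's coincidence rate near 0.067; a long key
# pushes it toward 0.0385, and the formula converts that drift into a length.
```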

The answer to that question, it seems to me, has something to do with the status of the “humanities” themselves: using statistical techniques to estimate the length of The Satyricon would damage the “firewall” that preserves disciplines like Classics, or literary study generally, from the grubby no ’ccount hands of the sciences—a firewall, we are eternally reminded, necessary in order to foster what Geoffrey Harpham, former director of the National Humanities Center, has called “the capacity to sympathize, empathize, or otherwise inhabit the experience of others” so “clearly essential to democratic citizenship.” That may be so—but it’s also true that maintaining that firewall allows law schools, as Sanford Levinson of the University of Texas remarked some time ago, to continue to emphasize “traditional, classical legal skills” at the expense of “‘finding out how the empirical world operates.’” And since that has allowed the U.S. Supreme Court the luxury of considering (in Gill v. Whitford) whether to ignore a statistical measure of gerrymandering, for example, while it is quite clear, on the other hand, that the disciplines known as the humanities collect students from wealthy backgrounds at a disproportionate rate, it perhaps ought to be wondered precisely in what way those disciplines are “essential to democratic citizenship”—or rather, what idea of “democracy” is really being preserved here. If so, then—perhaps using what Fitzgerald called “the dark fields of the republic”—the final record of the United States can quite easily be predicted.

Small Is Beautiful—Or At Least, Weird

… among small groups there will be greater variation …
—Howard Wainer and Harris Zwerling.
The central concept of allopatric speciation is that new species can arise only when a small local population becomes isolated at the margin of the geographic range of its parent species.
—Stephen Jay Gould and Niles Eldredge.
If you flipped a coin a thousand times, you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.
—Michael Lewis. 

No humanist intellectual today is a “reductionist.” To Penn State English professor Michael Bérubé for example, when the great biologist E.O. Wilson speculated—in 1998’s Consilience: The Unity of Knowledge—that “someday … even the disciplines of literary criticism and art history will find their true foundation in physics and chemistry,” Wilson’s claim was (Bérubé wrote) “almost self-parodic.” Nevertheless, despite the withering disdain of English professors and such, examples of reductionism abound: in 2002, journalist Malcolm Gladwell noticed that a then-recent book—Randall Collins’ The Sociology of Philosophies—argued that French Impressionism, German Idealism, and Chinese neo-Confucianism, among other artistic and philosophic movements, could all be understood by the psychological principle that “clusters of people will come to decisions that are far more extreme than any individual member would have come to on his own.” Collins’ claim, of course, is sure to call down the scorn of professors of the humanities like Bérubé for ignoring what literary critic Victor Shklovsky might have called the “stoniness of the stone”; i.e., the specificity of each movement’s work in its context, and so on. Yet from a political point of view (and despite both the bombastic claims of certain “leftist” professors of the humanities and their supposed political opponents) the real issue with Collins’ (and Gladwell’s) “reductionism” is not that they attempt to reduce complex artistic and philosophic movements to psychology—nor even, as I will show, to biology. Instead, the difficulty is that Collins (and Gladwell) do not reduce them to mathematics.  

Yet, to say that neo-Confucianism (or, to cite one of Gladwell’s examples, Saturday Night Live) can be reduced to mathematics first raises the question of what it means to “reduce” one sort of discourse to another—a question still largely governed, Kenneth Schaffner wrote in 2012, by Ernest Nagel’s “largely unchanging and immensely influential analysis of reduction.” According to Nagel’s 1961 The Structure of Science: Problems in the Logic of Scientific Explanation, a “reduction is effected when the experimental laws of the secondary science … are shown to be the logical consequences of the theoretical assumptions … of the primary science.” Gladwell, for example, discussing “the Lunar Society”—which included Erasmus Darwin (grandfather to Charles), James Watt (inventor of the steam engine), Josiah Wedgwood (the pottery maker), and Joseph Priestley (who isolated oxygen)—says that this group’s activities bear all “the hallmarks of group distortion”: someone proposes “an ambitious plan for canals, and someone else tries to top that [with] a really big soap factory, and in that feverish atmosphere someone else decides to top them all with the idea that what they should really be doing is fighting slavery.” In other words, to Gladwell the group’s activities can be explained not by reference to the intricacies of thermodynamics or chemistry, nor even the political difficulties of the British abolitionist movement—or even the process of heating clay. Instead, the actions of the Lunar Society can be understood in somewhat the same fashion that, in bicycle racing, the peloton (which is not as limited by wind resistance) can reach speeds no single rider could by himself.

Yet, if it is so that the principle of group psychology explains, for instance, the rise of chemistry as a discipline, it’s hard to see why Gladwell should stop there. Where Gladwell uses a psychological law to explain the “Blues Brothers” or “Coneheads,” in other words, the late Harvard professor of paleontology Stephen Jay Gould might have cited a law of biology: specifically, the theory of “punctuated equilibrium”—a theory that Gould, along with his colleague Niles Eldredge, first advanced in 1972. The theory the two proposed in “Punctuated Equilibria: An Alternative to Phyletic Gradualism” could thereby be used to explain the rise of the Not Ready For Prime Time Players just as well as the psychological theory Gladwell advances.

In that early-1970s paper, the two biologists attacked the reigning idea of how new species begin: what they called the “picture of phyletic gradualism.” In the view of that theory, Eldredge and Gould wrote, new “species arise by the transformation of an ancestral population into its modified descendants.” Phyletic gradualism thus answers the question of why dinosaurs went extinct by replying that they didn’t: dinosaurs are just birds now. More technically, under this theory the change from one species to another is a transformation that “is even and slow”; engages “usually the entire ancestral population”; and “occurs over all or a large part of the ancestral species’ geographic range.” For nearly a century after the publication of Darwin’s Origin of Species, this was how biologists understood the creation of new species. To Gould and Eldredge, however, that view simply was not in accordance with how speciation usually occurs.

Instead of ancestor species gradually becoming descendant species, they argued that new species are created by a process they called “the allopatric theory of speciation”—a theory that might explain how Hegel’s The Philosophy of Right and Chevy Chase’s imitation of Gerald Ford could be produced by the same phenomenon. Where “phyletic gradualism” holds that speciation occurs over a wide area to a large population, the allopatric theory—like Gladwell’s use of group psychology, which depends on competition within a small set of people who all know each other—holds that speciation occurs in a narrow range to a small population: “The central concept of allopatric speciation,” Gould and Eldredge wrote, “is that new species can arise only when a small local population becomes isolated at the margin of the geographic range of its parent species.” Gould described this process for a non-professional audience in his essay “The Golden Rule: A Proper Scale for Our Environmental Crisis,” from his 1993 book, Eight Little Piggies: Reflections in Natural History—a book that perhaps demonstrates just how considerations of biological laws might show why John Belushi’s “Samurai Chef,” or Gilda Radner’s “Roseanne Rosannadanna,” succeeded.

The Pinaleno Mountains, in southeastern Arizona, house a population of squirrels called the Mount Graham Red Squirrel, which “is isolated from all other populations and forms the southernmost extreme of the species’s range.” The Mount Graham subspecies can survive in those mountains despite being so far south of the rest of its species because the Pinalenos are “‘sky islands,’” as Gould calls them: “patches of more northern microclimate surrounded by southern desert.” It’s in such isolated places, the theory of allopatric speciation holds, that new species develop: because the Pinalenos are “a junction of two biogeographic provinces” (the Nearctic “by way of the Colorado Plateau“ and the Neotropical “via the Mexican Plateau”), they are a space where selection pressures unavailable on the home range can work upon a subpopulation, and therefore a place where subspecies can make the kinds of evolutionary “leaps” that allow such new populations, after success in such “nurseries,” to return to the original species’ home range and replace the ancestral species. Such a replacement, of course, does not involve the entire previous population, nor does it occur over the entire ancestral range, nor is it even and slow, as the phyletic gradualist theory would suggest.

The application to the phenomena considered by Gladwell is then fairly simple. What was happening at 30 Rockefeller Center in New York City in the autumn of 1975 might not have been an example of “group psychology” at work, but instead an instance where a small population worked at the margins of two older comedic provinces: the new improvisational space created by such troupes as Chicago’s Second City, and the older tradition of studio-audience television created by such shows as I Love Lucy and Your Show of Shows. The features of the new form forged under the influence of these pressures led, ultimately, to the extinction of older forms of television comedy like the standard three-camera situation comedy, and the eventual rise of single-camera shows like The Office. Or so, at least, the story might be told, rather than in the form of Gladwell’s idea of group psychology.

Yet, it isn’t simply that a comedic phenomenon or a painting movement can be explained in terms of group psychology rather than in the terms familiar to scholars of the humanities—or even, one step down the explanatory hierarchy, in terms of biology instead of psychology. That’s because, as the work of Israeli psychologists Daniel Kahneman and Amos Tversky suggests, there is something odd, mathematically, about small groups like subspecies—or comedy troupes. That “something odd” is this: they’re small. Being small has (the two pointed out in their 1971 paper, “Belief in the Law of Small Numbers”) certain mathematical consequences—and, perhaps oddly, those consequences may help to explain something about the success of Saturday Night Live.

That’s anyway the point the two psychologists explored in that 1971 paper—a paper whose message would, perhaps oddly, later be usefully summarized by Gould in a 1983 essay, “Glow, Big Glowworm”: “Random arrays always include some clumping … just as we will flip several heads in a row quite often so long as we can make enough tosses.” Or—as James Forbes of Edinburgh University noted in 1850—it would be absurd to expect that “on 1000 throws [of a fair coin] there should be exactly 500 heads and 500 tails.” (In fact, as Forbes went on to remark, there’s less than a 3 percent chance of getting such a result.) But human beings do not usually grasp that reality: in “Belief,” Kahneman and Tversky reported G.S. Tune’s 1964 study, which found that when people “are instructed to generate a random sequence of hypothetical tosses of a fair coin … they produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict.” “We assume”—as Atul Gawande summarized the point of “Belief” for the New Yorker in 1998—“that a sequence of R-R-R-R-R-R is somehow less random than, say, R-R-B-R-B-B,” while in reality “the two sequences are equally likely.” Human beings find it difficult to understand true randomness—which may be why it is so difficult to see how this law of probability might apply to, say, the Blues Brothers.
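
(A quick simulation makes both halves of the point—the rarity of a perfect 500-500 split and the inevitability of long “clumps”—concrete. This is a toy sketch, not anyone’s published experiment.)

```python
import random
random.seed(1)  # reproducible toy experiment

TRIALS, TOSSES = 5_000, 1000

def longest_run(flips):
    """Length of the longest unbroken run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

exact_half, runs = 0, []
for _ in range(TRIALS):
    flips = [random.randint(0, 1) for _ in range(TOSSES)]
    exact_half += sum(flips) == TOSSES // 2
    runs.append(longest_run(flips))

print(f"exactly 500 heads: {exact_half / TRIALS:.1%}")      # about 2.5% of trials
print(f"median longest run: {sorted(runs)[TRIALS // 2]}")   # typically around 10 in a row
```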

Yet, what the two psychologists were addressing in “Belief” was the idea expressed by statisticians Howard Wainer and Harris Zwerling in a 2006 article later cited by Kahneman in his recent bestseller, Thinking, Fast and Slow: the statistical law that “among small groups there will be greater variation.” In their 2006 piece, Wainer and Zwerling illustrated the point by observing that, for example, the lowest-population counties in the United States tend to have the highest kidney cancer rates per capita, and the smallest schools disproportionately appear on lists of the best-performing schools. What they mean is that a “county with, say, 100 inhabitants that has no cancer deaths would be in the lowest category” of kidney cancer rates—but “if it has one cancer death it would be among the highest”—while, similarly, an examination of the Pennsylvania System of School Assessment for 2001-02 found “that, of the 50 top-scoring schools (the top 3%), six of them were among the 50 smallest schools (the smallest 3%),” which is “an overrepresentation by a factor of four.” “When the population is small,” they concluded, “there is wide variation”—but when “populations are large … there is very little variation.” In other words, it may not be that small groups push each member to achieve more; it may be that small groups of people simply vary more, and (every so often) one of those groups varies so much that somebody invents the discipline of chemistry—or the Festrunk Brothers.
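
(A small simulation shows the same effect with no bias anywhere in sight—every hypothetical county below shares an identical underlying rate; only the populations differ.)

```python
import random
random.seed(2)

RATE = 0.001                        # identical underlying incidence everywhere
SMALL_POP, LARGE_POP = 1_000, 100_000

def observed_rate(population):
    cases = sum(random.random() < RATE for _ in range(population))
    return cases / population

small = [observed_rate(SMALL_POP) for _ in range(200)]   # 200 small counties
large = [observed_rate(LARGE_POP) for _ in range(50)]    # 50 large counties

print(f"small counties: rates from {min(small):.4f} to {max(small):.4f}")
print(f"large counties: rates from {min(large):.4f} to {max(large):.4f}")
# The small counties own both the very best and the very worst observed rates,
# even though every county was drawn from exactly the same process.
```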

The $64,000 question, from this point of view, isn’t the groups that created a new way of painting—but instead all of the groups that nobody has ever heard of that tried, but failed, to invent something new. Yet as a humanist intellectual like Bérubé would surely point out, to investigate the question in this way is to miss nearly everything about Impressionism (or the Land Shark) that makes it interesting. Which, perhaps, is so—but then again, isn’t the fact that such widely scattered actions and organisms can be united under one theoretical lens interesting too? Taken far enough, Bérubé’s position implies that what matters is the individual peculiarity of everything in existence—an idea that recalls what Jorge Luis Borges once described as John Locke’s notion of “an impossible idiom in which each individual object, each stone, each bird and branch had an individual name.” To think of Bill Murray in the same frame as an Arizona squirrel is, admittedly, to miss the smell of New York City at dawn on a Sunday morning after a show the night before—but it also involves a gain, and one that is applicable to many other situations besides the appreciation of the hard work of comedic actors. Although many in the humanities like to attack what they call reductionism for its “anti-intellectual” tendencies, it’s well known that a large enough group of trees constitutes more than a collection of individual plants. There is, I seem to recall, some kind of saying about it.

Ribbit

 “‘The frog is almost five hundred million years old. Could you really say with much certainty that America, with all its strength and prosperity, with its fighting man that is second to none, and with its standard of living that is the highest in the world, will last as long as … the frog?’”
—Joseph Heller. Catch-22. (1961).
 … the fall of empires which aspired to universal dominion could be predicted with very high probability by one versed in the calculus of chance.
—Laplace. Théorie Analytique des Probabilités. (1814).

 

If sexism exists, how could it be proved? A recent lawsuit—Chen-Oster v. Goldman Sachs, Inc., filed in New York City on 19 May, 2014—aims to do just that. The suit makes four claims: that Goldman’s women employees make less than men at the same positions; that a “disproportionate” number of men have been promoted “over equally or more qualified women”; that women employees’ performance was “systematic[ally] underval[ued]”; and that “managers most often assign the most lucrative and promising opportunities to male employees.” The suit, then, echoes many of the themes developed by feminists over the past two generations, and in a general sense may perhaps be accepted, or even cheered, by those Americans sensitive to feminism. But those Americans may not be aware of the potential dangers of the second claim: dangers that threaten not merely the economic well-being of the majority of Americans, including women, but also America’s global leadership. Despite its seeming innocuousness, the second claim is potentially an existential threat to the future of the United States.

That, to be sure, is a broad assertion, and one that seems disproportionate, you might say, to the magnitude of the lawsuit: it hardly seems likely that a lawsuit over employment law, even one involving a firm so important to the global financial machinery as Goldman Sachs, could be so important as to threaten the future of the United States. Yet few today would deny the importance of nuclear weapons—nor that they pose an existential threat to humanity itself. And if nuclear weapons are such a threat, then the reasoning that led to those weapons must be at least as important as, if not more important than, the weapons themselves. As I will show, the second claim poses a threat to exactly that chain of reasoning.

That, again, may appear a preposterous assertion: how can a seemingly minor allegation in a lawsuit about sexism have anything to do with nuclear weapons, much less the chain of logic that led to them? One means of understanding how requires a visit to what the late Harvard biologist Stephen Jay Gould called “the second best site on the standard tourist itinerary of [New Zealand’s] North Island—the glowworm grotto of Waitomo Cave.” Upon the ceiling of this cave, it seems, live fly larvae whose “illuminated rear end[s],” Gould tells us, turn the cave into “a spectacular underground amphitheater”—an effect that, it appears, mirrors the night sky. But what’s interesting about the Waitomo Cave is that it does this mirroring with a difference: upon observing the cave, Gould “found it … unlike the heavens” because whereas stars “are arrayed in the sky at random,” the glowworms “are spaced more evenly.” The reason why is that the “larvae compete with, and even eat, each other—and each constructs an exclusive territory”: since each larva has more or less the same power as every other larva, each territory is more or less the same size. Hence, as Gould says, the heaven of the glowworms is an “ordered heaven,” as opposed to the disorderly one visible on clear nights around the world—a difference that not only illuminates just what’s wrong with the plaintiffs’ second claim in Chen-Oster v. Goldman Sachs, Inc., but also how that claim concerns nuclear weapons.

Again, that might appear absurd: how can understanding a Southern Hemispheric cavern help illuminate—as it were—a lawsuit against the biggest of Wall Street players? To understand how requires another journey—though this one is in time, not space.

In 1767, an English clergyman named John Michell published a paper with the unwieldy title of “An Inquiry into the Probable Parallax, and Magnitude of the Fixed Stars, from the Quantity of Light Which They Afford us, and the Particular Circumstances of Their Situation.” Michell’s purpose in the paper, he wrote, was to inquire whether the stars “had been scattered by mere chance”—or, instead, by “their mutual gravitation, or to some other law or appointment of the Creator.” Since, according to Michell’s biographer Russell McCormmach, Michell assumed “that a random distribution of stars is a uniform distribution,” he concluded that—since the night sky does not resemble the roof of the Waitomo Cave—the distribution of stars must be the result of some natural law. Or even, he hinted, the will of the Creator himself.

So things might have stayed had Michell’s argument “remained buried in the heavy quartos of the Philosophical Transactions”—as James Forbes, the Professor of Natural Philosophy at Edinburgh University, would write nearly a century later. But Michell’s argument hadn’t stayed buried; several writers, it seems, took it as evidence for the existence of the supernatural. Hence, Forbes felt obliged to refute an argument that, he thought, was “too absurd to require refutation.” To think—as Michell did—that “a perfectly uniform and symmetrical disposition of the stars over the sky,” as Forbes wrote, “could alone afford no evidence of causation” would be “palpably absurd.” The reasoning behind Forbes’ view, in turn, is the connection to both the Goldman lawsuit—and nuclear weapons.

Forbes made his point by an analogy to flipping a coin: to infer that the stars had been distributed by chance merely because they were spread evenly across the sky, he wrote, would be as ridiculous as expecting that “on 1000 throws [of a fair coin] there should be exactly 500 heads and 500 tails.” In fact, the Scotsman pointed out, mathematics demonstrates that in such a case of 1000 throws “there are almost forty chances to one [i.e., nearly 98%], that some one of the other possible events shall happen instead of the required one.” In 1000 throws of a fair coin, there’s less than a three percent chance that the flipper will get exactly 500 heads: it’s simply a lot more likely that there will be some other number of heads. In Gould’s essay about the Waitomo Cave, he put the same point like this: “Random arrays always include some clumping … just as we will flip several heads in a row quite often so long as we can make enough tosses.” Because the stars clump together, Forbes argued, they show every sign of being randomly distributed—not, as Michell thought, of testifying to a benevolent Creator. Forbes’ insight about how to detect randomness, or chance, in astronomical data had implications far beyond the stars: in a story that would take much more space than this essay to tell, it eventually led a certain Swiss patent clerk to take up the phenomenon called “Brownian motion.”

The clerk, of course, was Albert Einstein; the subject of his 1905 paper, “On the Movement of Small Particles Suspended in a Stationary Liquid Demanded by the Molecular-Kinetic Theory of Heat,” was the tendency—“easily observed in a microscope,” Einstein remarks—for tiny particles to move in an apparently spontaneous manner. What Einstein realized (as physicist Leonard Mlodinow put it in his 2008 book, The Drunkard’s Walk: How Randomness Rules Our Lives) was that the “jiggly” motion of dust particles and the like results from collisions between them and even smaller particles, and so “there was a predictable relationship between factors such as the size, number, and speed of the molecules and the observable frequency and magnitude of the jiggling.” In other words, “though the collisions [between the molecules and the larger particles] occur very frequently, because the molecules are so light, those frequent isolated collisions have no visible effects” for the most part—but once in a while, “when pure luck occasionally leads to a lopsided preponderance of hits from some particular direction,” there are enough hits to send the particle moving. Or, to put it another way, when a thousand coin flips all come up heads, the particle moves. Put in that fashion, to be sure, Einstein’s point might appear obscure at best—but as Mlodinow goes on to say, it is no accident that this seemingly minor paper became the great physicist’s “most cited work.” That’s because the ultimate import of the paper was to demonstrate the existence … of the atom. Which is somewhat of a necessity for building an atom bomb.
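
(A toy one-dimensional version of that “lopsided preponderance of hits” is easy to write down—this is a sketch of the idea, not Einstein’s actual derivation.)

```python
import random
random.seed(3)

COLLISIONS = 1_000_000   # molecular kicks during one observation interval (toy scale)

# Each kick nudges the particle one unit left or right; on average they cancel.
displacement = sum(random.choice((-1, 1)) for _ in range(COLLISIONS))

print(f"net displacement after {COLLISIONS:,} kicks: {displacement:+,}")
# Typically several hundred units—on the order of the square root of the number of
# collisions—so a chance surplus of hits from one side produces a visible jiggle
# even though the average push is exactly zero.
```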

The existence of the atomic bomb, then, can be said to depend on the insight developed by Forbes: just how significant the impact of chance can be in the formation of both the very large (the universe itself, according to Forbes), and the very small (the atom, according to Einstein). The point both men attempted to make, in turn, is that the existence of order is something very rare in this universe, at any rate (whatever may be the case in others). Far more common, then, is the existence of disorder—which brings us back to Goldman Sachs and the existence of sexism.

It is the contention of the second point in the plaintiffs’ brief in Chen-Oster v. Goldman Sachs, Inc., remember, that there exists (as University of Illinois English professor Walter Benn Michaels has noted) a “‘“stark” underrepresentation’ [of women] in management” because “‘just 29 percent of vice presidents, 17 percent of managing directors, and 14 percent of partners’” are women. Goldman Sachs, as it happens, has roughly 35,000 employees—which, it turns out, is about 0.01% of the total population of the United States, which is 323 million. Of those 323 million, as of the 2010 Census women number about 157 million, compared to around 151 million men. Hence, the question to be asked about the Goldman Sachs lawsuit (and I write this as someone with little sympathy for Goldman Sachs) is this: if the reasoning Einstein followed to demonstrate the existence of the atom is correct, and if the chance of landing exactly 500 heads when tossing a coin 1000 times is less than three percent, how much less likely is it that a sample of 35,000 people will exactly mirror the proportions of 323 million? The answer, it would seem, is: far less likely still. It’s simply much more probable that Goldman Sachs would have something other than a proportionate ratio of men to women than the reverse, just as it’s much more probable that stars should clump together than be equally spaced like the worms in the New Zealand cave. And that is to say that the disproportionate number of men in leadership positions at Goldman Sachs is merely evidence of the absence of a pro-woman bias at Goldman Sachs, not evidence of the existence of a bias against women.
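
(The arithmetic is easy to check. Treating Goldman’s roughly 35,000 employees as if they were a random draw from the adult population—which of course they are not; the point is only what “exactly mirroring” the population would require—the chance of hitting the population’s sex ratio on the nose is well under one percent.)

```python
from math import exp, lgamma, log

def binom_pmf(n, k, p):
    """Exact binomial probability, computed in log space to avoid underflow."""
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p) + (n - k) * log(1 - p))
    return exp(log_pmf)

P_WOMAN = 157 / (157 + 151)     # share of women, from the census figures in the text
N = 35_000                      # approximate Goldman Sachs headcount
k = round(N * P_WOMAN)          # the single headcount that exactly mirrors the population

print(f"P(exactly {k:,} women among {N:,} employees) = {binom_pmf(N, k, P_WOMAN):.2%}")
# Roughly 0.4%—far smaller than the ~2.5% chance of exactly 500 heads in 1,000 tosses.
```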

To which it might be replied, of course, that the point isn’t the exact ratio, but rather that it is so skewed toward one sex: what are the odds, it might be said, that all three categories of employee should all be similarly bent in one direction? Admittedly, that is an excellent point. But it’s also a point that’s missing from the plaintiffs’ brief: there is no mention of a calculation respecting the particular odds in the case, despite the fact that the mathematical techniques necessary to do those calculations have been known since long before the atomic bomb, or even Einstein’s paper on the existence of the atom. And it’s that point, in turn, that concerns not merely the place of women in society—but ultimately the survival of the United States.

After all, the reason that the plaintiffs in the Goldman Sachs suit do not feel the need to include calculations of the probability of the disproportion they mention—despite the fact that it is the basis of their second claim—is that the American legal system is precisely structured to keep such arguments at bay. As Oliver Roeder observed in FiveThirtyEight last year, for example, the justices of the U.S. Supreme Court “seem to have a reluctance—even an allergy—to taking math and statistics seriously.” And that reluctance is not limited to the justices alone: according to Sanford Levinson, a University of Texas professor of law and government interviewed by Roeder in the course of reporting his story, “top-level law schools like Harvard … emphasize … traditional, classical legal skills” at the expense of what Levinson called “‘genuine familiarity with the empirical world’”—i.e., the world revealed by techniques pioneered by investigators like James Forbes. Since, as Roeder observes, all nine current Supreme Court justices attended either Harvard or Yale, that suggests that the curriculum followed at those schools has a connection to the decisions reached by their judicial graduates.

Still, that exclusion might not be so troublesome were it limited merely to the legal machinery. But as Nick Robinson reported last year in the Buffalo Law Review, attorneys have “dominated the political leadership of the United States” throughout its history: “Since independence,” Robinson pointed out there, “more than half of all presidents, vice presidents, and members of Congress have come from a law background.” If the leadership class of the United States is drawn largely from American law schools, and American law schools train students to disdain mathematics and the empirical world, then it seems plausible to conclude that much of the American leadership class is specifically trained to ignore both the techniques pioneered by investigators like Forbes and the underlying reality they reveal: the role played by chance. Hence, while such a divergence may allow plaintiffs like those in the Goldman case to make allegations of sexism without performing the hard work of actually demonstrating how it might be possible mathematically, it might also have consequences for actual women who are living, say, in a nation increasingly characterized by a vast difference between the quantifiable wealth of those at the top (like people who work for Goldman Sachs) and those who aren’t.

And not merely that. For decades if not centuries, Americans have bemoaned the woeful performance of American students in mathematics: “Even in Massachusetts, one of the country’s highest-performing states,” Elizabeth Green observed in one of the latest of these reports, in the New York Times in 2014, “math students are more than two years behind their counterparts in Shanghai.” And results like that, as the journalist Michael Lewis put the point several years ago in Vanity Fair, risk “ceding … technical and scientific leadership to China”—and since, as demonstrated, it’s knowledge of mathematics (and specifically knowledge of the mathematics of probability) that made the atomic bomb possible, ignorance of the subject is conversely a serious threat to national existence. Yet few Americans, it seems, have considered whether the fact that students do not take mathematics (and specifically probability) seriously may have anything to do with the fact that the American leadership class explicitly rules such topics, quite literally, out of court.

Of course, as Lewis also pointed out in his recent book, The Undoing Project: A Friendship that Changed Our Minds, American leaders may not be particularly alone in ignoring the impact of probabilistic reasoning: when, after the Yom Kippur War—which had caught Israel’s leaders wholly by surprise—future Nobel Prize winner Daniel Kahneman and intelligence officer Zvi Lanir attempted to “introduce a new rigor in dealing with questions of national security” by replacing intelligence reports written “‘in the form of essays’” with “probabilities, in numerical form,” they found that “the Israeli Foreign Ministry was ‘indifferent to the specific probabilities.’” Kahneman suspected that the ministry’s indifference, Lewis reports, was due to the fact that Israel’s leaders’ “‘understanding of numbers [was] so weak that [the probabilities did not] communicate’”—but betting that the leadership of other countries will continue to match the ignorance of our own does not appear particularly wise. Still, as Oliver Roeder noted for FiveThirtyEight, not every American is willing to continue to roll those dice: University of Texas law professor Sanford Levinson, Roeder reported, thinks that the “lack of rigorous empirical training at most elite law schools” requires the “long-term solution” of “a change in curriculum.” And that, in turn, suggests that Chen-Oster v. Goldman Sachs, Inc. might be more than a flip of a coin over the existence of sexism on Wall Street.

Stayin’ Alive

And the sun stood still, and the moon stayed,
until the people had avenged themselves upon their enemies.
—Joshua 10:13.

 

“A Sinatra with a cold,” wrote Gay Talese for Esquire in 1966, “can, in a small way, send vibrations through the entertainment industry and beyond as surely as a President of the United States, suddenly sick, can shake the national economy”; in 1994, Nobel laureate economist Paul Krugman mused that a “commitment to a particular … doctrine” can eventually set “the tone for policy-making on all issues, even those which may seem to have nothing to do with that doctrine.” Like a world leader—or a celebrity—the health of an idea can have unforeseen consequences; for example, it is entirely possible that the legal profession’s intellectual bias against mathematics has determined the nation’s racial policy. These days, after all, as literary scholar Walter Benn Michaels observed recently, racial justice in the United States is held to what Michaels calls “the ideal of proportional inequality”—an ideal whose nobility, as the work of Nobel Prize-winner Daniel Kahneman and his colleague Amos Tversky demonstrates, is matched only by its mathematical futility. The law, in short, has what Oliver Roeder of FiveThirtyEight recently called an “allergy” to mathematics; what I will argue is that, as a consequence, minority policy in the United States has a cold.

“The concept that mathematics can be relevant to the study of law,” law professor Michael I. Meyerson observed in 2002’s Political Numeracy: Mathematical Perspectives on Our Chaotic Constitution, “seems foreign to many modern legal minds.” In fact, he continued, to many lawyers “the absence of mathematics is one of law’s greatest appeals.” The strength of that appeal was on display recently in the Wisconsin case discussed by Oliver Roeder, Gill v. Whitford—a challenge to the state’s 2011 legislative map that, as Roeder says, “hinges on math” because it involves the invention of a mathematical standard to measure “when a gerrymandered [legislative] map infringes on voters’ rights.” In oral arguments in Gill, Roeder observed, Chief Justice John Roberts said, about the mathematical techniques that are the heart of the case, that it “may be simply my educational background, but I can only describe [them] as sociological gobbledygook”—a derisive slight that recalls 19th-century Supreme Court Justice Joseph Story’s sneer concerning what he called “men of speculative ingenuity, and recluse habits.” Such statements are hardly foreign to the annals of the Supreme Court: “Personal liberties,” Justice Potter Stewart wrote in a 1975 opinion, “are not rooted in the law of averages.” (Stewart’s sentence, perhaps incidentally, uses a phrase—“law of averages”—found nowhere in the actual study of mathematics.) Throughout the history of American law, in short, there is strong evidence of bias against the study and application of mathematics to jurisprudence.
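
(The “gobbledygook” at issue in Gill, incidentally, was most prominently the “efficiency gap,” which simply compares the two parties’ “wasted” votes—every vote for a loser, plus every winning vote beyond the bare majority needed. A minimal sketch, with made-up district totals:)

```python
def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) pairs, one per district.

    A wasted vote is any vote for a losing candidate, plus any winning vote
    beyond the bare majority needed to carry the district. The gap is the
    difference in the two parties' wasted votes as a share of all votes cast.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        needed = (a + b) // 2 + 1
        if a > b:
            wasted_a += a - needed
            wasted_b += b
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Made-up map: party A's voters are packed into one lopsided district while
# party B wins the other three narrowly; a large positive gap means the map
# wastes far more of A's votes than B's.
print(f"{efficiency_gap([(90, 10), (45, 55), (45, 55), (45, 55)]):+.1%}")   # +38.0%
```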

Yet without the ability to impose that bias on others, even conclusive demonstrations of the law’s skew would not matter—but of course lawyers, as Nick Robinson remarked just this past summer in the Buffalo Law Review, have “dominated the political leadership of the United States.” As Robinson went on to note, “more than half of all presidents, vice presidents, and members of Congress have come from a law background.” This lawyer-heavy structure has had an effect, Robinson says: for instance, he claims “that lawyer-members of Congress have helped foster the centrality of lawyers and courts in the United States.” Robinson’s research then, which aggregates many studies on the subject, demonstrates that the legal profession is in a position to have effects on the future of the country—and if lawyers can affect the future of the country in one fashion, it stands to reason that they may have affected it in others. Not only then may the law have an anti-mathematical bias, but it is clearly positioned to impose that bias on others.

That bias in turn is what I suspect has led Americans to what Michaels calls the theory of “proportional representation” when it comes to justice for minority populations. This theory holds, according to Michaels, that a truly just society would be a “society in which white people were proportionately represented in the bottom quintile [of income] (and black people proportionately represented in the top quintile)”—or, as one commenter on Michaels’ work has put it, it’s the idea that “social justice is … served if the top classes at Ivy League colleges contain a percentage of women, black people, and Latinos proportionate to the population.” Within the legal profession, the theory appears to be growing: as Michaels has also observed, the plaintiffs in “the recent suit alleging discrimination against women at Goldman Sachs” complained of the “‘“stark” underrepresentation’ [of women] in management” because women represented “‘just 29 percent of vice presidents, 17 percent of managing directors, and 14 percent of partners’”—percentages that, of course, vary greatly from the roughly 50% of the American population who are women. But while the idea of a world in which the population of every institution mirrors the population as a whole may appear plausible to lawyers, it’s absurd to any mathematician.

People without mathematical training, that is, have wildly inaccurate ideas about probability—precisely the point of the work of social scientists Daniel Kahneman and Amos Tversky. “When subjects are instructed to generate a random sequence of hypothetical tosses of a fair coin,” wrote the two psychologists in 1971 (citing an earlier study), “they produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict.” In other words, when people are asked to write down the possible results of tossing a coin many times, they invariably give answers that are (nearly) half heads and half tails, despite the fact that—as Brian Everitt observed in his 1999 book Chance Rules: An Informal Guide to Probability, Risk, and Statistics—in reality “in, say, 20 tosses of a fair coin, the number of heads is unlikely to be exactly 10.” (Everitt goes on to note that “an exact fifty-fifty split of heads and tails has a probability of a little less than 1 in 5.”) Hence, a small sample of 20 tosses has less than a twenty percent chance of coming out ten heads and ten tails—a fact that may appear yet more significant when it is noted that the chance of getting exactly 500 heads when flipping a coin 1000 times is less than 3%. Hitting the ideal of proportionality exactly, then, is something that mathematics tells us is not simple or easy to do even once—and yet, in the case of college admissions, advocates of proportional representation suggest that colleges, and other American institutions, ought to be required to do something like what baseball player Joe DiMaggio did in the summer of 1941.
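
(Both of Everitt’s figures can be checked directly—each is a one-line binomial computation.)

```python
from math import comb

def p_exact_split(tosses):
    """Probability that a fair coin lands heads in exactly half of the tosses."""
    return comb(tosses, tosses // 2) / 2 ** tosses

print(f"20 tosses:    {p_exact_split(20):.3f}")    # about 0.176—a little less than 1 in 5
print(f"1,000 tosses: {p_exact_split(1000):.3f}")  # about 0.025—less than 3%
```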

In that year in which “the Blitzkrieg raged” (as the Rolling Stones would write later), the baseball player Joe DiMaggio achieved what Gould says is “the greatest and most unattainable dream of all humanity, the hope and chimera of all sages and shaman”: the New York Yankee outfielder hit safely in 56 consecutive games. Gould doesn’t mean, of course, that all human history has been devoted to hitting a fist-sized sphere, but rather that while many baseball fans are aware of DiMaggio’s feat, few are aware that the mathematics of DiMaggio’s streak shows it was “so many standard deviations above the expected distribution that it should not have occurred at all.” To make the point, Gould cites Nobel laureate Ed Purcell’s research on the matter.

What that research shows is that, to make it a better-than-even-money proposition “that a run of even fifty games will occur once in the history of baseball,” “baseball’s rosters would have to include either four lifetime .400 batters or fifty-two lifetime .350 batters over careers of one thousand games.” There are, of course, only three men who ever hit more than .350 lifetime (Cobb, Hornsby, and, tragically, Joe Jackson), which is to say that DiMaggio’s streak is, Gould wrote, “the most extraordinary thing that ever happened in American sports.” That in turn is why Gould can say that Joe DiMaggio, even as the Panzers drove across a thousand miles of Russian wheatfields, actually attained a state chased by saints for millennia: by holding back the inevitable march of time, from 15 May to 17 July, 1941, like some contemporary Joshua, DiMaggio “cheated death, at least for a while.” To paraphrase Paul Simon, Joe DiMaggio fought a duel that, in every way that can be looked at, he was bound to lose—which is to say, as Gould correctly does, that his victory was in postponing that loss all of us are bound to one day suffer.

Woo woo woo.
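
(A rough reconstruction of Purcell’s kind of calculation—treating games as independent, giving the hitter four at-bats a game, and calling an expected half a streak “even money”—is emphatically not Purcell’s actual model, but it lands strikingly close to the figures Gould cites.)

```python
def expected_streaks(avg, at_bats_per_game, streak_len, career_games):
    """Expected number of hitting streaks of the given length in one career,
    counting a streak as beginning right after a hitless game."""
    p_game = 1 - (1 - avg) ** at_bats_per_game        # chance of at least one hit in a game
    return career_games * (1 - p_game) * p_game ** streak_len

for avg, label in ((0.400, "lifetime .400 hitters"), (0.350, "lifetime .350 hitters")):
    per_career = expected_streaks(avg, 4, 50, 1000)
    careers_needed = 0.5 / per_career                 # expected half a streak ~ "even money"
    print(f"{label}: about {careers_needed:.0f} careers per even-money 50-game streak")
# Prints roughly 4 and roughly 52—close to the "four .400 batters or fifty-two
# .350 batters" that Gould reports from Purcell's work.
```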

What appears to be a simple baseball story, then, actually has a lesson for us here today: it tells us that advocates of proportional representation are thereby suggesting that colleges ought to be more or less required not merely to reproduce Joe DiMaggio’s hitting streak from the summer of 1941, but to do it every single season—a quest that in a practical sense is impossible. The question then must be how such an idea could ever have taken root in the first place—a question that Paul Krugman’s earlier comment about how a commitment to bad thinking about one issue can lead to bad thinking about others may help to answer. Krugman suggested in that essay that one reason why people who ought to know better might tolerate “a largely meaningless concept” was “precisely because they believe[d] they [could] harness it in the service of good policies”—and quite clearly, proponents of the proportional ideal have good intentions, which may be just why it has held on so long despite its manifest absurdity. But good intentions are not enough to ensure the staying power of a bad idea.

“Long streaks always are, and must be,” Gould wrote about DiMaggio’s feat of survival, “a matter of extraordinary luck imposed upon great skill”—which perhaps could be translated, in this instance, by saying that if an idea survives for some considerable length of time it must be because it serves some interest or another. In this case, it seems entirely plausible to think that the notion of “proportional representation” in relation to minority populations survives not because it is just, but instead because it allows the law, in the words of literary scholar Stanley Fish, “to have a formal existence”—that is, “to be distinct, not something else.” Without such a distinction, as Fish notes, the law would be in danger of being “declared subordinate to some other—non-legal—structure of concern,” and if so then “that discourse would be in the business of specifying what the law is.” But the legal desire Fish dresses up in a dinner jacket, attorney David Post of The Volokh Conspiracy website suggests, may merely be the quest to continue to wear a backwards baseball cap.

In other words, Post observes—apropos of Oliver Roeder’s article about the Supreme Court’s allergy to mathematics—that not only is there “a rather substantial library of academic commentary on ‘innumeracy’ at the court,” but “it is unfortunately well within the norms of our legal culture … to treat mathematics and related disciplines as kinds of communicable diseases with which we want no part.” What’s driving the theory of proportional representation, then, may not be the quest for racial justice, or even the wish to maintain the law’s autonomy, but instead the desire of would-be lawyers to avoid mathematics classes. But if so, then by seeking social justice through the prism of the law—which rules out of court at the outset any consideration of mathematics as a possible tool for thinking about human problems, and hence forbids (or at least, as in Gill v. Whitford, obstructs) certain possible courses of action to remedy social issues—advocates for African-Americans and others may be unnecessarily limiting their available options, which may be far wider, and wilder, than anyone viewing the problems of race through the law’s current framework can now see.

Yet—as any consideration of streaks and runs must, eventually, conclude—just because that is how things are at the moment is no reason to suspect that things will remain that way forever: as Gould says, the “gambler must go bust” when playing an opponent, like history itself, with near-infinite resources. Hence, Paul Simon to the contrary, the impressive thing about the Yankee Clipper’s feat in that last summer before the United States plunged into global war is not that, after “Ken Keltner made two great plays at third base and lost DiMaggio the prospect of a lifetime advertising contract with the Heinz ketchup company,” Joe DiMaggio left and went away. Instead, it is that the great outfielder lasted as long as he did; just so, Oliver Roeder mentions in his article that Sanford Levinson, a professor of law at the University of Texas at Austin and one of the best-known American legal scholars, has diagnosed “the problem [as] a lack of rigorous empirical training at most elite law schools”—which is to say that “the long-term solution would be a change in curriculum.” The law’s streak of avoiding mathematics, in other words, may be like all streaks. In the words of the poet of the subway walls,

Koo-koo …

Ka-choo.

Forked

Alice came to a fork in the road. “Which road do I take?” she asked.
“Where do you want to go?” responded the Cheshire Cat.
“I don’t know,” Alice answered.
“Then,” said the Cat, “it doesn’t matter.”
—Lewis Carroll. Alice’s Adventures in Wonderland. (1865).

 

At Baden Baden, 1925, Reti, the hypermodern challenger, opened with the Hungarian, or King’s Fianchetto; Alekhine—the only man to die still holding the title of world champion—countered with an unassuming king’s pawn to e5. The key moment did not take place, however, until Alekhine threw his rook nearly across the board at move 26, which appeared to lose the champion a tempo—but as C.J.S. Purdy would write for Chess World two decades, a global depression, and a world war later, “many of Alekhine’s moves depend on some surprise that comes far too many moves ahead for an ordinary mortal to have the slightest chance of foreseeing it.” The rook move, in sum, resulted in the triumphant slash of Alekhine’s bishop at move 42—a move that “forked” the only two capital pieces Reti had left: his knight and rook. “Alekhine’s chess,” Purdy would write later, “is like a god’s”—an hyperbole that not only leaves this reader of the political scientist William Riker thankful that the chess writer did not see the game Riker saw played at Freeport, 1858, but also grateful that neither man saw the game played at Moscow, 2016.

All these games, in other words, ended with what is known as a “fork,” or “a direct and simultaneous attack on two or more pieces by one piece,” as the Oxford Companion to Chess defines the maneuver. A fork, thereby, forces the opponent to choose; in Alekhine’s triumph, called “the gem of gems” by Chess World, the Russian grandmaster forced his opponent to choose which piece to lose. Just so, in The Art of Political Manipulation, from 1986, University of Rochester political scientist William Riker observed that “forks” are not limited to dinner or to chess. In Political Manipulation Riker introduced the term “heresthetics,” or—as Norman Schofield defined it in 2006—“the art of constructing choice situations so as to be able to manipulate outcomes.” Riker further said that  “the fundamental heresthetical device is to divide the majority with a new alternative”—or in other words, heresthetics is often a kind of political fork.

The premier example Riker used to illustrate such a political forking maneuver was performed, the political scientist wrote, by “the greatest of American politicians,” Abraham Lincoln, at the sleepy Illinois town of Freeport during the drowsy summer of 1858. Lincoln that year was running for the U.S. Senate seat for Illinois against Stephen Douglas—the man known as “the Little Giant” both for his less-than-imposing frame and his significance in national politics. So important had Douglas become by that year—by extending federal aid to the first “land grant” railroad, the Illinois Central, and successfully passing the Compromise of 1850, among many other achievements—that it was an open secret that he would run for president in 1860. And not merely run; the smart money said he would win.

Where the smart money was not was on Abraham Lincoln, a lanky and little-known one-term congressman in 1858. The odds against the would-be Illinois politician were so long, in fact, that according to Riker Lincoln had to take a big risk to win—which he did, by posing a question to Douglas at the little town of Freeport, near the Wisconsin border, towards the end of August. That question was this: “Can the people of a United States Territory, in any lawful way, against the wish of any citizen of the United States, exclude slavery from its limits prior to the formation of a state constitution?” It was a question, Riker wrote, that Lincoln had honed “stiletto-sharp.” It proved a knife in the heart of Stephen Douglas’ ambitions.

Lincoln was, of course, explicitly against slavery, and therefore thought that territories could ban slavery prior to statehood. But many others thought differently; in 1858 the United States stood poised at a precipice that, even then, only a few—Lincoln among them—could see. Already, the nation had been roiled by the Kansas-Nebraska Act of 1854; already, a state of war existed between pro- and anti-slavery men on the frontier. The year before, the U.S. Supreme Court had outlawed the prohibition of slavery in the territories by means of the Dred Scott decision—a decision that, in his “House Divided” speech in June that same year, Lincoln had already charged Douglas with conspiring with the president of the United States, James Buchanan, and Supreme Court Chief Justice Roger Taney to bring about. What Lincoln’s question was meant to do, Riker argued, was to “fork” Douglas between two constituencies: the local Illinois constituents who could return, if they chose, Douglas to the Senate in 1858—and the larger, national constituency that could deliver, if they chose, Douglas the presidency in 1860.

“If Douglas answered yes” to Lincoln’s question, Riker wrote, and thereby said that a territory could exclude slavery prior to statehood, “then he would please Northern Democrats for the Illinois election”—because he would take an issue away from Lincoln by explicitly stating they shared the same opinion. If so, he would neutralize one of Lincoln’s chief weapons—a weapon especially potent in far northern, German-settled towns like Freeport. But what Lincoln saw, Riker says, is that if Douglas said yes he would also earn the enmity of Southern slaveowners, for whom it would appear “a betrayal of the Southern cause of the expansion of slave territory”—and thus cost him a clean nomination for the leadership of the Democratic Party as candidate for president in 1860. If, however, Douglas answered no, “then he would appear to capitulate entirely to the Southern wing of the party and alienate free-soil Illinois Democrats”—thereby hurting “his chances in Illinois in 1858 but help[ing] his chances for 1860.” In Riker’s view, in other words, at Freeport in 1858 Lincoln forked Douglas much as the Russian grandmaster would fork his opponent at the German spa in 1925.

Yet just as the game at Baden-Baden was hardly the last time the maneuver was used in chess, “forking” one’s political opponent scarcely ended in the little nineteenth-century Illinois farm village. Many of Hillary Clinton’s supporters in 2016 now believe that the Russians “interfered” with the American election—but what hasn’t been addressed is how the Russian state, led by Putin, could have interfered with an American election. Like a vampire who can only invade a home once invited, anyone attempting to “interfere” with an election must have some material to work with; Lincoln’s question at Freeport, after all, exploited a previously-existing difference between two factions within the Democratic Party. If the Russians did “interfere” with the 2016 election, that is, they could only have done so if there already existed yet another split within the Democratic ranks—which, as everyone knows, there was.

“Not everything is about an economic theory,” Hillary Clinton claimed in a February 2016 speech in Nevada—a claim familiar enough to anyone who’s been on campus in the past two generations. After all, as gadfly Thomas Frank has remarked (referring to the work of James McGuigan), the “pervasive intellectual reflex” of our times is the “‘terror of economic reductionism.’” The idea that “not everything is about economics” is the core of what is sometimes known as the “cultural left,” or what Penn State University English professor (and former holder of the Paterno Chair) Michael Bérubé has termed “the left that aspires to analyze culture” as opposed to “the left that aspires to carry out public policy.” Clinton’s speech largely echoed the views of that “left,” which—according to the late philosopher Richard Rorty, in the book that inspired Bérubé’s remarks above—is more interested in “remedies … for American sadism” than those “for American selfishness.” It was that left that the rest of Clinton’s speech was designed to attract.

“If we broke up the big banks tomorrow,” Clinton went on to ask after the remark about economic theory, “would that end racism?” The crowd, of course, answered “No.” “Would that end sexism?” she continued, and then again—a bit more convoluted, now—“would that end discrimination against the LGBT community?” Each time, the candidate was answered with a “No.” With this speech, in other words, Clinton visibly demonstrated the arrival of this “cultural left” at the very top of the Democratic Party—the ultimate success of the agenda pushed by English professors and others throughout the educational system. If, as Richard Rorty wrote, it really is true that “the American Left could not handle more than one initiative at a time,” so that “it either had to ignore stigma in order to concentrate on money, or vice versa,” then Clinton’s speech signaled the victory of the “stigma” crowd over the “money” crowd. Which is why what Clinton said next was so odd.

The next line of Clinton’s speech went like this: “Would that”—i.e., breaking up the big banks—“give us a real shot at ensuring our political system works better because we get rid of gerrymandering and redistricting and all of these gimmicks Republicans use to give themselves safe seats, so they can undo the progress we have made?” It’s a strange line; in the first place, it’s not exactly the most euphonious group of words I’ve ever heard in a political speech. But more importantly—well, actually, breaking up the big banks could perhaps do something about gerrymandering. According to OpenSecrets.org, after all, “72 percent of the [commercial banking] industry’s donations to candidates and parties, or more than $19 million, went to Republicans” in 2014—hence, maybe breaking them up could reduce the money available to Republican candidates, and so lessen their ability to construct gerrymandered districts. But, of course, doing so would require precisely the kinds of thought pursued by the “public policy” left—which Clinton had already signaled she had chosen against. The opening lines of her call-and-response, in other words, demonstrated that she had chosen to sacrifice the “public policy” left—the one that speaks the vocabulary of science—in favor of the “cultural left”—the one that speaks the vocabulary of the humanities. By choosing the “cultural left,” Clinton was also in effect saying that she would do nothing about either big banks or gerrymandering.

That point was driven home in an article in FiveThirtyEight this past October. In “The Supreme Court Is Allergic To Math,” Oliver Roeder discussed the case of Gill v. Whitford—a case that not only “will determine the future of partisan gerrymandering,” but also “hinges on math.” At issue in the case is something called “the efficiency gap,” which is found by taking “the difference between each party’s ‘wasted’ votes—votes for losing candidates and votes for winning candidates beyond what the candidate needed to win—and divid[ing] that by the total number of votes cast.” The basic argument, in other words, is fairly simple: if a mathematical test shows that a given arrangement of legislative districts produces a large gap, that is evidence of gerrymandering. But in oral arguments, Roeder went on to say, the “most powerful jurists in the land” demonstrated “a reluctance—even an allergy—to taking math and statistics seriously.” Chief Justice John Roberts, for example, said it “may simply be my educational background, but I can only describe [the case] as sociological gobbledygook.” Neil Gorsuch, the man who received the office that Barack Obama was prevented from awarding, compared “the metric to a secret recipe.” In other words, in this case it was the disciplines of mathematics and, above all, statistics that were on the side of those wanting to get rid of gerrymandering, not those analyzing “culture” and fighting “stigma”—concepts that were busy being employed by the justices, essentially to wash their hands of the issue of gerrymandering.
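For the curious, Roeder’s description of the metric can be made concrete in a few lines of code; the district totals below are invented purely for illustration, and ties are ignored for brevity.

```python
# A sketch of the "efficiency gap" as Roeder describes it: each party's
# "wasted" votes are its votes in districts it lost plus its surplus votes
# (beyond a bare majority) in districts it won; the gap is the difference in
# wasted votes divided by all votes cast. District totals are invented.

def efficiency_gap(districts):
    """districts: list of (votes_for_A, votes_for_B); ties ignored for brevity."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        needed = (a + b) // 2 + 1          # bare majority needed to win
        total += a + b
        if a > b:                          # A wins: all of B's votes wasted,
            wasted_a += a - needed         # A wastes only its surplus
            wasted_b += b
        else:                              # B wins
            wasted_a += a
            wasted_b += b - needed
    return (wasted_a - wasted_b) / total   # positive: plan wastes more A votes

hypothetical_plan = [(70, 30), (70, 30), (35, 65), (35, 65), (35, 65)]
print(f"efficiency gap: {efficiency_gap(hypothetical_plan):+.1%}")
```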

Just as, in other words, Lincoln exploited the split between Douglas’ immediate voters in Illinois who could give him the Senate seat, and the Southern slaveowners who could give him the presidency, Putin (or whomever else one wishes to nominate for that role) may have exploited the difference between Clinton supporters influenced by the current academy—and those affected by the yawning economic chasm that has opened in the United States. Whereas academics are anxious to avoid discussing money in order not to be accused of “economic reductionism,” in other words, the facts on the ground demonstrate that today “more money goes to the top (more than a fifth of all income goes to the top 1%), more people are in poverty at the bottom, and the middle class—long the core strength of our society—has seen its income stagnate,” as Nobel Prize-winning economist Joseph Stiglitz put the point in testimony to the U.S. Senate in 2014. Furthermore, Stiglitz noted, America today is not merely “the advanced country … with the highest level of inequality, but is among those with the least equality of opportunity.” Or in other words, as David Rosnick and Dean Baker put the point in November of that same year, “most [American] households had less wealth in 2013 than they did in 2010 and much less than in 1989.” To address such issues, however, would require precisely the sorts of intellectual tools—above all, mathematical ones—that the current bien pensant orthodoxy of the sort represented by Hillary Clinton, the orthodoxy that abhors sadism more than selfishness, thinks of as irrelevant.

But maybe that’s too many moves ahead.

No Justice, No Peace

 

‘She’s never found peace since she left his arms, and never will again till she’s as he is now!’
—Thomas Hardy. Jude the Obscure. (1895).

“Done because we are too menny,” writes little “Father Time,” in Thomas Hardy’s Jude the Obscure—a suicide note that is meant to explain why the little boy has killed his siblings, and then hanged himself. The boy’s family, in other words, is poor, which is why Father Time’s father Jude (the titular obscurity) is never able to become the scholar he once dreamed of becoming. Yet, although Jude is a great tragedy, it is also something of a mathematical textbook: the principle taught by little Jude instructs not merely about why his father does not get into university, but perhaps also about just why, as Natasha Warikoo remarked in last week’s London Review of Books blog, “[o]ne third of Oxford colleges admitted no black British students in 2015.” Unfortunately, Warikoo never considers the possibility suggested by Jude: although she canvasses a number of reasons why black British students do not go to Oxford, she does not consider what we might call, in honor of Jude, the “Judean Principle”: that minorities simply cannot be proportionately represented everywhere, always. Why? Well, because of the field goal percentages of the 1980-81 Philadelphia 76ers—and math.

“The Labour MP David Lammy,” wrote Warikoo, “believes that Oxford and Cambridge are engaging in social apartheid,” while “others have blamed the admissions system.” These explanations, Warikoo suggests, are incorrect: due to interviews with “15 undergraduates at Oxford who were born in the UK to immigrant parents, and 52 of their white peers born to British parents,” she believes that the reason for the “massive underrepresentation” of black British students is “related to a university culture that does not welcome them.” Or in other words, the problem is racism. But while it’s undoubtedly the case that many people, even today, are prejudiced, is prejudice really adequate to explain the case here?

Consider, after all, what it is that Warikoo is claiming—beginning with the idea of “massive underrepresentation.” As Walter Benn Michaels of the University of Illinois at Chicago has pointed out, the goal of many on the political “left” these days appears to be a “society in which white people were proportionately represented in the bottom quintile (and black people proportionately represented in the top quintile)”—in other words, a society in which every social stratum contained precisely the same proportion of minority groups. In line with that notion, Warikoo assumes that, because Oxford and Cambridge do not contain the same proportion of black British people as the larger society does, the system must be racist. But such an argument betrays an ignorance of how mathematics works—or more specifically, as MacArthur grant-winning psychologist Amos Tversky and his co-authors explained more than three decades ago, how basketball works.

In “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Tversky and company investigated an entire season’s worth of shooting data from the NBA’s Philadelphia 76ers in order to discover whether there was evidence “that the performance of a player during a particular period is significantly better than expected on the basis of the player’s own record”—that is, whether players sometimes shot better (or “got hot”) than their overall shot record would predict. Prior to the research, it seems, everyone involved in basketball—fans, players, and coaches—appeared to believe that sometimes players did “get hot”—a belief that seems to predict that, sometimes, players have a better chance of making the second basket of a series than they did the first one:

Consider a professional basketball player who makes 50% of his shots. This player will occasionally hit four or more shots in a row. Such runs can properly be called streak shooting, however, only if their length or frequency exceeds what is expected on the basis of chance alone.

In other words, if a player really did get “hot,” or was “clutch,” then that fact would be reflected in the statistical record by showing that players sometimes made second and third (and so on) baskets at a rate higher than their chance of making a first basket: “the probability of a hit should be greater following a hit than following a miss.” If the “hot hand” existed, in other words, there should be evidence for it.
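The test is simple enough to sketch: compare a player’s hit rate after a hit with his hit rate after a miss. The sketch below runs it on a simulated, memoryless 50 percent shooter rather than on the 76ers’ actual records, an assumption made purely for illustration.

```python
# Compare a player's hit rate after a hit with his hit rate after a miss.
# The shot record here is simulated from a memoryless 50% shooter; the paper
# ran the equivalent comparison on the 76ers' actual 1980-81 data.
import random

def conditional_hit_rates(shots):
    """shots: list of 1 (hit) and 0 (miss), in the order taken."""
    after_hit = [nxt for prev, nxt in zip(shots, shots[1:]) if prev == 1]
    after_miss = [nxt for prev, nxt in zip(shots, shots[1:]) if prev == 0]
    return sum(after_hit) / len(after_hit), sum(after_miss) / len(after_miss)

random.seed(0)
shots = [1 if random.random() < 0.5 else 0 for _ in range(2000)]
p_after_hit, p_after_miss = conditional_hit_rates(shots)
print(f"P(hit | previous hit)  = {p_after_hit:.3f}")    # both hover near .500:
print(f"P(hit | previous miss) = {p_after_miss:.3f}")   # no "heat" to detect
```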

Unfortunately—or not—there was no such evidence, the investigators found: after analyzing the data for the nine players who took the vast majority of the 76ers’ shots for the 1980-81 season, Tversky and company found that “for eight of the nine players the probability of a hit is actually lower following a hit … than following a miss,” which is clearly “contrary to the hot-hand hypothesis.” (The exception is Darryl Dawkins, who played center—and was best known, as older fans may recall, for his backboard-shattering dunks; i.e., a high-percentage shot.) There was no such thing as the “hot hand,” in short. (To use an odd turn of phrase with regard to the NBA.)

Yet, what has that to do with the fact that there were no black British students at one third of Oxford’s colleges in 2015? After all, not many British people play basketball, black or not. But as Tversky and his co-authors argue in “The Hot Hand,” the existence of the belief in a “hot hand” intimates that people’s “intuitive conception[s] of randomness depart systematically from the laws of chance.” That is, when faced with a coin flip, for example, “people expect even short sequences of heads and tails to reflect the fairness of a coin and contain roughly 50% heads and 50% tails.” Yet, in reality, “the occurrence of, say, four heads in a row … is quite likely in a sequence of 20 tosses.” In just the same way, professional basketball players (who are obviously quite skilled at shooting baskets) are likely to make several baskets in a row—not because of any special quality of “heat” they possess, but instead simply because they are good shooters. It’s this inability to perceive randomness, in other words, that may help explain the absence of black British students at many Oxford colleges.
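The four-heads claim is easy to check by simulation, assuming nothing more than a fair coin:

```python
# A quick check, assuming a fair coin, of how often 20 tosses contain a run
# of at least four consecutive heads, estimated by simulation.
import random

def has_head_run(n_tosses=20, run_length=4):
    tosses = "".join(random.choice("HT") for _ in range(n_tosses))
    return "H" * run_length in tosses

random.seed(1)
trials = 100_000
frequency = sum(has_head_run() for _ in range(trials)) / trials
print(f"P(run of 4+ heads in 20 tosses) ≈ {frequency:.2f}")   # close to even odds
```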

As we saw above, when Warikoo asserts that black students are “massively underrepresented” at Oxford colleges, what she means is that the proportion of black students at Oxford is not the same as the percentage of black people in the United Kingdom as a whole. But as “The Hot Hand” shows, to “expect the essential characteristics of a chance process to be represented not only globally in the entire sequence, but also locally, in each of its parts” is irrational: in reality, a “locally representative sequence … deviates systematically from chance expectation.” Since Oxford colleges, after all, are much smaller population samples than the United Kingdom as a whole is, it would be absurd to believe that their populations could somehow replicate precisely the proportions of the larger population.

Maybe though you still don’t see why, which is why I’ll now call on some backup: professors of statistics Howard Wainer and Harris Zwerling. In 2006, the two observed that, during the 1990s, many became convinced that smaller schools were the solution to America’s “education crisis”—the Bill and Melinda Gates Foundation, they note, became so convinced of the fact that they spent $1.7 billion on it. That’s because “when one looks at high-performing schools, one is apt to see an unrepresentatively large proportion of smaller schools.” But while that may be so, the two say, in reality “seeing a greater than anticipated number of small schools” in the list of better schools “does not imply that being small means having a greater likelihood of being high performing.” The reason, they say, is precisely the same reason that you don’t have a higher risk of kidney cancer by living in the American South.

Why might you think that? It turns out, Wainer and Zwerling say, that U.S. counties with the highest apparent risk of kidney cancer are all “rural and located in the Midwest, the South, and the West.” So, should you avoid those parts of the country if you are afraid of kidney cancer? Not at all—because the U.S. counties with the lowest apparent risk of kidney cancer are all “rural and located in the Midwest, the South, and the West.” The characteristics of the counties with the highest rates of cancer, that is, are precisely the same as those of the counties with the lowest.
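The effect is easy to reproduce with invented numbers: give every county an identical underlying risk, vary only the population, and the most extreme observed rates cluster among the small counties.

```python
# A sketch of the effect Wainer and Zwerling describe, with invented numbers:
# every county gets an identical underlying kidney-cancer risk, and only the
# population size varies; the most extreme observed rates, highest and lowest
# alike, come overwhelmingly from the small counties.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 1e-4                                    # identical risk everywhere

small_pops = rng.integers(1_000, 5_000, size=300)           # "rural" counties
large_pops = rng.integers(200_000, 1_000_000, size=300)     # "urban" counties

small_rates = rng.binomial(small_pops, true_rate) / small_pops
large_rates = rng.binomial(large_pops, true_rate) / large_pops

rates = np.concatenate([small_rates, large_rates])
labels = np.array(["small"] * 300 + ["large"] * 300)

order = np.argsort(rates)
extremes = np.concatenate([order[:20], order[-20:]])        # lowest and highest
share_small = np.mean(labels[extremes] == "small")
print(f"small counties among the 40 most extreme rates: {share_small:.0%}")
```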

What Wainer and Zwerling’s example shows is precisely the same as that shown by Tversky and company’s work on the field goal rates of the Philadelphia 76ers. It’s a “same” that can be expressed with the words of journalist Michael Lewis, who recently authored a book about Amos Tversky and his long-time research partner (and Nobel Prize-winner) Daniel Kahneman called The Undoing Project: A Friendship That Changed Our Minds: “the smaller the sample, the lower the likelihood that it would mirror the broader population.” As Brian S. Everitt notes in 1999’s Chance Rules: An Informal Guide to Probability, Risk, and Statistics, “in, say, 20 tosses of a fair coin, the number of heads is unlikely to be exactly 10”—the probability, in fact, is “a little less than 1 in 5.” In other words, a sample of 20 tosses is much more likely to come up biased towards either heads or tails—and much, much more likely to be heavily biased towards one or the other than a larger population of coin flips is. Getting extreme results is much more likely in smaller populations.
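Everitt’s figure, for what it is worth, is straightforward binomial arithmetic:

```python
# The arithmetic behind Everitt's "a little less than 1 in 5": the chance of
# exactly 10 heads in 20 tosses of a fair coin is C(20, 10) / 2**20.
from math import comb

p_exactly_ten = comb(20, 10) / 2**20
print(f"P(exactly 10 heads in 20 tosses) = {p_exactly_ten:.4f}")   # ≈ 0.1762
```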

Oxford colleges are, of course, very small samples of the population of the United Kingdom, which is about 66 million people. Oxford University as a whole, on the other hand, contains about 23,000 students. There are 38 colleges (as well as some other institutions), and some of these—like All Souls, for example—do not even admit undergraduate students; those that do consist largely of a few hundred students each. The question, then, that Natasha Warikoo ought to ask first about the admission of black British students to Oxford colleges is, “how likely is it that a sample of 300 would mirror a population of 66 million?” The answer, as the work of Tversky et al. demonstrates, is “not very”—it’s even less likely, in other words, than the likelihood of throwing exactly 2 heads and 2 tails when throwing a coin four times.
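To put a rough number on the comparison, assume, purely for illustration, that black Britons make up about 3 percent of the population; the binomial arithmetic then runs as follows.

```python
# A rough version of the question above, under the illustrative assumption
# that black Britons are about 3% of the UK population: how often would a
# random draw of 300 students contain exactly the "proportional" number, 9?
# Compare that with the chance of exactly 2 heads in 4 coin tosses.
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(f"P(sample of 300 exactly mirrors a 3% share) ≈ {binom_pmf(300, 9, 0.03):.3f}")
print(f"P(exactly 2 heads in 4 tosses)              = {binom_pmf(4, 2, 0.5):.3f}")
```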

Does that mean that racism does not exist? No, certainly not. But Warikoo says that “[o]nly when Oxford and Cambridge succeed in including young Britons from all walks of life will they be what they say they are: world-class universities.” In fact, however, the idea that institutional populations ought to mirror the broader population is not merely difficult to realize—it is flatly absurd. It isn’t that a racially proportionate society is a difficult goal, in other words—it is that it is an impossible one. To get 300 people, or even 23,000, to reflect the broader population would require, essentially, rewiring the system to such an extent that it’s possible that no other goals—like, say, educating qualified students—could also be achieved; it would require so much effort fighting the entropy of chance that the cause would, eventually, absorb all possible resources. In other words, Oxford can either include “young Britons from all walks of life”—or it can be a world-class university. It can’t, however, be both; which is to say that Natasha Warikoo—as one character says about little “Father Time’s” stepmother, Sue, at the end of Jude the Obscure—will never find peace.

Blind Shots

… then you are apt to have what one of the tournament’s press bus drivers
describes as a “bloody near-religious experience.”
—David Foster Wallace. “Roger Federer As Religious Experience.” The New York Times, 20 Aug. 2006.

Not much gets by the New York Times, unless it’s the non-existence of WMDs—or the rules of tennis. The Gray Lady is bamboozled by the racquet game: “The truth is,” says The New York Times Guide to Essential Knowledge, Third Edition, that “no one knows for sure how … the curious scoring system came about.” And in what might be an example of the Times’ famously droll sense of fun, an article by Stuart Miller entitled “Quirks of the Game: How Tennis Got Its Scoring System” not only fails to provide the answer its title promises, but addresses its ostensible subject merely by noting that “No one can pinpoint exactly when and how” that subject came into existence. So much, one supposes, for reportorial tenacity. Yet despite the failure of the Times, there is in fact an explanation for tennis’ scoring system—an explanation so simple that the Times’ inability to see it, while amusing, also leads to disquieting thoughts about what else the Times can’t see. That’s because solving the mystery of why tennis is scored the way it is could also explain a great deal about political reality in the United States.

To be fair, the Times is not alone in its befuddlement: “‘It’s a difficult topic,’” says one “Steve Flink, an historian and author of ‘The Greatest Tennis Matches of All Time,’” in the “How Tennis Got Its Scoring System” story. So far as I can tell, all tennis histories are unclear about the origins of the scoring system: about all anyone knows for sure—or at least, is willing to put on paper—is that (as Rolf Potts put it in an essay for The Smart Set a few years ago) when modern lawn tennis was codified in 1874, it “appropriated the scoring system of the ancient French game” of jeu de paume, or “real tennis” as it is known in English. The origins of the modern game of tennis, all the histories do agree, lie in this older game—most of all, the scoring system.

Yet, while that does push back the origins of the system a few centuries, no one seems to know why jeu de paume adopted the system it did, other than to observe that the scoring breakdowns of 15, 30, and 40 seem to be, according to most sources, allusions to the face of a clock. (Even the Times, it seems, is capable of discovering this much: the numbers of the points, Miller says, appear “to derive from the idea of a clock face.”) But of far more importance than the “15-30-40” numbering is why the scoring system is qualitatively different than virtually every other kind of sport—a difference even casual fans are aware of and yet even the most erudite historians, so far as I am aware, cannot explain.

Psychologist Allen Fox once explained the difference in scoring systems in Tennis magazine: whereas, the doctor said, the “score is cumulative throughout the contest in most other sports, and whoever has the most points at the end wins,” in tennis “some points are more important than others.” A tennis match, in other words, is divided up into games, sets, and matches: instead of adding up all the points each player scores at the end, tennis “keeps score” by counting the numbers of games, and sets, won. This difference, although it might appear trivial, actually isn’t—and it’s a difference that explains not only a lot about tennis, but much else besides.

Take the case of Roger Federer, who has won 17 major championships in men’s tennis: the all-time record in men’s singles. Despite this dominating record, many people argue that he is not the sport’s Greatest Of All Time—at least, according to New York Times writer Michael Steinberger. Not long ago, Steinberger said that the reason people can argue that way is because Federer “has a losing record against [Rafael] Nadal, and a lopsided one at that.” (Currently, the record stands at 23-10 in favor of Nadal—a nearly 70% edge.) Steinberger’s article—continuing the pleasing simplicity in the titles of New York Times tennis articles, it’s named “Why Roger Federer Is The Greatest Of All Time”—then goes on to argue that Federer should be called the “G.O.A.T.” anyway, record be damned.

Yet weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—to do what an anonymous blogger did in 2009: a feat that demonstrates just why tennis’ scoring system is so curious, and why it has implications, perhaps even sinister implications from a certain point of view, far beyond tennis. What that blogger did, on a blog entitled SW19—postal code for Wimbledon, site of the All-England Tennis Club—was very simple.

He counted up the points.

In any other sport, with a couple of exceptions, that act might seem utterly banal: in those sports, in order to see who’s better you’d count up how many one player scored and then count up how many the other guy scored when they played head-to-head. But in tennis that apparently simple act is not so simple—and the reason it isn’t is what makes tennis such a different game than virtually all other sports. “In tennis, the better player doesn’t always win,” as Carl Bialik for FiveThirtyEight.com pointed out last year: because of the scoring system, what matters is whether you win “more sets than your opponent”—not necessarily more points.

Why that matters is because the argument against Federer as the Greatest Of All Time rests on the grounds that he has a losing record against Nadal: at the time the anonymous SW19 blogger began his research in 2009, that record was 13-7 in Nadal’s favor. As the mathematically-inclined already know, that record translates to a 65 percent edge to Nadal: a seemingly-strong argument against Federer’s all-time greatness because the percentage seems so overwhelmingly tilted toward the Spaniard. How can the greatest player of all time be so weak against one opponent?

In fact, however, as the SW19 blogger discovered, Nadal’s seemingly-insurmountable edge was an artifact of the scoring system, not a sign of Federer’s underlying weakness. In the 20 matches the two men had played up until 2009, they contested 4,394 total points: that is, exchanges in which one player served and the two rallied back and forth until one of them failed to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—like baseball or basketball or football—then it might be expected that Nadal had won about 65 percent of those 4,394 points played, which would be about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have about a 65 percent edge in total points: the point total, as opposed to the match record, between the two ought to be about 2,856 to 1,538.

Yet that, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal, 2,221, and Federer, 2,173. Further, those totals included Nadal’s victory in the 2008 French Open final—which was played on Nadal’s best surface, clay—in straight sets, 6-1, 6-3, 6-0. In other words, even including the epic beating at Roland Garros in 2008, Nadal had only beaten Federer by a total of 48 points over the course of their careers: a margin of about one percent of all the points scored.

And that is not all. If the single match at the 2008 French Open final is excluded, then the margin becomes eight points. In terms of points scored, in other words, Nadal’s overall edge amounts to about half a percentage point above an even split—and most of even that sliver was generated by a single match. So, it may be that Federer is not the G.O.A.T., but an argument against Federer cannot coherently be based on the fact of Nadal’s “dominating” record over the Swiss—because going by the act that is the central, defining act of the sport, the act of scoring points, the two players were, mathematically speaking, essentially equal.
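The blogger’s arithmetic is easy to verify from the figures cited above:

```python
# Recomputing the SW19 blogger's figures cited above: a 13-7 match record
# next to the head-to-head point totals of 2,221 to 2,173.
nadal_matches, federer_matches = 13, 7
nadal_points, federer_points = 2_221, 2_173
total_points = nadal_points + federer_points

print(f"Nadal's share of matches won: {nadal_matches / (nadal_matches + federer_matches):.1%}")
print(f"Nadal's share of points won:  {nadal_points / total_points:.1%}")
margin = nadal_points - federer_points
print(f"point margin: {margin} of {total_points:,} ({margin / total_points:.1%})")
```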

Now, many will say here that, to risk making a horrible pun, I’ve missed the point: in tennis, it will be noted, not all acts of scoring are equal, and neither are all matches. It’s important that the 2008 match was a final, not an opening round … And so on. All of which certainly could be allowed, and reasonable people can differ about it, and if you don’t understand that then you really haven’t understood tennis, have you? But there’s a consequence to the scoring system—one that makes the New York Times’ inability to understand the origins of a scoring system that produces such peculiar results something more than simply another charming foible of the matriarch of the American press.

That’s because of something else that is unusual about tennis by comparison to other sports: its propensity for gambling scandals. In recent years, this has become something of an open secret within the game: when in 2007 the fourth-ranked player in the world, Nikolay Davydenko of Russia, was investigated for match-fixing, Andy Murray—the Wimbledon champion currently ranked third in the world—“told BBC Radio that although it is difficult to prove who has ‘tanked’ a match, ‘everyone knows it goes on,’” according to another New York Times story, this one by reporter Joe Drape.

Around that same time Patrick McEnroe, brother of the famous champion John McEnroe, told the Times that tennis “is a very easy game to manipulate,” and that it is possible to “throw a match and you’d never know.” During that scandal year of 2007, the problem seemed about to break out into public awareness: in the wake of the Davydenko case the Association of Tennis Professionals, one of the sport’s governing bodies, commissioned an investigation conducted by former Scotland Yard detectives into match-fixing and other chicanery—the Environmental Review of Integrity In Professional Tennis, issued in May of 2008. That investigation resulted in four lowly-ranked players being banned from the professional ranks, but not much else.

Perhaps however that papering-over should not be surprising, given the history of the game. As mentioned, today’s game of tennis owes its origins to the game of real tennis, or jeu de paume—a once hugely popular game well known for its connection to gambling. “Gambling was closely associated with tennis,” as Elizabeth Wilson puts it in her Love Game: A History of Tennis, from Victorian Pastime to Global Phenomenon, and jeu de paume had a “special association with court life and the aristocracy.” Henry VIII of England, for example, was an avid player—he had courts built in several of his palaces—and, as historian Alison Weir has put it in her Henry VIII: The King and His Court, “Gambling on the outcome of a game was common.” In Robert E. Gensemer’s 1982 history of tennis, the historian points out that “monetary wagers on tennis matches soon became commonplace” as jeu de paume grew in popularity. Eventually, as historians of jeu de paume have repeatedly shown, by “the close of the eighteenth century … game fixing and gambling scandals had tarnished Jeu de Paume’s reputation,” as a history of real tennis produced by an English real tennis club has put it.

Oddly however, despite all this evidence directly in front of all the historians, no one, not even the New York Times, seems to have put together the connection between tennis’ scoring system and the sport’s origins in gambling. It is, apparently, something to be pitied, and then moved past: what a shame it is that these grifters keep interfering with this noble sport! But that is to mistake the cart for the horse. It isn’t that the sport attracts con artists—it’s rather because of gamblers that the sport exists at all. Tennis’ scoring system, in other words, was obviously designed by, and for, gamblers.

Why, in other words, should tennis break up its scoring into smaller, discrete units—so that  the total number of points scored is only indirectly related to the outcome of a match? The answer to that question might be confounding to sophisticates like the New York Times, but child’s play to anyone familiar with a back-alley dice game. Perhaps that’s why places like Wimbledon dress themselves up in the “pageantry”—the “strawberries and cream” and so on—that such events have: because if people understood tennis correctly, they’d realize that were this sport played in Harlem or Inglewood or 71st and King Drive in Chicago, everyone involved would be doing time.

That’s because—as Nassim Nicholas Taleb, author of The Black Swan: The Impact of the Highly Improbable, would point out—breaking a game into smaller, discrete chunks, as tennis’ scoring system does, is—exactly, precisely—how casino operators make money. And if that hasn’t already made sense to you—if, say, it makes more sense to explain a simple, key feature of the world by reference to advanced physics rather than merely to mention the bare fact—Taleb is also gracious enough to explain how casinos make money via a metaphor drawn from that ever-so-simple subject, quantum mechanics.

Consider, Taleb asks in that book, that because a coffee “cup is the sum of trillions of very small particles” there is little chance that any cup will “jump two feet” of its own accord—despite the fact that, according to the particle physicists, that event is not outside the realm of possibility. “Particles jump around all the time,” as Taleb says, so it is indeed possible that a cup could do that. But making that jump would require all the particles in the cup to leap in the same direction at precisely the same time—an event so unlikely that one could wait many lifetimes of the universe without seeing it. Were any of the particles in the cup to make such a leap, that leap would be canceled out by the leap of some other particle in the cup—coordinating so many particles is effectively impossible.

Yet, observe that with far fewer particles than a coffee cup contains, it becomes very easy for some particle to jump: if there is only one particle, the chance that it will jump is effectively 100%. (It would be more surprising if it didn’t jump.) “Casino operators,” as Taleb drily adds, “understand this well, which is why they never (if they do things right) lose money.” All they have to do to make money is to refuse to “let one gambler make a massive bet,” and instead “to have plenty of gamblers make a series of bets of limited size.” The secret of a casino is that it multiplies the numbers of gamblers—and hence the numbers of bets.

In this way, casino operators can guarantee that “the variations in the casino’s returns are going to be ridiculously small, no matter the total gambling activity.” By breaking up the betting into thousands, and even—over the course of time—millions or billions of bets, casino operators can ensure that their losses on any single bet are covered by some other bet elsewhere in the casino: there’s a reason that, as the now-folded website Grantland pointed out in 2014, during the previous 23 years “bettors have won twice, while the sportsbooks have won 21 times” in Super Bowl betting. The thing to do in order to make something “gamable”—or “bettable,” which is to say a commodity worth the house’s time—is to break its acts into as many discrete chunks as possible.
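Taleb’s point can be checked with a small simulation; the 2 percent house edge and the bet sizes below are assumptions made purely for illustration, not figures from his book.

```python
# Keep the house edge and the total amount wagered fixed, and compare the
# spread of the casino's returns when the handle is split into many small
# bets versus a few enormous ones. Edge and bet sizes are assumed.
import random
import statistics

def casino_net(n_bets, stake, edge=0.02):
    """Casino's net result from n_bets even-money bets of a given stake."""
    net = 0.0
    for _ in range(n_bets):
        # the gambler wins an even-money bet with probability (1 - edge) / 2
        net += -stake if random.random() < (1 - edge) / 2 else stake
    return net

random.seed(3)
# total handle either way: $1,000,000
many_small = [casino_net(20_000, 50) for _ in range(100)]       # 20,000 x $50
few_large = [casino_net(10, 100_000) for _ in range(100)]       # 10 x $100,000

for label, results in (("many small bets", many_small), ("few large bets ", few_large)):
    print(f"{label}: mean {statistics.mean(results):>9,.0f}, "
          f"stdev {statistics.stdev(results):>9,.0f}")
```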

The point, I think, can be easily seen: by breaking up a tennis match into smaller sets and games, gamblers can commodify the sport, or make it “more bettable”—at least, from the point of view of a sharp operator. “Gamblers may bet a total of $20 million, but you needn’t worry about the casino’s health,” Taleb says—because the casino isn’t accepting ten $2 million bets. Instead, “the bets run, say, $20 on average; the casino caps the bets at a maximum.” Rather than making one bet on a match’s outcome, gamblers can make a series of bets on the “games within the game”—bets that, as in the case of the casino, inevitably favor the house even without any match-fixing involved.

In professional tennis there are, as Louisa Thomas pointed out in Grantland a few years ago, every year “tens of thousands of professional matches, hundreds of thousands of games, millions of points, and patterns in the chaos.” (If there is match-fixing—and as mentioned there have been many allegations over the years—well then, you’re in business: an excellent player can even “tank” many, many early opportunities, allowing confederates to cash in, and still come back to put away a weaker opponent.) Anyway, just as Taleb says, casino operators inevitably wish to make bets as numerous as possible because, in the long run, that protects their investment—and tennis, what a co-inky-dink, has more opportunities for betting than virtually any sport you can name.

The august majesty of the New York Times, however, cannot imagine any of that. In its “How Tennis Got Its Scoring System” story, the paper mentions the speculations of amateur players who say things like: “The eccentricities are part of the fun,” and “I like the old-fashioned touches that tennis has.” It’s all so quaint, in the view of the Times. But since no one can account for tennis’ scoring system otherwise, and everyone admits not only that gambling flourished around lawn tennis’ predecessor game, jeu de paume (or real tennis), but also that the popularity of the sport was eventually brought down precisely because of gambling scandals—and tennis is to this day vulnerable to gamblers—the hypothesis that tennis is scored the way it is for the purposes of gambling makes much more sense than, say, tennis historian Elizabeth Wilson’s solemn pronouncement that tennis’ scoring system is “a powerful exception to the tendencies toward uniformity” that are so dreadfully, dreadfully common in our contemporary vale of tears.

The reality, of course, is that tennis’ scoring system was obviously designed to fleece suckers, not to entertain the twee viewers of Wes Anderson movies. Yet while such dimwittedness can be expected from college students or proper ladies who have never left the Upper East Side of Manhattan or Philadelphia’s Main Line, why is the New York Times so flummoxed by the historical “mystery” of it all? The answer, I suspect anyway, lies in some other, far more significant, sport that is played with a set of rules very similar to tennis’s: one that equally breaks up the action into many more discrete acts than seem strictly necessary. In this game, too, there is an indirect connection between the central, defining act and wins and losses.

The name of that sport? Well, it’s really two versions of the same game.

One is called “the United States Senate”—and the other is called a “presidential election.”

Shut Out

But cloud instead, and ever-during dark
Surrounds me, from the cheerful ways of men
Cut off, and for the book of knowledge fair

And wisdom at one entrance quite shut out
Paradise Lost. Book III, 45-50

“Hey everybody, let’s go out to the baseball game,” the legendary 1960s Chicago disc jockey Dick Biondi said in the joke that (according to the myth) got him fired. “The boys,” Biondi is alleged to have said, “kiss the girls on the strikes, and …” In the story, of course, Biondi never finished the sentence—but you see where he was going, which is what makes the story interesting to a specific type of philosopher: the epistemologist. Epistemology is the study of how people know things: the question the epistemologist might ask about Biondi’s joke is, how do you know the ending to that story? For many academics today, the answer can be found in another baseball story, this time told by the literary critic Stanley Fish—a story that, oddly enough, also illustrates the political problems with that wildly popular contemporary concept: “diversity.”

As virtually everyone literate knows, “diversity” is one of the great terms of praise of the present: something that has it is, ipso facto, usually held to be better than something that doesn’t. As a virtue, “diversity” has tremendous range, because it applies both in natural contexts—“biodiversity” is all the rage among environmentalists—and in social ones: in the 2003 case of Grutter v. Bollinger, for example, the Supreme Court held that the “educational benefits of diversity” were a “compelling state interest.” Yet, what often goes unnoticed about arguments in favor of “diversity” is that they themselves are dependent upon a rather monoglot account of how people know things—which is how we get back to epistemology.

Take, for instance, Stanley Fish’s story about the late, great baseball umpire Bill Klem. “It ain’t nothin’ til I call it,” Klem supposedly once said in response to a batter’s question about whether the previous pitch was a ball or a strike. (It’s a story I’ve retailed before: cf. “Striking Out”). The literature professor has used that story, in turn, to illustrate what he views as the central lesson of what is sometimes called “postmodernism”: according to The New Yorker, Fish’s (and Klem’s) point is that “balls and strikes come into being only on the call of an umpire,” instead of being “facts in the world.” Klem’s remark, in other words—Fish thinks—illustrates just how knowledge is what is sometimes called “socially constructed.”

The notion of “social construction” is the idea—as City College of New York professor Massimo Pigliucci recently put the point—that “no human being, or organized group of human beings, has access to a god’s eye [view] of the world.” The idea, in other words, is that meaning is—as Canadian philosopher Ian Hacking described the concept in The Social Construction of What?—“the product of historical events, social forces, and ideology.” Or, to put it another way, that we know things because of our culture, or social group: not by means of our own senses and judgement, but through the people around us.

For Pigliucci, this view of how human beings access reality suggests that we ought therefore rely on a particular epistemic model: rather than one in which each person ought to judge evidence for herself, we would instead rely on one in which “many individually biased points of view enter into dialogue with each other, yielding a less (but still) biased outcome.” In other words, we should rely upon diverse points of view, which is one reason why Pigliucci says, for instance, that because of the overall cognitive lack displayed by individuals, we ought “to work toward increasing diversity in the sciences.” Pigliucci’s reasoning is, of course, also what forms the basis of Grutter: “When universities are granted the freedom to assemble student bodies featuring multiple types of diversity,” wrote defendant Lee Bollinger (then dean of the University of Michigan law school) in an editorial for the Washington Post about the case, “the result is a highly sought-after learning environment that attracts the best students.” “Diversity,” in sum, is a tool to combat our epistemic weaknesses.

“Diversity” is thereby justified by means of a particular vision of epistemology: a particular theory of how people know things. On this theory, we are dependent upon other people in order to know anything. Yet, the very basis of Dick Biondi’s “joke” is that you, yourself, can “fill in” the punchline: it doesn’t take a committee to realize what the missing word at the end of the story is. And what that reality—your ability to furnish the missing word—perhaps illustrates is an epistemic distinction John Maynard Keynes made in his magisterial 1921 work, A Treatise on Probability: a distinction that troubles the epistemology that underlies the concept of “diversity.”

“Now our knowledge,” Keynes writes in chapter two of that work, “seems to be obtained in two ways: directly, as the result of contemplating the objects of acquaintance; and indirectly, by argument” (italics in original). What Keynes is proposing, in other words, is an epistemic division between two ways of knowing—one of them being much like the epistemic model described by Fish or Pigliucci or Bollinger. As Keynes says, “it is usually agreed that we do not have direct knowledge” of such things as “the law of gravity … the cure for phthisis … [or] the contents of Bradshaw”—things like these, in other words, are only known through chains of reasoning, rather than direct experience. In order to know items like these, in other words, we have to have undergone a kind of socialization, otherwise known as education. We are dependent on other people to know those things.

Yet, as Keynes also recognizes, there is another means of knowing: “From an acquaintance with a sensation of yellow,” the English economist and thinker wrote, “I can pass directly to a knowledge of the proposition ‘I have a sensation of yellow.’” In this epistemic model, human beings can know things by immediate apprehension—the chief example of this form of knowing being, as Keynes describes, our own senses. What Keynes says, in short, is that people can know things in more than one way: one way through other people, yes, as Fish et al. say—but also through our own experience.

Or—to put the point differently—Keynes has a “diverse” epistemology. That would, at least superficially, seem to make Keynes’ argument a support for the theory of “diversity”: after all, he is showing how people can know things differently, which would appear to assist Lee Bollinger and Massimo Pigliucci’s argument for diversity in education. If people can know things in different ways, it would then appear necessary to gather more, and different, kinds of people in order to know anything. But just saying so exposes the weakness at the heart of Bollinger and Pigliucci’s ideal of “diversity.”

Whereas Keynes has a “diverse” epistemology, in short, Bollinger and Pigliucci do not: in their conception, human beings can only know things in one way. That is the way that Keynes called “indirect”: through argumentation and persuasion—or, as it’s sometimes put, “social construction.” In other words, the defenders of “diversity” have a rather monolithic epistemology, which is why Fish, for instance, once attacked the view that it is possible to “survey the world in a manner free of assumptions about what it is like and then, from that … disinterested position, pick out the set of reasons that will be adequate to its description.” If such a thing were possible, after all, it would be possible to experience a direct encounter with the world—which “diversity” enthusiasts like Fish deny is possible: Fish says, for instance, that “the rhetoric of disinterested inquiry … is in fact”—just how he knows this is unclear—“a very interested assertion of the superiority of one set of beliefs.” In other words, any other epistemological view than their own is merely a deception.

Perhaps though this is all just one of the purest cases of an “academic” dispute: eggheads arguing, as the phrase goes, about how many angels can dance on a pin. At least, until one realizes that the nearly undisputed triumph of the epistemology retailed by Fish and company also has certain quite-real consequences. For example, as the case of Bollinger demonstrates, although the “socially-constructed” epistemology is an excellent means, as has been demonstrated over the past several decades, of—in the words of Fish’s fellow literary critic Walter Benn Michaels—“battling over what skin color the rich kids should have,” it isn’t so great for, say, dividing up legislative districts: a question that, as Elizabeth Kolbert noted last year in The New Yorker, “may simply be mathematical.” But if so, that presents a problem for those who think of their epistemological views as serving a political cause.

Mathematics, after all, is famously not something that can be understood “culturally”; it is, as Keynes—and before him, a silly fellow named Plato—knew, perhaps the foremost example of the sort of knowing demonstrated by Dick Biondi’s joke. Mathematics, in other words, is the chief example of something known directly: when you understand something in mathematics, you understand it either immediately—or not at all. Which, after all, is the significance of Kolbert’s remarks: to say that re-districting—perhaps the most political act of all in a democracy—is primarily a mathematical operation is to say that to understand redistricting, you have to understand directly the mathematics of the operation. Yet if the “diversity” promoters are correct, then only their epistemology has any legitimacy: an epistemology that a priori prevents anyone from sensibly discussing redistricting. In other words, it’s precisely the epistemological blindspots promoted by the often-ostensibly “politically progressive” promoters of “diversity” that allow the current American establishment to ignore the actual interests of actual people.

Which, one supposes, may be the real joke.

Home of the Brave


audentes Fortuna iuvat.
The Aeneid. Book X, line 284. 

American prosecutors in the last few decades have—Patrick Keefe recently noted in The New Yorker—come to use more and more “a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture,” rather than prosecuting either the company officers or the company itself for criminal acts. According to prosecutors, it seems, this is because “the problem with convicting a company was that it could have ‘collateral consequences’ that would be borne by employees, shareholders, and other innocent parties.” In other words, taking action against a corporation could put it out of business. Yet, declining to prosecute because of the possible consequences is an odd position for a prosecutor to take: “Normally a grand jury will indict a ham sandwich if a prosecutor asks it to,” former Virginia governor Chuck Robb, once a prosecutor himself, famously remarked. Prosecutors, in other words, aren’t usually known for their sensitivity to circumstance—so why the change in recent decades? The answer may lie, perhaps, in a knowledge of child-raising practices of the ancient European nobility—and the life of Galileo Galilei.

“In those days,” begins one of the stories described by Nicola Clarke in The Muslim Conquest of Iberia: Medieval Arabic Narratives, “the custom existed amongst the Goths that the sons and daughters of the nobles were brought up in the king’s palace.” Clarke is describing the tradition of “fosterage”: the custom, among the medieval aristocracy, of sending one’s children to be raised by another noble family while raising another such family’s children in turn. “It is not clear what … was the motive” for fostering children, according to Laurence Ginnell’s The Brehon Laws (from 1894), “but its practice, whether designed for that end or not, helped materially to strengthen the natural ties of kinship and sympathy which bound the chief and clan or the flaith and sept together.” In Ginnell’s telling, “a stronger affection oftentimes sprang up between persons standing in those relations than that between immediate relatives by birth.” One of the purposes of fostering, in other words, was to decrease the risk of conflict by ensuring that members of the ruling classes grew up together: it’s a lot harder to go to war, the thinking apparently went, when you are thinking of your potential opponent as the kid who skinned his knee that one time, instead of the fearsome leader of a gang of killers.

Perhaps one explanation for why prosecutors appear willing to go easier on corporate criminals these days than in the past is that they share “natural ties”: they attended the same schools as those they are authorized to prosecute. Although statistics on the matter appear lacking, there’s reason to think that future white collar criminals and their (potential) prosecutors share those “old school” ties more and more these days: just as American law schools have seized a monopoly on the production of lawyers—Robert H. Jackson, who served from 1941 to 1954, was the last American Supreme Court Justice without a law degree—so too have America’s “selective” colleges seized a monopoly on the production of CEOs. “Just over 10% of the highest paid CEOs in America came from the Ivy League plus MIT and Stanford,” a Forbes article noted in 2012—a percentage higher than at any previous moment in American history. In other words, just as lawyers all come from the same schools these days, so too does upper management—producing the sorts of “natural ties” that not only lead to rethinking that cattle raid on your neighbor’s castle, but perhaps also any thoughts of subjecting Jamie Dimon to a “perp walk.” Yet as plausible an explanation as that might seem, it’s even more satisfying when it is combined with an incident in the life of the great astronomer.

In 1621, a Catholic priest named Scipio Chiaramonti published a book about a supernova that had occurred in 1572; the exploded star (as we now know it to have been) had been visible during daylight for several weeks in that year. The question for astronomers in that era, before the Copernican system had won acceptance, was whether the star had been one of the “fixed stars,” and thus existed beyond the moon, or whether it was closer to the earth than the moon: since—as James Franklin, from whose The Science of Conjecture: Evidence and Probability Before Pascal I take this account, notes—it was “the doctrine of the Aristotelians that there could be no change beyond the sphere of the moon,” a nova that far away would refute their theory. Chiaramonti’s book claimed that the measurements of 12 astronomers showed that the object was not as far as the moon—but Galileo pointed out that Chiaramonti’s work had, in effect, “cherrypicked”: he did not use all the data actually available, but merely used that which supported his thesis. Galileo’s argument, oddly enough, can also be applied to why American prosecutors aren’t pursuing financial crimes.
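For readers who prefer to see the effect rather than take it on faith, here is a minimal sketch, in Python, of the statistical point Galileo was making; the numbers are invented for illustration and have nothing to do with Chiaramonti’s actual measurements. The estimate built from all of the observations lands near the true value, while the estimate built only from the “favorable” observations does not.

```python
# A toy illustration of "cherrypicking" (invented numbers, not Chiaramonti's data):
# selecting only the observations that favor a thesis biases the resulting estimate.
import random

random.seed(0)

TRUE_VALUE = 100.0  # the quantity every observer is trying to measure
# Thirty hypothetical measurements scattered symmetrically around the true value.
measurements = [random.gauss(TRUE_VALUE, 15) for _ in range(30)]

all_mean = sum(measurements) / len(measurements)

# "Cherrypicking": keep only the readings that fall below the overall average,
# i.e. the ones that happen to support a predetermined conclusion.
favorable = [m for m in measurements if m < all_mean]
cherry_mean = sum(favorable) / len(favorable)

print(f"estimate from all {len(measurements)} measurements: {all_mean:.1f}")
print(f"estimate from the {len(favorable)} 'favorable' ones:  {cherry_mean:.1f}")
```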

The point is supplied, Keefe tells us, by James Comey: the former head of the FBI fired by President Trump. Before moving to Washington, Comey was U.S. Attorney for the Southern District of New York, in which position he once called—Keefe informs us—some of the attorneys working for the Justice Department members of “the Chickenshit Club.” Comey’s point was that while a “perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney,” it might instead “mean that you’re taking only those cases you’re sure you’ll win.” To Comey’s mind, the marvelous winning records of those working under him were not a guarantee of the ability of those attorneys, but instead a sign that his office was not pursuing enough cases. In other words, just as Chiaramonti chose only those data points that confirmed his thesis, the attorneys in Comey’s office were choosing only those cases they were sure they would win.

Yet, assuming that the decline in financial prosecutions is due to prosecutorial choice, why are prosecutors more likely, when it comes to financial crimes, to “cherrypick” today than they were a few decades ago? Keefe says this may be because “people who go to law school are risk-averse types”—but that only raises the question of why today’s lawyers are more risk-averse than their predecessors. The answer, at least according to a former Yale professor, may be that they are more likely to cherrypick because they are the product of cherrypicking.

Such at least was the answer William Deresiewicz arrived at in 2014’s “Don’t Send Your Kid to the Ivy League”—the most downloaded article in the history of The New Republic. “Our system of elite education manufactures young people who are smart and talented and driven, yes,” Deresiewicz wrote there—but it also produces students who are “anxious, timid, and lost.” Such students, the Yale faculty member wrote, had “little intellectual curiosity and a stunted sense of purpose”; they were “great at what they’re doing but [have] no idea why they’re doing it.” The question Deresiewicz wanted answered was, of course, why the students he saw in New Haven were this way; the answer he hit upon was that they were themselves the product of a cherrypicking process.

“So extreme are the admissions standards now,” Deresiewicz wrote in “Don’t,” “that kids who manage to get into elite colleges have, by definition, never experienced anything but success.” The “result,” he concluded, “is a violent aversion to risk.” Deresiewicz, in other words, is thinking systematically: it isn’t so much the fact that prosecutors and white collar criminals share the same background that has made prosecutions so much less likely, but rather the fact that prosecutors have experienced a certain kind of winnowing process in the course of achieving their positions in life.

To most people, in other words, scarcity equals value: Harvard admits very few people, therefore Harvard must provide an excellent education. But what the Chiaramonti episode brings to light is the notion that what makes Harvard so great may not be that it provides an excellent education, but instead that it admits such “excellent” people in the first place: Harvard’s notably long list of excellent alumni may be a result not of what’s happening in the classroom, but of what’s happening in the admissions office. The usual understanding of education takes the significant action to be what happens inside the school—but what Galileo’s statistical perspective says, instead, is that the important play may be what happens before the students even arrive.
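Deresiewicz’s point about admissions, like Galileo’s about measurements, is at bottom a claim about selection, and a brief simulation can make it concrete. This is only a sketch under assumptions invented for the purpose (a normally distributed “ability” score, a roughly two percent admission rate, and a school whose classroom adds nothing at all); it is not a model of any real institution. Even so, the graduates of the selective school look spectacular, because all of the work was done at the door.

```python
# A toy selection-effect simulation (all numbers invented for illustration):
# if a school admits only applicants who already score highly, its graduates
# will look excellent even if attending the school adds nothing whatsoever.
import random

random.seed(1)

# 100,000 hypothetical applicants with a normally distributed "ability" score.
applicants = [random.gauss(100, 15) for _ in range(100_000)]

# Admit roughly the top 2% by using the 2,000th-best score as the cutoff.
cutoff = sorted(applicants)[-2_000]
admitted = [a for a in applicants if a >= cutoff]

TREATMENT_EFFECT = 0.0  # assume, for the sake of argument, the classroom adds nothing
graduates = [a + TREATMENT_EFFECT for a in admitted]

print(f"average applicant score: {sum(applicants) / len(applicants):.1f}")
print(f"average graduate score:  {sum(graduates) / len(graduates):.1f}")
# The gap between the two averages was produced entirely in the "admissions
# office" (the selection step), none of it in the "classroom."
```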

What Deresiewicz’ work suggests, in turn, is that this very process may itself have unseen effects: efforts to make Harvard (along with other schools) more “exclusive”—and thus, ostensibly, provide a better education—may actually be making students worse off than they might otherwise be. Furthermore, Keefe’s work intimates that this insidious effect might not be limited to education; it may be causing invisible ripples throughout American society—ripples that may not be limited to the criminal justice system. If the same effects Keefe says are affecting lawyers are also affecting the future CEOs the prosecutors are not prosecuting, then perhaps CEOs are becoming less likely to pursue the legitimate risks that are the economic lifeblood of the nation—and perhaps more susceptible to pursuing illegitimate risks, of the sort that once landed CEOs in non-pinstriped suits. Accordingly, perhaps that old conservative bumper sticker really does have something to teach American academics—it’s just that what both sides ought perhaps to realize is that this relationship may be, at bottom, a mathematical one. That relation, you ask?

The “land of the free” because of “the brave.”

Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
(“Now, Lord, you dismiss your servant in peace, according to your word: for my eyes have seen your salvation, which you have prepared before the face of all peoples: a light for revelation to the Gentiles, and the glory of your people Israel.”)
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo before 1913. Yet why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment, puts it—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it”? The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote in the inaugural issue of Vox, in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors, then, for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. From that view, then, the Nazis must have succeeded because the German people were too poorly-educated to be able to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries between Germany, France, England and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much-higher percentage of Germans (82%) profiled in such books had exposure to university classes than those of other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like philosopher Martin Heidegger and legal theorist Carl Schmitt, who not only joined the party, but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of academics who directly supported the Nazis; by that standard, the culpability of elevator repairmen could just as well be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work, after all, is about just how terrible intellectuals have it—how many first novels are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust, Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required by the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to leave the ground, they may also have some costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Galt Harpham, former director of the National Humanities Center—sometimes go even further and claim, as Harpham did in his 2011 book The Humanities and the Dream of America, that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make a nonsense of history: traditionally, after all, it’s been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
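Lewis’s coin-flip example can be checked in a few lines of code; the following sketch (my own, with arbitrary trial counts) simply repeats batches of 10 and 1,000 flips many times and reports how often the share of heads lands near one half.

```python
# A quick check of Bernoulli's point: the share of heads in 1,000 flips clusters
# far more tightly around one half than the share of heads in 10 flips.
import random

random.seed(2)

def share_of_heads(n_flips: int) -> float:
    """Flip a fair coin n_flips times and return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

for n in (10, 1_000):
    shares = [share_of_heads(n) for _ in range(10_000)]
    near_half = sum(abs(s - 0.5) <= 0.05 for s in shares) / len(shares)
    print(f"{n:>5} flips: {near_half:.0%} of 10,000 trials land between "
          f"0.45 and 0.55 (observed range {min(shares):.2f}-{max(shares):.2f})")
```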

For example, the Law of Large Numbers is why, as statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when Vanguard founder John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it’s better to amass large quantities of results, rather than depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters, an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—whose insight is now often called “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes how Condorcet’s process actually works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” marks the boundary of opinion between gamblers, so Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet, thereby, proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.

Key to the functioning of Condorcet’s machine, in turn, is large numbers of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the point is that the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than an (extremely small) pool of voters consisting of the state legislature alone.
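A short simulation makes the marquis’s machine visible. This is a sketch under assumptions of my own (each voter independently picks the better alternative with probability 0.55, and the electorate sizes are arbitrary), not a reconstruction of Condorcet’s own mathematics; still, it shows the majority’s accuracy climbing toward certainty as the number of voters grows, which is exactly the property the Seventeenth Amendment relies on.

```python
# A minimal sketch of Condorcet's jury theorem: if each voter is independently
# correct with probability 0.55, the majority is correct far more often than any
# single voter, and ever more reliably as the electorate grows.
# (The 0.55 and the electorate sizes below are illustrative assumptions.)
import random

random.seed(3)

P_CORRECT = 0.55  # each voter's chance of picking the better alternative
TRIALS = 2_000    # simulated elections per electorate size

def majority_is_correct(n_voters: int) -> bool:
    """Hold one simulated election and report whether the majority chose correctly."""
    correct_votes = sum(random.random() < P_CORRECT for _ in range(n_voters))
    return correct_votes > n_voters / 2

for n in (1, 101, 1_001, 10_001):
    rate = sum(majority_is_correct(n) for _ in range(TRIALS)) / TRIALS
    print(f"{n:>6} voters: majority picks the better alternative "
          f"in {rate:.0%} of {TRIALS} simulated elections")
```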

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change of Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science” (a people with, he added, “an unhealthy faith in the validity of statistical evidence”) to a people holding “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically skeptical 1970s (and beyond) needs to reckon with the increasingly large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as the headline of Joe Pinsker’s article in The Atlantic put the point, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and given the increasingly documented decline in American social mobility, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlie them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth century and early twentieth constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” That argument is reproduced by such “academic superstar” professors of the humanities as Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.