“‘The frog is almost five hundred million years old. Could you really say with much certainty that America, with all its strength and prosperity, with its fighting man that is second to none, and with its standard of living that is the highest in the world, will last as long as … the frog?’”
—Joseph Heller. Catch-22. (1961).
 … the fall of empires which aspired to universal dominion could be predicted with very high probability by one versed in the calculus of chance.
—Laplace. Théorie Analytique des Probabilités. (1814).


If sexism exists, how could it be proved? A recent lawsuit—Chen-Oster v. Goldman Sachs, Inc., filed in New York City on 19 May 2014—aims to do just that. The suit makes four claims: that Goldman’s women employees make less than men in the same positions; that a “disproportionate” number of men have been promoted “over equally or more qualified women”; that women employees’ performance was “systematic[ally] underval[ued]”; and that “managers most often assign the most lucrative and promising opportunities to male employees.” The suit, then, echoes many of the themes developed by feminists over the past two generations, and in a general sense may be accepted, or even cheered, by those Americans sensitive to feminism. But those Americans may not be aware of the potential dangers of the second claim: dangers that threaten not merely the economic well-being of the majority of Americans, including women, but also America’s global leadership. Despite its seeming innocuousness, the second claim is potentially an existential threat to the future of the United States.

That, to be sure, is a broad assertion—disproportionate, you might say, to the magnitude of the lawsuit: it hardly seems likely that a suit over employment law, even one involving a firm as important to the global financial machinery as Goldman Sachs, could threaten the future of the United States. Yet few today would deny the importance of nuclear weapons—nor that they pose an existential threat to humanity itself. And if nuclear weapons are such a threat, then the chain of reasoning that led to those weapons must be at least as important as the weapons themselves, if not more so. As I will show, the second claim poses a threat to exactly that chain of reasoning.

That, again, may appear a preposterous assertion: how can a seemingly minor allegation in a lawsuit about sexism have anything to do with nuclear weapons, much less the chain of logic that led to them? One means of understanding how requires a visit to what the late Harvard biologist Stephen Jay Gould called “the second best site on the standard tourist itinerary of [New Zealand’s] North Island—the glowworm grotto of Waitomo Cave.” Upon the ceiling of this cave, it seems, live fly larvae whose “illuminated rear end[s],” Gould tells us, turn the cave into “a spectacular underground amphitheater”—an effect that, it appears, mirrors the night sky. But what’s interesting about the Waitomo Cave is that it does this mirroring with a difference: upon observing the cave, Gould “found it … unlike the heavens” because whereas stars “are arrayed in the sky at random,” the glowworms “are spaced more evenly.” The reason why is that the “larvae compete with, and even eat, each other—and each constructs an exclusive territory”: since each larva has more or less the same power as every other larva, each territory is more or less the same size. Hence, as Gould says, the heaven of the glowworms is an “ordered heaven,” as opposed to the disorderly one visible on clear nights around the world—a difference that illuminates not only just what’s wrong with the plaintiffs’ second claim in Chen-Oster v. Goldman Sachs, Inc., but also how that claim concerns nuclear weapons.

Again, that might appear absurd: how can understanding a Southern Hemispheric cavern help illuminate—as it were—a lawsuit against the biggest of Wall Street players? To understand how requires another journey—though this one is in time, not space.

In 1767, an English clergyman named John Michell published a paper with the unwieldy title of “An Inquiry into the Probable Parallax, and Magnitude of the Fixed Stars, from the Quantity of Light Which They Afford us, and the Particular Circumstances of Their Situation.” Michell’s purpose in the paper, he wrote, was to inquire whether the stars “had been scattered by mere chance”—or, instead, by “their mutual gravitation, or to some other law or appointment of the Creator.” Since (according to Michell’s biographer, Russell McCormmach) Michell assumed “that a random distribution of stars is a uniform distribution,” he concluded that—since the night sky does not resemble the roof of the Waitomo Cave—the distribution of stars must be the result of some natural law. Or even, he hinted, the will of the Creator himself.

So things might have stayed had Michell’s argument “remained buried in the heavy quartos of the Philosophical Transactions”—as James Forbes, the Professor of Natural Philosophy at Edinburgh University, would write nearly a century later. But Michell’s argument hadn’t; several writers, it seems, took it as evidence for the existence of the supernatural. Hence, Forbes felt obliged to refute an argument that, he thought, was “too absurd to require refutation.” To think—as Michell did—that “a perfectly uniform and symmetrical disposition of the stars over the sky,” as Forbes wrote, “could alone afford no evidence of causation” would be “palpably absurd.” The reason Forbes thought that way, in turn, is what connects all this both to the Goldman lawsuit—and to nuclear weapons.

Forbes made his point by an analogy to flipping a coin: to think that the stars had been distributed randomly because they were evenly spaced across the sky, he wrote, would be as ridiculous as expecting that “on 1000 throws [of a fair coin] there should be exactly 500 heads and 500 tails.” In fact, the Scotsman pointed out, mathematics demonstrates that in such a case of 1000 throws “there are almost forty chances to one [i.e., nearly 98%], that some one of the other possible events shall happen instead of the required one.” In 1000 throws of a fair coin, there’s less than a three percent chance that the flipper will get exactly 500 heads: it’s simply far more likely that there will be some other number of heads. In Gould’s essay about the Waitomo Cave, he put the same point like this: “Random arrays always include some clumping … just as we will flip several heads in a row quite often so long as we can make enough tosses.” Because the stars clump together, Forbes argued, that is evidence that they are randomly distributed—not evidence of a benevolent Creator, as Michell thought. Forbes’s insight about how to detect randomness, or chance, in astronomical data had implications far beyond the stars: in a story that would take much more space than this essay to tell, it eventually led a certain Swiss patent clerk to take up the phenomenon called “Brownian motion.”
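Forbes’s arithmetic is easy enough to verify today. The sketch below is my own calculation of the two figures cited above—the chance of exactly 500 heads in 1,000 tosses, and the odds against that outcome—not anything Forbes himself wrote:

```python
# A sketch of the arithmetic behind Forbes's example (my calculation, not his):
# the probability of exactly 500 heads in 1,000 tosses of a fair coin, and the
# corresponding odds against that outcome.
from math import comb

p_exact = comb(1000, 500) * 0.5 ** 1000
print(f"P(exactly 500 heads) = {p_exact:.4f}")                       # about 0.025
print(f"Odds against: roughly {(1 - p_exact) / p_exact:.0f} to 1")   # about 39 to 1
```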

The clerk, of course, was Albert Einstein; the subject of his 1905 paper, “On the Movement of Small Particles Suspended in a Stationary Liquid Demanded by the Molecular-Kinetic Theory of Heat,” was the tendency—“easily observed in a microscope,” Einstein remarks—for tiny particles to move in an apparently spontaneous manner. What Einstein realized (as physicist Leonard Mlodinow put it in his 2008 book, The Drunkard’s Walk: How Randomness Rules Our Lives) was that the “jiggly” motion of dust particles and the like results from collisions between them and even smaller particles, and so “there was a predictable relationship between factors such as the size, number, and speed of the molecules and the observable frequency and magnitude of the jiggling.” In other words, “though the collisions [between the molecules and the larger particles] occur very frequently, because the molecules are so light, those frequent isolated collisions have no visible effects” for the most part—but once in a while, “when pure luck occasionally leads to a lopsided preponderance of hits from some particular direction,” there are enough hits to send the particle moving. Or, to put it another way, when a thousand coin flips all come up heads, the particle will move. Put in that fashion, to be sure, Einstein’s point might appear obscure at best—but as Mlodinow goes on to say, it is no accident that this seemingly minor paper became the great physicist’s “most cited work.” That’s because the ultimate import of the paper was to demonstrate the existence … of the atom. Which is somewhat of a necessity for building an atom bomb.

The existence of the atomic bomb, then, can be said to depend on the insight developed by Forbes: just how significant the impact of chance can be in the formation of both the very large (the universe itself, according to Forbes), and the very small (the atom, according to Einstein). The point both men attempted to make, in turn, is that the existence of order is something very rare in this universe, at any rate (whatever may be the case in others). Far more common, then, is the existence of disorder—which brings us back to Goldman Sachs and the existence of sexism.

It is the contention of the second point in the plaintiffs’ brief in Chen-Oster v. Goldman Sachs, Inc., remember, that there exists (as University of Illinois English professor Walter Benn Michaels has noted) a “‘“stark” underrepresentation’ [of women] in management” because “‘just 29 percent of vice presidents, 17 percent of managing directors, and 14 percent of partners’” are women. Goldman Sachs, as it happens, has roughly 35,000 employees—about 0.01 percent of the population of the United States, which the 2010 Census put at roughly 309 million: about 157 million women and around 151 million men. Hence, the question to be asked about the Goldman Sachs lawsuit (and I write this as someone with little sympathy for Goldman Sachs) is this: if the reasoning Einstein followed to demonstrate the existence of the atom is correct, and if the chance of landing exactly 500 heads when tossing a coin 1000 times is less than three percent, how much less likely is it that a sample of 35,000 people will exactly mirror the proportions of 309 million? The answer, it would seem, is rather low: it’s simply a lot more likely that Goldman Sachs would have something other than a proportionate ratio of men to women than the reverse, just as it’s a lot more likely that stars should clump together than be evenly spaced like the worms in the New Zealand cave. And that is to say that the disproportionate number of men in leadership positions at Goldman Sachs is merely evidence of the absence of a pro-woman bias at Goldman Sachs, not evidence of the existence of a bias against women.
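To give a rough sense of the arithmetic at stake, the sketch below (my own illustration, not anything from the plaintiffs’ brief) asks how likely it is that a random sample of 35,000 people, drawn from a population assumed to be 50.8 percent female, would contain exactly the proportional number of women:

```python
# An illustrative sketch (not from the plaintiffs' brief): the probability that
# a random sample of 35,000 people exactly mirrors an assumed 50.8% female
# share of the wider population.
from scipy.stats import binom

n, p = 35_000, 0.508            # sample size; assumed female share of the population
k = round(n * p)                # the exactly proportional count of women
print(f"P(exactly {k:,} women in a sample of {n:,}) = {binom.pmf(k, n, p):.4%}")
# prints roughly 0.43% -- exact proportionality is itself a long shot
```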

To which it might be replied, of course, that the point isn’t the exact ratio, but rather that it is so skewed toward one sex: what are the odds, it might be said, that all three categories of employee should be similarly bent in one direction? Admittedly, that is an excellent point. But it’s also a point that’s missing from the plaintiffs’ brief: there is no mention of a calculation respecting the particular odds in the case, despite the fact that the mathematical techniques necessary to do those calculations have been known since long before the atomic bomb, or even Einstein’s paper on the existence of the atom. And it’s that point, in turn, that concerns not merely the place of women in society—but ultimately the survival of the United States.

After all, the reason that the plaintiffs in the Goldman Sachs suit do not feel the need to include calculations of the probability of the disproportion they mention—despite the fact that it is the basis of their second claim—is that the American legal system is precisely structured to keep such arguments at bay. As Oliver Roeder observed in FiveThirtyEight last year, for example, the justices of the U.S. Supreme Court “seem to have a reluctance—even an allergy—to taking math and statistics seriously.” And that reluctance is not limited to the justices alone: according to Sanford Levinson, a University of Texas professor of law and government interviewed by Roeder in the course of reporting his story, “top-level law schools like Harvard … emphasize … traditional, classical legal skills” at the expense of what Levinson called “‘genuine familiarity with the empirical world’”—i.e., the world revealed by techniques pioneered by investigators like James Forbes. Since, as Roeder observes, all nine current Supreme Court justices attended either Harvard or Yale, that suggests that the curriculum followed at those schools has a connection to the decisions reached by their judicial graduates.

Still, that exclusion might not be so troublesome were it limited merely to the legal machinery. But as Nick Robinson reported last year in the Buffalo Law Review, attorneys have “dominated the political leadership of the United States” throughout its history: “Since independence,” Robinson pointed out there, “more than half of all presidents, vice presidents, and members of Congress have come from a law background.” If the leadership class of the United States is drawn largely from American law schools, and American law schools train students to disdain mathematics and the empirical world, then it seems plausible to conclude that much of the American leadership class is specifically trained to ignore both the techniques pioneered by investigators like Forbes and the underlying reality they reveal: the role played by chance. Hence, while such a divergence may allow plaintiffs like those in the Goldman case to make allegations of sexism without performing the hard work of actually demonstrating how it might be possible mathematically, it might also have consequences for actual women living, say, in a nation increasingly characterized by a vast difference between the quantifiable wealth of those at the top (like people who work for Goldman Sachs) and everyone else.

And not merely that. For decades if not centuries, Americans have bemoaned the woeful performance of American students in mathematics: “Even in Massachusetts, one of the country’s highest-performing states,” Elizabeth Green observed in one of the latest of these reports in the New York Times in 2014, “math students are more than two years behind their counterparts in Shanghai.” And results like that, as the journalist Michael Lewis put the point several years ago in Vanity Fair, risk “ceding … technical and scientific leadership to China”—and since, as demonstrated, it’s knowledge of mathematics (and specifically of the mathematics of probability) that made the atomic bomb possible, ignorance of the subject is conversely a serious threat to national existence. Yet few Americans, it seems, have considered whether the fact that students do not take mathematics (and specifically probability) seriously may have anything to do with the fact that the American leadership class explicitly rules such topics, quite literally, out of court.

Of course, as Lewis also pointed out in his recent book, The Undoing Project: A Friendship that Changed Our Minds, American leaders may not be alone in ignoring the impact of probabilistic reasoning: when, after the Yom Kippur War—which had caught Israel’s leaders wholly by surprise—future Nobel Prize winner Daniel Kahneman and intelligence officer Zvi Lanir attempted to “introduce a new rigor in dealing with questions of national security” by replacing intelligence reports written “‘in the form of essays’” with “probabilities, in numerical form,” they found that “the Israeli Foreign Ministry was ‘indifferent to the specific probabilities.’” Kahneman suspected that the ministry’s indifference, Lewis reports, was due to the fact that Israel’s leaders’ “‘understanding of numbers [was] so weak that [the probabilities did not] communicate’”—but betting that the leadership of other countries will continue to match the ignorance of our own does not appear particularly wise. Still, as Oliver Roeder noted for FiveThirtyEight, not every American is willing to keep rolling those dice: University of Texas law professor Sanford Levinson, Roeder reported, thinks that the “lack of rigorous empirical training at most elite law schools” requires the “long-term solution” of “a change in curriculum.” And that, in turn, suggests that Chen-Oster v. Goldman Sachs, Inc. might be more than a flip of a coin over the existence of sexism on Wall Street.


Stayin’ Alive

And the sun stood still, and the moon stayed,
until the people had avenged themselves upon their enemies.
—Joshua 10:13.


“A Sinatra with a cold,” wrote Gay Talese for Esquire in 1966, “can, in a small way, send vibrations through the entertainment industry and beyond as surely as a President of the United States, suddenly sick, can shake the national economy”; in 1994, Nobel laureate economist Paul Krugman mused that a “commitment to a particular … doctrine” can eventually set “the tone for policy-making on all issues, even those which may seem to have nothing to do with that doctrine.” Like a world leader—or a celebrity—the health of an idea can have unforeseen consequences; for example, it is entirely possible that the legal profession’s intellectual bias against mathematics has determined the nation’s racial policy. These days, after all, as literary scholar Walter Benn Michaels has observed, racial justice in the United States is held to what Michaels calls “the ideal of proportional inequality”—an ideal whose nobility, as Nobel Prize winner Daniel Kahneman and his colleague Amos Tversky have demonstrated, is matched only by its mathematical futility. The law, in short, has what Oliver Roeder of FiveThirtyEight recently called an “allergy” to mathematics; what I will argue is that, as a consequence, minority policy in the United States has a cold.

“The concept that mathematics can be relevant to the study of law,” law professor Michael I. Meyerson observed in 2002’s Political Numeracy: Mathematical Perspectives on Our Chaotic Constitution, “seems foreign to many modern legal minds.” In fact, he continued, to many lawyers “the absence of mathematics is one of law’s greatest appeals.” The strength of that appeal was on display recently in the Wisconsin case discussed by Oliver Roeder, Gill v. Whitford—a case that, as Roeder says, “hinges on math” because it involves the invention of a mathematical standard to measure “when a gerrymandered [legislative] map infringes on voters’ rights.” In oral arguments in Gill, Roeder observed, Chief Justice John Roberts said, of the mathematical techniques at the heart of the case, that it “may be simply my educational background, but I can only describe [them] as sociological gobbledygook”—a derisive slight that recalls 19th-century Supreme Court Justice Joseph Story’s sneer concerning what he called “men of speculative ingenuity, and recluse habits.” Such statements are hardly foreign in the annals of the Supreme Court: “Personal liberties,” Justice Potter Stewart wrote in a 1975 opinion, “are not rooted in the law of averages.” (Stewart’s sentence, perhaps incidentally, uses a phrase—“law of averages”—found nowhere in the actual study of mathematics.) Throughout the history of American law, in short, there is strong evidence of bias against the study and application of mathematics to jurisprudence.

Yet without the ability to impose that bias on others, even conclusive demonstrations of the law’s skew would not matter—but of course lawyers, as Nick Robinson remarked just this past summer in the Buffalo Law Review, have “dominated the political leadership of the United States.” As Robinson went on to note, “more than half of all presidents, vice presidents, and members of Congress have come from a law background.” This lawyer-heavy structure has had an effect, Robinson says: for instance, he claims “that lawyer-members of Congress have helped foster the centrality of lawyers and courts in the United States.” Robinson’s research then, which aggregates many studies on the subject, demonstrates that the legal profession is in a position to have effects on the future of the country—and if lawyers can affect the future of the country in one fashion, it stands to reason that they may have affected it in others. Not only then may the law have an anti-mathematical bias, but it is clearly positioned to impose that bias on others.

That bias, in turn, is what I suspect has led Americans to what Michaels calls the theory of “proportional representation” when it comes to justice for minority populations. This theory holds, according to Michaels, that a truly just society would be a “society in which white people were proportionately represented in the bottom quintile [of income] (and black people proportionately represented in the top quintile)”—or, as one commenter on Michaels’ work has put it, it’s the idea that “social justice is … served if the top classes at Ivy League colleges contain a percentage of women, black people, and Latinos proportionate to the population.” Within the legal profession, the theory appears to be growing: as Michaels has also observed, the plaintiffs in “the recent suit alleging discrimination against women at Goldman Sachs” complained of a “‘stark’ underrepresentation” of women in management because women represented “just 29 percent of vice presidents, 17 percent of managing directors, and 14 percent of partners”—percentages that, of course, differ greatly from the roughly 50 percent of the American population who are women. But while the idea of a world in which the population of every institution mirrors the population as a whole may appear plausible to lawyers, it’s absurd to any mathematician.

People without mathematical training, that is, have wildly inaccurate ideas about probability—precisely the point of the work of social scientists Daniel Kahneman and Amos Tversky. “When subjects are instructed to generate a random sequence of hypothetical tosses of a fair coin,” wrote the two psychologists in 1971 (citing an earlier study), “they produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict.” In other words, when people are asked to write down the possible results of tossing a coin many times, they invariably give answers that are (nearly) half heads and half tails despite the fact that—as Brian Everitt observed in his 1999 book Chance Rules: An Informal Guide to Probability, Risk, and Statistics—in reality “in, say, 20 tosses of a fair coin, the number of heads is unlikely to be exactly 10.” (Everitt goes on to note that “an exact fifty-fifty split of heads and tails has a probability of a little less than 1 in 5.”) Hence, a small sample of 20 tosses has less than a twenty percent chance of being ten heads and ten tails—a fact that may appear yet more significant when it is noted that the chance of getting exactly 500 heads when flipping a coin 1000 times is less than 3%. Approximating the ideal of proportionality, then, is something that mathematics tells us is not simple or easy to do even once, and yet, in the case of college admissions, advocates of proportional representation suggest that colleges, and other American institutions, ought to be required to do something like what baseball player Joe DiMaggio did in the summer of 1941.

In that year in which “the Blitzkrieg raged” (as the Rolling Stones would write later), the baseball player Joe DiMaggio achieved what Gould calls “the greatest and most unattainable dream of all humanity, the hope and chimera of all sages and shaman”: the New York Yankee outfielder hit safely in 56 straight games. Gould doesn’t mean, of course, that all human history has been devoted to hitting a fist-sized sphere, but rather this: while many baseball fans are aware of DiMaggio’s feat, few are aware that the mathematics of the streak shows it was “so many standard deviations above the expected distribution that it should not have occurred at all.” To make that case, Gould cites the research of Nobel laureate Ed Purcell.

What that research shows is that, to make it a better-than-even-money proposition “that a run of even fifty games will occur once in the history of baseball,” then “baseball’s rosters would have to include either four lifetime .400 batters or fifty-two lifetime .350 batters over careers of one thousand games.” There are, of course, only three men who ever hit better than .350 lifetime (Cobb, Hornsby, and, tragically, Joe Jackson), which is to say that DiMaggio’s streak is, Gould wrote, “the most extraordinary thing that ever happened in American sports.” That in turn is why Gould can say that Joe DiMaggio, even as the Panzers drove across a thousand miles of Russian wheatfields, actually attained a state chased by saints for millennia: by holding back, from 15 May to 17 July, 1941, the inevitable march of time like some contemporary Joshua, DiMaggio “cheated death, at least for a while.” To paraphrase Paul Simon, Joe DiMaggio fought a duel that, every way you look at it, he was bound to lose—which is to say, as Gould correctly does, that his victory lay in postponing the loss all of us are bound one day to suffer.
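To see just how long those odds are, here is a back-of-the-envelope sketch. The assumptions—a lifetime .350 hitter, four at-bats a game, independent at-bats—are mine, chosen only for illustration; this is not Purcell’s actual model:

```python
# Back-of-the-envelope sketch of the streak's improbability. Assumptions are
# mine for illustration (a lifetime .350 hitter, four at-bats per game,
# independent at-bats); this is not Ed Purcell's actual calculation.
batting_avg, at_bats_per_game, streak_length = 0.350, 4, 56

p_hit_in_a_game = 1 - (1 - batting_avg) ** at_bats_per_game   # about 0.82
p_streak = p_hit_in_a_game ** streak_length                    # about 1.7e-05
print(f"P(at least one hit in a given game) = {p_hit_in_a_game:.3f}")
print(f"P(hitting safely in 56 straight)    = {p_streak:.1e}")  # roughly 1 in 60,000
```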

Woo woo woo.

What appears to be a simple baseball story, then, actually has a lesson for us here today: it tells us that advocates of proportional representation are thereby suggesting that colleges ought to be more or less required not merely to reproduce Joe DiMaggio’s hitting streak from the summer of 1941, but to do it every single season—a quest that in a practical sense is impossible. The question then must be how such an idea could ever have taken root in the first place—a question that Paul Krugman’s earlier comment about how a commitment to bad thinking about one issue can lead to bad thinking about others may help to answer. Krugman suggested in that essay that one reason why people who ought to know better might tolerate “a largely meaningless concept” was “precisely because they believe[d] they [could] harness it in the service of good policies”—and quite clearly, proponents of the proportional ideal have good intentions, which may be just why it has held on so long despite its manifest absurdity. But good intentions are not enough to ensure the staying power of a bad idea.

“Long streaks always are, and must be,” Gould wrote about DiMaggio’s feat of survival, “a matter of extraordinary luck imposed upon great skill”—which perhaps could be translated, in this instance, by saying that if an idea survives for some considerable length of time it must be because it serves some interest or another. In this case, it seems entirely plausible to think that the notion of “proportional representation” in relation to minority populations survives not because it is just, but instead because it allows the law, in the words of literary scholar Stanley Fish, “to have a formal existence”—that is, “to be distinct, not something else.” Without such a distinction, as Fish notes, the law would be in danger of being “declared subordinate to some other—non-legal—structure of concern,” and if so then “that discourse would be in the business of specifying what the law is.” But the legal desire Fish dresses up in a dinner jacket, attorney David Post of The Volokh Conspiracy website suggests, may merely be the quest to continue to wear a backwards baseball cap.

Apropos of Oliver Roeder’s article about the Supreme Court’s allergy to mathematics, in other words, Post notes that not only is there “a rather substantial library of academic commentary on ‘innumeracy’ at the court,” but “it is unfortunately well within the norms of our legal culture … to treat mathematics and related disciplines as kinds of communicable diseases with which we want no part.” What’s driving the theory of proportional representation, then, may not be the quest for racial justice, or even the wish to maintain the law’s autonomy, but instead the desire of would-be lawyers to avoid mathematics classes. But if so, then by seeking social justice through the prism of the law—which rules out of court at the outset any consideration of mathematics as a possible tool for thinking about human problems, and hence forbids (or at least, as in Gill v. Whitford, obstructs) certain possible courses of action to remedy social issues—advocates for African-Americans and others may be unnecessarily limiting their available options, which may be far wider, and wilder, than anyone viewing the problems of race through the law’s current framework can now see.

Yet—as any consideration of streaks and runs must, eventually, conclude—just because that is how things are at the moment is no reason to suspect that things will remain that way forever: as Gould says, the “gambler must go bust” when playing an opponent, like history itself, with near-infinite resources. Hence, Paul Simon to the contrary, the impressive thing about the Yankee Clipper’s feat in that last summer before the United States plunged into global war is not that, after “Ken Keltner made two great plays at third base and lost DiMaggio the prospect of a lifetime advertising contract with the Heinz ketchup company,” Joe DiMaggio left and went away. Instead, it is that the great outfielder lasted as long as he did; just so, Oliver Roeder mentions in his article that Sanford Levinson, a professor of law at the University of Texas at Austin and one of the best-known American legal scholars, has diagnosed “the problem [as] a lack of rigorous empirical training at most elite law schools”—which is to say that “the long-term solution would be a change in curriculum.” The law’s streak of avoiding mathematics, in other words, may be like all streaks. In the words of the poet of the subway walls,

Koo-koo …



Alice came to a fork in the road. “Which road do I take?” she asked.
“Where do you want to go?” responded the Cheshire Cat.
“I don’t know,” Alice answered.
“Then,” said the Cat, “it doesn’t matter.”
—Lewis Carroll. Alice’s Adventures in Wonderland. (1865).


At Baden-Baden, 1925, Reti, the hypermodern challenger, opened with the Hungarian, or King’s Fianchetto; Alekhine—the only man to die still holding the title of world champion—countered with an unassuming king’s pawn to e5. The key moment did not take place, however, until Alekhine threw his rook nearly across the board at move 26, which appeared to lose the champion a tempo—but as C.J.S. Purdy would write for Chess World two decades, a global depression, and a world war later, “many of Alekhine’s moves depend on some surprise that comes far too many moves ahead for an ordinary mortal to have the slightest chance of foreseeing it.” The rook move, in sum, resulted in the triumphant slash of Alekhine’s bishop at move 42—a move that “forked” the only two capital pieces Reti had left: his knight and rook. “Alekhine’s chess,” Purdy would write later, “is like a god’s”—a hyperbole that not only leaves this reader of the political scientist William Riker thankful that the chess writer did not see the game Riker saw played at Freeport, 1858, but also grateful that neither man saw the game played at Moscow, 2016.

All these games, in other words, ended with what is known as a “fork,” or “a direct and simultaneous attack on two or more pieces by one piece,” as the Oxford Companion to Chess defines the maneuver. A fork thereby forces the opponent to choose; in Alekhine’s triumph, called “the gem of gems” by Chess World, the Russian grandmaster forced his opponent to choose which piece to lose. Just so, in The Art of Political Manipulation, from 1986, University of Rochester political scientist William Riker observed that “forks” are not limited to dinner or to chess. In Political Manipulation Riker introduced the term “heresthetics,” or—as Norman Schofield defined it in 2006—“the art of constructing choice situations so as to be able to manipulate outcomes.” Riker further said that “the fundamental heresthetical device is to divide the majority with a new alternative”—or in other words, heresthetics is often a kind of political fork.

The premier example Riker used to illustrate such a political forking maneuver was performed, the political scientist wrote, by “the greatest of American politicians,” Abraham Lincoln, at the sleepy Illinois town of Freeport during the drowsy summer of 1858. Lincoln that year was running for the U.S. Senate seat for Illinois against Stephen Douglas—the man known as “the Little Giant” both for his less-than-imposing frame and his significance in national politics. So important had Douglas become by that year—by extending federal aid to the first “land grant” railroad, the Illinois Central, and successfully passing the Compromise of 1850, among many other achievements—that it was an open secret that he would run for president in 1860. And not merely run; the smart money said he would win.

Where the smart money was not was on Abraham Lincoln, a lanky and little-known one-term congressman in 1858. The odds against the would-be Illinois politician were so long, in fact, that according to Riker Lincoln had to take a big risk to win—which he did, by posing a question to Douglas at the little town of Freeport, near the Wisconsin border, towards the end of August. That question was this: “Can the people of a United States Territory, in any lawful way, against the wish of any citizen of the United States, exclude slavery from its limits prior to the formation of a state constitution?” It was a question, Riker wrote, that Lincoln had honed “stiletto-sharp.” It proved a knife in the heart of Stephen Douglas’ ambitions.

Lincoln was, of course, explicitly against slavery, and therefore thought that territories could ban it prior to statehood. But many others thought differently; in 1858 the United States stood poised at a precipice that, even then, only a few—Lincoln among them—could see. The nation had already been roiled by the Kansas-Nebraska Act of 1854, and a state of war already existed between pro- and anti-slavery men on the frontier. The year before, the U.S. Supreme Court had outlawed the prohibition of slavery in the territories by means of the Dred Scott decision—a decision that, in his “House Divided” speech that June, Lincoln had charged Douglas with conspiring to bring about with the president of the United States, James Buchanan, and Supreme Court Chief Justice Roger Taney. What Lincoln’s question was meant to do, Riker argued, was to “fork” Douglas between two constituencies: the local Illinois constituents who could return, if they chose, Douglas to the Senate in 1858—and the larger, national constituency that could deliver, if they chose, Douglas the presidency in 1860.

“If Douglas answered yes” to Lincoln’s question, Riker wrote, and thereby said that a territory could exclude slavery prior to statehood, “then he would please Northern Democrats for the Illinois election”—because he would take an issue away from Lincoln by explicitly stating that they shared the same opinion, depriving Lincoln of one of his chief weapons—a weapon especially potent in far-northern, German-settled towns like Freeport. But what Lincoln saw, Riker says, is that if Douglas said yes he would also earn the enmity of Southern slaveowners, for whom such an answer would appear “a betrayal of the Southern cause of the expansion of slave territory”—and thus cost him a clean nomination as the Democratic Party’s candidate for president in 1860. If, however, Douglas answered no, “then he would appear to capitulate entirely to the Southern wing of the party and alienate free-soil Illinois Democrats”—thereby hurting “his chances in Illinois in 1858 but help[ing] his chances for 1860.” In Riker’s view, in other words, at Freeport in 1858 Lincoln forked Douglas much as the Russian grandmaster would fork his opponent at the German spa in 1925.

Yet just as that game at the German spa was hardly the last time the maneuver was used in chess, “forking” one’s political opponent scarcely ended in the little nineteenth-century Illinois farm village. Many of Hillary Clinton’s supporters now believe that the Russians “interfered” with the 2016 American election—but what hasn’t been addressed is how the Russian state, led by Putin, could have interfered with an American election. Like a vampire who can only invade a home once invited, anyone attempting to “interfere” with an election must have some material to work with; Lincoln’s question at Freeport, after all, exploited a previously existing difference between two factions within the Democratic Party. If the Russians did “interfere” with the 2016 election, that is, they could only have done so if there already existed yet another split within the Democratic ranks—which, as everyone knows, there was.

“Not everything is about an economic theory,” Hillary Clinton claimed in a February 2016 speech in Nevada—a claim familiar to anyone who’s been on campus in the past two generations. After all, as gadfly Thomas Frank has remarked (referring to the work of James McGuigan), the “pervasive intellectual reflex” of our times is the “‘terror of economic reductionism.’” The idea that “not everything is about economics” is the core of what is sometimes known as the “cultural left,” or what Penn State University English professor (and former holder of the Paterno Chair) Michael Bérubé has termed “the left that aspires to analyze culture” as opposed to “the left that aspires to carry out public policy.” Clinton’s speech largely echoed the views of that “left,” which—according to the late philosopher Richard Rorty, in the book that inspired Bérubé’s remarks above—is more interested in “remedies … for American sadism” than those “for American selfishness.” It was that left that the rest of Clinton’s speech was designed to attract.

“If we broke up the big banks tomorrow,” Clinton went on to ask after the remark about economic theory, “would that end racism?” The crowd, of course, answered “No.” “Would that end sexism?” she continued, and then again—a bit more convoluted, now—“would that end discrimination against the LGBT community?” Each time, the candidate was answered with a “No.” With this speech, in other words, Clinton visibly demonstrated the arrival of this “cultural left” at the very top of the Democratic Party—the ultimate success of the agenda pushed by English professors and others throughout the educational system. If, as Richard Rorty wrote, it really is true that “the American Left could not handle more than one initiative at a time,” so that “it either had to ignore stigma in order to concentrate on money, or vice versa,” then Clinton’s speech signaled the victory of the “stigma” crowd over the “money” crowd. Which is why what Clinton said next was so odd.

The next line of Clinton’s speech went like this: “Would that”—i.e., breaking up the big banks—“give us a real shot at ensuring our political system works better because we get rid of gerrymandering and redistricting and all of these gimmicks Republicans use to give themselves safe seats, so they can undo the progress we have made?” It’s a strange line; in the first place, it’s not exactly the most euphonious group of words I’ve ever heard in a political speech. But more importantly—well, actually, breaking up the big banks could perhaps do something about gerrymandering. According to OpenSecrets.org, after all, “72 percent of the [commercial banking] industry’s donations to candidates and parties, or more than $19 million, went to Republicans” in 2014—hence, maybe breaking them up could reduce the money available to Republican candidates, and so lessen their ability to construct gerrymandered districts. But, of course, doing so would require precisely the kinds of thought pursued by the “public policy” left—which Clinton had already signaled she had chosen against. The opening lines of her call-and-response, in other words, demonstrated that she had chosen to sacrifice the “public policy” left—the one that speaks the vocabulary of science—in favor of the “cultural left”—the one that speaks the vocabulary of the humanities. By choosing the “cultural left,” Clinton was also in effect saying that she would do nothing about either big banks or gerrymandering.

That point was driven home in an article in FiveThirtyEight this past October. In “The Supreme Court Is Allergic To Math,” Oliver Roeder discussed the case of Gill v. Whitford—a case that not only “will determine the future of partisan gerrymandering,” but also “hinges on math.” At issue in the case is something called “the efficiency gap,” which is calculated by taking “the difference between each party’s ‘wasted’ votes—votes for losing candidates and votes for winning candidates beyond what the candidate needed to win—and divid[ing] that by the total number of votes cast.” The basic argument, in other words, is fairly simple: if a mathematical test determines that a given arrangement of legislative districts produces a large gap, that is evidence of gerrymandering. But in oral arguments, Roeder went on to say, the “most powerful jurists in the land” demonstrated “a reluctance—even an allergy—to taking math and statistics seriously.” Chief Justice John Roberts, for example, said it “may be simply my educational background, but I can only describe [the case] as sociological gobbledygook.” Neil Gorsuch, the man who received the office that Barack Obama was prevented from awarding, compared the metric to “a secret recipe.” In other words, in this case it was the disciplines of mathematics and, above all, statistics that were on the side of those wanting to get rid of gerrymandering—not the analysis of “culture” or the fight against “stigma,” concepts that the justices were busy employing, essentially, to wash their hands of the issue.
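Roeder’s description of the efficiency gap translates almost directly into a few lines of code. The sketch below simply follows that description; the district vote totals are invented for illustration:

```python
# A minimal sketch of the "efficiency gap" as Roeder describes it: a party's
# wasted votes are its votes in districts it lost plus its votes beyond the
# bare majority needed in districts it won; the gap is the difference between
# the two parties' wasted votes divided by all votes cast. District totals
# below are invented for illustration.
def efficiency_gap(districts):
    wasted_a = wasted_b = total_votes = 0
    for votes_a, votes_b in districts:
        needed_to_win = (votes_a + votes_b) // 2 + 1
        if votes_a > votes_b:
            wasted_a += votes_a - needed_to_win   # surplus votes for the winner
            wasted_b += votes_b                   # every vote for the loser
        else:
            wasted_b += votes_b - needed_to_win
            wasted_a += votes_a
        total_votes += votes_a + votes_b
    return (wasted_a - wasted_b) / total_votes

# Party A packed into one landslide district, cracked across four narrow losses:
districts = [(90, 10), (45, 55), (45, 55), (45, 55), (45, 55)]
print(f"Efficiency gap: {efficiency_gap(districts):.1%}")   # about 38.6%, against party A
```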

Just as, in other words, Lincoln exploited the split between Douglas’ immediate voters in Illinois who could give him the Senate seat, and the Southern slaveowners who could give him the presidency, Putin (or whomever else one wishes to nominate for that role) may have exploited the difference between Clinton supporters influenced by the current academy—and those affected by the yawning economic chasm that has opened in the United States. Whereas academics are anxious to avoid discussing money in order not to be accused of “economic reductionism,” in other words, the facts on the ground demonstrate that today “more money goes to the top (more than a fifth of all income goes to the top 1%), more people are in poverty at the bottom, and the middle class—long the core strength of our society—has seen its income stagnate,” as Nobel Prize-winning economist Joseph Stiglitz put the point in testimony to the U.S. Senate in 2014. Furthermore, Stiglitz noted, America today is not merely “the advanced country … with the highest level of inequality, but is among those with the least equality of opportunity.” Or in other words, as David Rosnick and Dean Baker put the point in November of that same year, “most [American] households had less wealth in 2013 than they did in 2010 and much less than in 1989.” To address such issues, however, would require precisely the sorts of intellectual tools—above all, mathematical ones—that the current bien pensant orthodoxy of the sort represented by Hillary Clinton, the orthodoxy that abhors sadism more than selfishness, thinks of as irrelevant.

But maybe that’s too many moves ahead.

No Justice, No Peace


‘She’s never found peace since she left his arms, and never will again till she’s as he is now!’
—Thomas Hardy. Jude the Obscure. (1895).

“Done because we are too menny,” writes little “Father Time” in Thomas Hardy’s Jude the Obscure—a suicide note meant to explain why the little boy has killed his siblings, and then hanged himself. The boy’s family, in other words, is poor, which is why Father Time’s father, Jude (the titular obscurity), is never able to become the scholar he once dreamed of being. Yet although Jude is a great tragedy, it is also something of a mathematical textbook: the principle little Jude teaches explains not merely why his father does not get into university, but perhaps also why, as Natasha Warikoo remarked in last week’s London Review of Books blog, “[o]ne third of Oxford colleges admitted no black British students in 2015.” Unfortunately, Warikoo never considers the possibility suggested by Jude: although she canvasses a number of reasons why black British students do not go to Oxford, she does not consider what we might call, in honor of Jude, the “Judean Principle”: that minorities simply cannot be proportionately represented everywhere, always. Why? Well, because of the field goal percentages of the 1980-81 Philadelphia 76ers—and math.

“The Labour MP David Lammy,” wrote Warikoo, “believes that Oxford and Cambridge are engaging in social apartheid,” while “others have blamed the admissions system.” These explanations, Warikoo suggests, are incorrect: based on interviews with “15 undergraduates at Oxford who were born in the UK to immigrant parents, and 52 of their white peers born to British parents,” she believes that the reason for the “massive underrepresentation” of black British students is “related to a university culture that does not welcome them.” Or in other words, the problem is racism. But while it’s undoubtedly the case that many people, even today, are prejudiced, is prejudice really adequate to explain the case here?

Consider, after all, what it is that Warikoo is claiming—beginning with the idea of “massive underrepresentation.” As Walter Benn Michaels of the University of Illinois at Chicago has pointed out, the goal of many on the political “left” these days appears to be a “society in which white people were proportionately represented in the bottom quintile (and black people proportionately represented in the top quintile)”—in other words, a society in which every social stratum contained precisely the same proportion of minority groups. In line with that notion, Warikoo assumes that because Oxford and Cambridge do not contain the same proportion of black British people as the larger society, the system must be racist. But such an argument betrays an ignorance of how mathematics works—or more specifically, as MacArthur grant-winning psychologist Amos Tversky and his co-authors explained more than three decades ago, how basketball works.

In “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Tversky and company investigated an entire season’s worth of shooting data from the NBA’s Philadelphia 76ers in order to discover whether there was evidence “that the performance of a player during a particular period is significantly better than expected on the basis of the player’s own record”—that is, whether players sometimes shot better (or “got hot”) than their overall shot record would predict. Prior to the research, it seems, everyone involved in basketball—fans, players, and coaches—appeared to believe that sometimes players did “get hot”—a belief that seems to predict that, sometimes, players have a better chance of making the second basket of a series than they did the first one:

Consider a professional basketball player who makes 50% of his shots. This player will occasionally hit four or more shots in a row. Such runs can properly be called streak shooting, however, only if their length or frequency exceeds what is expected on the basis of chance alone.

In other words, if a player really did get “hot,” or was “clutch,” then that fact would show up in the statistical record: sometimes players would make second and third (and so on) baskets at a rate higher than their chance of making a first basket—“the probability of a hit should be greater following a hit than following a miss.” If the “hot hand” existed, in other words, there should be evidence for it.
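The test the paper describes is simple enough to sketch: take a player’s sequence of makes and misses and compare the hit rate immediately after a made shot with the hit rate immediately after a miss. The shot sequence below is invented; Tversky and his colleagues, of course, worked from the 76ers’ actual shot records:

```python
# A sketch of the comparison described above: hit rate after a hit versus hit
# rate after a miss. The sequence of makes (1) and misses (0) is invented for
# illustration; the original study used the 76ers' actual shot logs.
def conditional_hit_rates(shots):
    after_hit = [b for a, b in zip(shots, shots[1:]) if a == 1]
    after_miss = [b for a, b in zip(shots, shots[1:]) if a == 0]
    return sum(after_hit) / len(after_hit), sum(after_miss) / len(after_miss)

shots = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
p_after_hit, p_after_miss = conditional_hit_rates(shots)
print(f"P(hit | previous shot was a hit)  = {p_after_hit:.2f}")
print(f"P(hit | previous shot was a miss) = {p_after_miss:.2f}")
# The "hot hand" hypothesis predicts the first number should be the larger one.
```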

Unfortunately—or not—there was no such evidence, the investigators found: after analyzing the data for the nine players who took the vast majority of the 76ers’ shots for the 1980-81 season, Tversky and company found that “for eight of the nine players the probability of a hit is actually lower following a hit … than following a miss,” which is clearly “contrary to the hot-hand hypothesis.” (The exception is Darryl Dawkins, who played center—and was best known, as older fans may recall, for his backboard-shattering dunks; i.e., a high-percentage shot.) There was no such thing as the “hot hand,” in short. (To use an odd turn of phrase with regard to the NBA.)

Yet, what has that to do with the fact that there were no black British students at one third of Oxford’s colleges in 2015? After all, not many British people play basketball, black or not. But as Tversky and his co-authors argue in “The Hot Hand,” the belief in a “hot hand” shows that people’s “intuitive conception[s] of randomness depart systematically from the laws of chance.” That is, when faced with a coin flip, for example, “people expect even short sequences of heads and tails to reflect the fairness of a coin and contain roughly 50% heads and 50% tails.” Yet, in reality, “the occurrence of, say, four heads in a row … is quite likely in a sequence of 20 tosses.” In just the same way, professional basketball players (who are obviously quite skilled at shooting baskets) are likely to make several baskets in a row—not because of any special quality of “heat” they possess, but simply because they are good shooters. It’s this inability to perceive randomness, in other words, that may help explain the absence of black British students at many Oxford colleges.
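That claim about runs is easy to check by simulation; the sketch below (my own, not from the paper) estimates how often a run of at least four straight heads turns up somewhere in 20 tosses of a fair coin:

```python
# A quick simulation (my own sketch, not from the paper): how often does a run
# of at least four consecutive heads appear somewhere in 20 tosses of a fair coin?
import random

def run_appears(n_tosses=20, run_length=4, trials=100_000):
    hits = 0
    for _ in range(trials):
        longest = current = 0
        for _ in range(n_tosses):
            current = current + 1 if random.random() < 0.5 else 0
            longest = max(longest, current)
        hits += longest >= run_length
    return hits / trials

print(f"Estimated probability: {run_appears():.2f}")   # close to one chance in two
```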

As we saw above, when Warikoo asserts that black students are “massively underrepresented” at Oxford colleges, what she means is that the proportion of black students at Oxford is not the same as the percentage of black people in the United Kingdom as a whole. But as “The Hot Hand” shows, to “expect the essential characteristics of a chance process to be represented not only globally in the entire sequence, but also locally, in each of its parts” is irrational: in reality, a “locally representative sequence … deviates systematically from chance expectation.” Since Oxford colleges, after all, are much smaller samples than the population of the United Kingdom as a whole, it would be absurd to believe that their populations could somehow replicate precisely the proportions of the larger population.

Maybe, though, you still don’t see why—which is why I’ll now call in some backup: the statisticians Howard Wainer and Harris Zwerling. In 2006, the two observed that, during the 1990s, many became convinced that smaller schools were the solution to America’s “education crisis”—the Bill and Melinda Gates Foundation, they note, became so convinced of the fact that it spent $1.7 billion on the idea. That’s because “when one looks at high-performing schools, one is apt to see an unrepresentatively large proportion of smaller schools.” But while that may be so, the two say, in reality “seeing a greater than anticipated number of small schools” on the list of better schools “does not imply that being small means having a greater likelihood of being high performing.” The reason, they say, is precisely the same reason that you don’t run a higher risk of kidney cancer by living in the American South.

Why might you think that? It turns out, Wainer and Zwerling say, that the U.S. counties with the highest apparent risk of kidney cancer are all “rural and located in the Midwest, the South, and the West.” So, should you avoid those parts of the country if you are afraid of kidney cancer? Not at all—because the U.S. counties with the lowest apparent risk of kidney cancer are also all “rural and located in the Midwest, the South, and the West.” The characteristics of the counties with the highest and the lowest rates of cancer are precisely the same.

What Wainer and Zwerling’s example shows is precisely the same thing shown by Tversky and company’s work on the field goal rates of the Philadelphia 76ers. The common lesson can be expressed in the words of journalist Michael Lewis, who recently wrote a book about Amos Tversky and his long-time research partner (and Nobel Prize winner) Daniel Kahneman called The Undoing Project: A Friendship That Changed Our Minds: “the smaller the sample, the lower the likelihood that it would mirror the broader population.” As Brian S. Everitt notes in 1999’s Chance Rules: An Informal Guide to Probability, Risk, and Statistics, “in, say, 20 tosses of a fair coin, the number of heads is unlikely to be exactly 10”—the probability, in fact, is “a little less than 1 in 5.” In other words, a sample of 20 tosses is much more likely to come up biased towards either heads or tails—and much, much more likely to be heavily biased towards one or the other—than a larger run of coin flips is. Extreme results are much more likely in smaller samples.
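The same point can be put numerically: compare the chance that the share of heads lands within five percentage points of 50 percent in a short run of tosses and in a long one. The figures below are my own illustration:

```python
# A sketch of the sample-size point: the chance that the observed share of
# heads falls between 45% and 55% for a small run of tosses versus a large one.
from scipy.stats import binom

for n in (20, 1000):
    low, high = int(0.45 * n), int(0.55 * n)
    prob = binom.cdf(high, n, 0.5) - binom.cdf(low - 1, n, 0.5)
    print(f"n = {n:>4}: P(45% to 55% heads) = {prob:.2f}")
# prints roughly 0.50 for n = 20 and essentially 1.00 for n = 1000
```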

Oxford colleges are, of course, very small samples of the population of the United Kingdom, which is about 66 million people. Oxford University as a whole, on the other hand, contains about 23,000 students. There are 38 colleges (as well as some other institutions), and some of these—like All Souls, for example—do not even admit undergraduates; those that do consist largely of a few hundred students each. The question, then, that Natasha Warikoo ought to ask first about the admission of black British students to Oxford colleges is: “how likely is it that a sample of 300 would mirror a population of 66 million?” The answer, as the work of Tversky et al. demonstrates, is “not very”—it’s even less likely, in other words, than the likelihood of throwing exactly 2 heads and 2 tails when throwing a coin four times.

Does that mean that racism does not exist? No, certainly not. But Warikoo says that “[o]nly when Oxford and Cambridge succeed in including young Britons from all walks of life will they be what they say they are: world-class universities.” In fact, however, the idea that institutional populations ought to mirror the broader population is not merely difficult to realize—it is flatly absurd. It isn’t that a racially proportionate society is a difficult goal, in other words—it is that it is an impossible one. To get 300 people, or even 23,000, to reflect the broader population would require, essentially, rewiring the system to such an extent that it’s possible no other goal—like, say, educating qualified students—could also be achieved; it would require so much effort fighting the entropy of chance that the cause would, eventually, absorb all possible resources. In other words, Oxford can either include “young Britons from all walks of life”—or it can be a world-class university. It can’t, however, be both; which is to say that Natasha Warikoo—like little “Father Time’s” stepmother, Sue, at the end of Jude the Obscure, as one character there says of her—will never find peace.

Shut Out

But cloud instead, and ever-during dark
Surrounds me, from the cheerful ways of men
Cut off, and for the book of knowledge fair
Presented with a universal blank
Of nature’s works to me expunged and razed,
And wisdom at one entrance quite shut out.
—John Milton. Paradise Lost. Book III, 45–50.

“Hey everybody, let’s go out to the baseball game,” the legendary 1960s Chicago disc jockey Dick Biondi said in the joke that (according to the myth) got him fired. “The boys,” Biondi is alleged to have said, “kiss the girls on the strikes, and …” In the story, of course, Biondi never finished the sentence—but you see where he was going, which is what makes the story interesting to a specific type of philosopher: the epistemologist. Epistemology is the study of how people know things; the question the epistemologist might ask about Biondi’s joke is, how do you know the ending to that story? For many academics today, the answer can be found in another baseball story, this time told by the literary critic Stanley Fish—a story that, oddly enough, also illustrates the political problems with that wildly popular contemporary concept: “diversity.”

As virtually everyone literate knows, “diversity” is one of the great honorifics of the present: something that has it is, ipso facto, usually held to be better than something that doesn’t. As a virtue, “diversity” has tremendous range, because it applies both in natural contexts—“biodiversity” is all the rage among environmentalists—and in social ones: in the 2003 case of Grutter v. Bollinger, for example, the Supreme Court held that the “educational benefits of diversity” were a “compelling state interest.” Yet what often goes unnoticed about arguments in favor of “diversity” is that they themselves depend upon a rather monoglot account of how people know things—which is how we get back to epistemology.

Take, for instance, Stanley Fish’s story about the late, great baseball umpire Bill Klem. “It ain’t nothin’ til I call it,” Klem supposedly once said in response to a batter’s question about whether the previous pitch was a ball or a strike. (It’s a story I’ve retailed before: cf. “Striking Out.”) Fish has used that story, in turn, to illustrate what he views as the central lesson of what is sometimes called “postmodernism”: according to The New Yorker, Fish’s (and Klem’s) point is that “balls and strikes come into being only on the call of an umpire,” instead of being “facts in the world.” Klem’s remark, in other words—Fish thinks—illustrates just how knowledge is what is sometimes called “socially constructed.”

The notion of “social construction” is the idea—as City College of New York professor Massimo Pigliucci recently put the point—that “no human being, or organized group of human beings, has access to a god’s eye of the world.” The idea, in other words, is that meaning is—as Canadian philosopher Ian Hacking described the concept in The Social Construction of What?—“the product of historical events, social forces, and ideology.” Or, to put it another way, we know things because of our culture, or social group: not by means of our own senses and judgment, but through the people around us.

For Pigliucci, this view of how human beings access reality suggests that we ought therefore rely on a particular epistemic model: rather than one in which each person ought to judge evidence for herself, we would instead rely on one in which “many individually biased points of view enter into dialogue with each other, yielding a less (but still) biased outcome.” In other words, we should rely upon diverse points of view, which is one reason why Pigliucci says, for instance, that because of the overall cognitive lack displayed by individuals, we ought “to work toward increasing diversity in the sciences.” Pigliucci’s reasoning is, of course, also what forms the basis of Grutter: “When universities are granted the freedom to assemble student bodies featuring multiple types of diversity,” wrote defendant Lee Bollinger (then dean of the University of Michigan law school) in an editorial for the Washington Post about the case, “the result is a highly sought-after learning environment that attracts the best students.” “Diversity,” in sum, is a tool to combat our epistemic weaknesses.

“Diversity” is thereby justified by means of a particular vision of epistemology: a particular theory of how people know things. On this theory, we are dependent upon other people in order to know anything. Yet the very basis of Dick Biondi’s “joke” is that you, yourself, can “fill in” the punchline: it doesn’t take a committee to realize what the missing word at the end of the story is. And what that reality—your ability to furnish the missing word—perhaps illustrates is an epistemic distinction Keynes made in his magisterial 1921 work, A Treatise on Probability: a distinction that troubles the epistemology that underlies the concept of “diversity.”

“Now our knowledge,” Keynes writes in chapter two of that work, “seems to be obtained in two ways: directly, as the result of contemplating the objects of acquaintance; and indirectly, by argument” (italics in original). What Keynes is proposing, in other words, is an epistemic division between two ways of knowing—one of them being much like the epistemic model described by Fish or Pigliucci or Bollinger. As Keynes says, “it is usually agreed that we do not have direct knowledge” of such things as “the law of gravity … the cure for phthisis … [or] the contents of Bradshaw”—things like these are only known through chains of reasoning, rather than direct experience. To know items like these, in other words, we have to have undergone a kind of socialization, otherwise known as education. We are dependent on other people to know such things.

Yet, as Keynes also recognizes, there is another means of knowing: “From an acquaintance with a sensation of yellow,” the English economist wrote, “I can pass directly to a knowledge of the proposition ‘I have a sensation of yellow.’” In this epistemic model, human beings can know things by immediate apprehension—the chief example of this form of knowing being, as Keynes describes, our own senses. What Keynes says, in short, is that people can know things in more than one way: one way through other people, yes, as Fish et al. say—but another through our own experience.

Or—to put the point differently—Keynes has a “diverse” epistemology. That would, at least superficially, seem to make Keynes’ argument a support for the theory of “diversity”: after all, he is showing how people can know things differently, which would appear to assist Lee Bollinger and Massimo Pigliucci’s argument for diversity in education. If people can know things in different ways, it would then appear necessary to gather more, and different, kinds of people in order to know anything. But just saying so exposes the weakness at the heart of Bollinger and Pigliucci’s ideal of “diversity.”

Whereas Keynes has a “diverse” epistemology, in short, Bollinger and Pigliucci do not: in their conception, human beings can only know things in one way. That is the way that Keynes called “indirect”: through argumentation and persuasion—or, as it’s sometimes put, “social construction.” In other words, the defenders of “diversity” have a rather monolithic epistemology, which is why Fish once attacked the view that it is possible to “survey the world in a manner free of assumptions about what it is like and then, from that … disinterested position, pick out the set of reasons that will be adequate to its description.” If such a thing were possible, after all, it would be possible to experience a direct encounter with the world—which “diversity” enthusiasts like Fish deny is possible: Fish says, for instance, that “the rhetoric of disinterested inquiry … is in fact”—just how he knows this is unclear—“a very interested assertion of the superiority of one set of beliefs.” In other words, any epistemological view other than their own is merely a deception.

Perhaps, though, this is all just one of the purest cases of an “academic” dispute: eggheads arguing, as the phrase goes, about how many angels can dance on a pin. At least, until one realizes that the nearly-undisputed triumph of the epistemology retailed by Fish and company also has certain quite-real consequences. For example, as the case of Bollinger demonstrates, although the “socially-constructed” epistemology is an excellent means, as has been demonstrated over the past several decades, of—in the words of Fish’s fellow literary critic Walter Benn Michaels—“battling over what skin color the rich kids should have,” it isn’t so great for, say, dividing up legislative districts: a question that, as Elizabeth Kolbert noted last year in The New Yorker, “may simply be mathematical.” But if so, that presents a problem for those who think of their epistemological views as serving a political cause.

Mathematics, after all, is famously not something that can be understood “culturally”; it is, as Keynes—and before him, a silly fellow named Plato—knew, perhaps the foremost example of the sort of knowing demonstrated by Dick Biondi’s joke. Mathematics, in other words, is the chief example of something known directly: when you understand something in mathematics, you understand it either immediately—or not at all. That is the significance of Kolbert’s remark: to say that re-districting—perhaps the most political act of all in a democracy—is primarily a mathematical operation is to say that to understand redistricting, you have to understand the mathematics of the operation directly. Yet if the “diversity” promoters are correct, then only their epistemology has any legitimacy: an epistemology that a priori prevents anyone from sensibly discussing redistricting. In other words, it is precisely the epistemological blind spots of the ostensibly “politically progressive” promoters of “diversity” that allow the current American establishment to ignore the actual interests of actual people.

Which, one supposes, may be the real joke.

Home of the Brave

audentes Fortuna iuvat.
—The Aeneid. Book X, line 284.

American prosecutors in the last few decades have—Patrick Keefe recently noted in The New Yorker—come to use more and more “a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture,” rather than prosecuting either the company officers or the company itself for criminal acts. According to prosecutors, it seems, this is because “the problem with convicting a company was that it could have ‘collateral consequences’ that would be borne by employees, shareholders, and other innocent parties.” In other words, taking action against a corporation could put it out of business. Yet, declining to prosecute because of the possible consequences is an odd position for a prosecutor to take: “Normally a grand jury will indict a ham sandwich if a prosecutor asks it to,” former Virginia governor Chuck Robb, once a prosecutor himself, famously remarked. Prosecutors, in other words, aren’t usually known for their sensitivity to circumstance—so why the change in recent decades? The answer may lie, perhaps, in a knowledge of child-raising practices of the ancient European nobility—and the life of Galileo Galilei.

“In those days,” begins one of the stories described by Nicola Clarke in The Muslim Conquest of Iberia: Medieval Arabic Narratives, “the custom existed amongst the Goths that the sons and daughters of the nobles were brought up in the king’s palace.” Clarke is describing the tradition of “fosterage”: the custom, among the medieval aristocracy, of sending one’s children to be raised by another noble family while raising another such family’s children in turn. “It is not clear what … was the motive” for fostering children, according to Laurence Ginnell’s The Brehon Laws (from 1894), “but its practice, whether designed for that end or not, helped materially to strengthen the natural ties of kinship and sympathy which bound the chief and clan or the flaith and sept together.” In Ginnell’s telling, “a stronger affection oftentimes sprang up between persons standing in those relations than that between immediate relatives by birth.” One of the purposes of fostering, in other words, was to decrease the risk of conflict by ensuring that members of the ruling classes grew up together: it’s a lot harder to go to war, the thinking apparently went, when you are thinking of your potential opponent as the kid who skinned his knee that one time, instead of the fearsome leader of a gang of killers.

Perhaps one explanation for why prosecutors appear to be willing to go easier on corporate criminals these days than in the past might be that they share “natural ties”: they attended the same schools as those they are authorized to prosecute. Although statistics on the matter appear lacking, there’s reason to think that future white collar criminals and their (potential) prosecutors share the same “old school” ties more and more these days: just as American law schools have seized a monopoly on the production of lawyers—Robert H. Jackson, who served from 1941 to 1954, was the last American Supreme Court Justice without a law degree—so too have America’s “selective” colleges seized a monopoly on the production of CEOs. “Just over 10% of the highest paid CEOs in America came from the Ivy League plus MIT and Stanford,” a Forbes article noted in 2012—a percentage higher than at any previous moment in American history. In other words, just as lawyers all come from the same schools these days, so too does upper management—producing the sorts of “natural ties” that not only lead to rethinking that cattle raid on your neighbor’s castle, but perhaps also to shelving any thoughts of subjecting Jamie Dimon to a “perp walk.” Yet as plausible an explanation as that might seem, it’s even more satisfying when combined with an incident in the life of the great astronomer.

In 1621, a Catholic priest named Scipio Chiaramonti published a book about a supernova that had occurred in 1572; the exploded star (as we now know it to have been) had been visible during daylight for several weeks in that year. The question for astronomers in that pre-Copernican time was whether the star had been one of the “fixed stars,” and thus existed beyond the moon, or whether it was closer to the earth than the moon: since—as James Franklin, from whose The Science of Conjecture: Evidence and Probability Before Pascal I take this account, notes—it was “the doctrine of the Aristotelians that there could be no change beyond the sphere of the moon,” a nova that far away would refute their theory. Chiaramonti’s book claimed that the measurements of 12 astronomers showed that the object was not as far as the moon—but Galileo pointed out that Chiaramonti’s work had, in effect, “cherrypicked”: he did not use all the data actually available, but merely used that which supported his thesis. Galileo’s argument, oddly enough, can also be applied to why American prosecutors aren’t pursuing financial crimes.

The point is supplied, Keefe tells us, by James Comey: the former FBI director fired by President Trump. Before moving to Washington, Comey was U.S. Attorney for the Southern District of New York, in which position he once called—Keefe informs us—some of the attorneys working for the Justice Department members of “the Chickenshit Club.” Comey’s point was that while a “perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney,” it might instead “mean that you’re taking only those cases you’re sure you’ll win.” To Comey’s mind, the marvelous winning records of those working under him were not a guarantee of those attorneys’ ability, but instead a sign that his office was not pursuing enough cases. In other words, just as Chiaramonti chose only those data points that confirmed his thesis, the attorneys in Comey’s office were choosing only those cases they were sure they would win.

Yet, assuming that the decrease in financial prosecutions is due to prosecutorial choice, why are prosecutors more likely, when it comes to financial crimes, to “cherrypick” today than they were a few decades ago? Keefe says this may be because “people who go to law school are risk-averse types”—but that only raises the question of why today’s lawyers are more risk-averse than their predecessors. The answer, at least according to a former Yale professor, may be that they are more likely to cherrypick because they are themselves the product of cherrypicking.

Such at least was the answer William Deresiewicz arrived at in 2014’s “Don’t Send Your Kid to the Ivy League”—the most downloaded article in the history of The New Republic. “Our system of elite education manufactures young people who are smart and talented and driven, yes,” Deresiewicz wrote there—but, he added, it also produces students who are “anxious, timid, and lost.” Such students, the former Yale faculty member wrote, had “little intellectual curiosity and a stunted sense of purpose”; they are “great at what they’re doing but [have] no idea why they’re doing it.” The question Deresiewicz wanted answered was, of course, why the students he saw in New Haven were this way; the answer he hit upon was that those students were themselves the product of a cherrypicking process.

“So extreme are the admissions standards now,” Deresiewicz wrote in “Don’t,” “that kids who manage to get into elite colleges have, by definition, never experienced anything but success.” The “result,” he concluded, “is a violent aversion to risk.” Deresiewicz, in other words, is thinking systematically: it isn’t so much that prosecutors and white collar criminals share the same background that has made prosecutions so much less likely, but rather that prosecutors have passed through a certain kind of winnowing process in the course of achieving their positions in life.

To most people, in other words, scarcity equals value: Harvard admits very few people, therefore Harvard must provide an excellent education. But what the Chiaramonti episode brings to light is the notion that what makes Harvard so great may not be that it provides an excellent education, but instead that it admits such “excellent” people in the first place: Harvard’s notably long list of excellent alumni may not be a result of what’s happening in the classroom, but instead in the admissions office. The usual understanding of education, in other words, takes the significant action of education to be what happens inside the school—but what Galileo’s statistical perspective says, instead, is that the important play may be what happens before the students even arrive.

What Deresiewicz’s work suggests, in turn, is that this very process may itself have unseen effects: efforts to make Harvard (along with other schools) more “exclusive”—and thus, ostensibly, provide a better education—may actually be making students worse off than they might otherwise be. Furthermore, Keefe’s work intimates that this insidious effect might not be limited to education; it may be causing invisible ripples throughout American society—ripples that may not be limited to the criminal justice system. If the same effects Keefe says are afflicting lawyers are also affecting the future CEOs those prosecutors are not prosecuting, then perhaps CEOs are becoming less likely to pursue the legitimate risks that are the economic lifeblood of the nation—and more susceptible to pursuing illegitimate risks, of the sort that once landed CEOs in non-pinstriped suits. Accordingly, perhaps that old conservative bumper sticker really does have something to teach American academics—it’s just that what both sides ought perhaps to realize is that the relationship it names may be, at bottom, a mathematical one. That relation, you ask?

The “land of the free” because of “the brave.”

Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo of 1913. Yet, why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment has put the point—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it?” The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote in the inaugural issue of Vox, in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors, then, for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. From that view, then, the Nazis must have succeeded because the German people were too poorly-educated to be able to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries between Germany, France, England, and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much higher percentage of the Germans profiled in such books (82%) had exposure to university classes than did those of other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like philosopher Martin Heidegger and legal theorist Carl Schmitt, who not only joined the party but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of those academics who directly supported the Nazis—on that score, the culpability of elevator repairmen could just as well be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work and such, after all, is about just how terrible intellectuals have it—how many first novels, after all, are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required by the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to leave the ground, they may also have costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Galt Harpham, former director of the National Humanities Center—sometimes go even further and make the claim—as Harpham did in his 2011 book, The Humanities and the Dream of America—that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make nonsense of history: traditionally, after all, it has been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by the mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
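Bernoulli’s point can be checked in a few lines. The sketch below (Python; the counts of 10 and 1,000 flips follow Lewis’s example, and the numbers in the comments are typical simulation output rather than exact values) measures how far the observed share of heads tends to stray from one half.

    import random

    def average_error(n_flips, trials=10_000):
        # Average distance between the observed fraction of heads and 0.5,
        # over many repeated experiments of n_flips tosses each.
        total = 0.0
        for _ in range(trials):
            heads = sum(random.random() < 0.5 for _ in range(n_flips))
            total += abs(heads / n_flips - 0.5)
        return total / trials

    print(average_error(10))     # typically around 0.12
    print(average_error(1_000))  # typically around 0.013

The longer run does not guarantee a perfect half-and-half split; it only guarantees that the split hugs one half ever more tightly, which is all the law claims.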

For example, the Law of Large Numbers is why, as the statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when Vanguard founder John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it’s better to amass large quantities of results, rather than depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters, an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—whose insight is now often called “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes how Condorcet’s process actually works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” marks the boundary of opinion between gamblers, Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet, thereby, proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.
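Richey’s twelve-juror illustration can itself be simulated. In the sketch below (Python; the one added assumption is that a six-to-six tie counts as a failure to reach the correct verdict), ten jurors vote at random while two always vote correctly, and the question is how often the majority comes out right.

    import random

    def majority_correct_rate(trials=100_000):
        # Ten jurors vote at random; two always vote for the correct outcome.
        correct = 0
        for _ in range(trials):
            random_votes_for_truth = sum(random.random() < 0.5 for _ in range(10))
            votes_for_truth = random_votes_for_truth + 2
            if votes_for_truth > 6:   # strict majority of 12; ties count as failures
                correct += 1
        return correct / trials

    print(majority_correct_rate())  # typically around 0.62

Two informed votes are enough to lift the group well above the coin-flip baseline of one half: Condorcet’s machine in miniature.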

Key to the functioning of Condorcet’s machine, in turn, is a large number of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than the (extremely small) pool of voters consisting of the state legislature alone.
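Austen-Smith and Banks’s limit can likewise be computed exactly. The sketch below (Python; the competence figure of 0.51, meaning each voter is only barely more likely than chance to choose the better alternative, is an illustrative assumption) evaluates the probability that a majority of an odd-sized electorate chooses correctly, working in log space to avoid numerical underflow for large electorates.

    from math import lgamma, log, exp

    def log_binom_pmf(k, n, p):
        # Log of the binomial probability mass function, stable for large n.
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    def p_majority_correct(n_voters, p=0.51):
        # Probability that more than half of n_voters (n odd) pick the better
        # alternative, each doing so independently with probability p.
        need = n_voters // 2 + 1
        return sum(exp(log_binom_pmf(k, n_voters, p))
                   for k in range(need, n_voters + 1))

    for n in (101, 1_001, 10_001, 100_001):
        print(n, round(p_majority_correct(n), 3))
    # roughly 0.58, 0.74, 0.98, and effectively 1.0

With voters only barely better than a coin, an electorate of a hundred is right a bit more than half the time, while an electorate the size of a state is, for practical purposes, always right; that asymmetry is the whole mathematical case for the larger pool.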

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science,” with the consequence that they had “an unhealthy faith in the validity of statistical evidence,” to adopting “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically skeptical 1970s (and beyond) needs to reckon with the increasingly large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and, given the increasingly well-documented decline in American social mobility, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlie them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth and early twentieth centuries constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by such “academic superstar” professors of humanities as Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.