Great! Again?

The utility of a subdivision of the legislative power into different branches … is, perhaps, at the present time admitted by most persons of sound reflection. But it has not always found general approbation; and is, even now, sometimes disputed by men of speculative ingenuity, and recluse habits.
—Joseph Story. Commentaries on the Constitution of the United States. 1833.

 

Nicolas de Caritat, Marquis of Condorcet (17 September 1743 – 28 March 1794)

“We habitually underestimate the effect of randomness,” wrote Leonard Mlodinow of Caltech in his 2008 book on the subject: The Drunkard’s Walk: How Randomness Rules Our Lives—so much so, in fact, that “even when careers and millions of dollars are at stake, chance events are often conspicuously misinterpreted as accomplishments or failures.” But while that may be true, it’s often very difficult to know just when chance has intervened; it’s a hard thing to ask people to focus on things that never happened—but could have. Yet while that is so, there remain some identifiable ways in which chance interjects itself into our lives. One of them, in fact, is how Americans pass their laws—an argument that has not only been ongoing for two centuries, but that America is losing.

When, in 1787, the United States wrote its constitution, Edmund Randolph introduced what has since been called “the Virginia Plan”—the third resolution of which asserted that “the national legislature ought to consist of two branches.” Those two branches are now called the Senate and the House of Representatives, which makes the American system of government a bicameral one: that is, one with two legislative houses. Yet, although many Americans tend to think of this structure as though it had been created with the universe, in fact it is not one that has been widely copied.

“Worldwide,” wrote Betty Drexhage in a 2015 report to the government of the Netherlands, “only a minority of legislatures is bicameral.” More recently the Inter-Parliamentary Union, a kind of trade group for legislatures, noted that, of world governments, 77 are bicameral—while 116 have only one house. Furthermore, expressing that ratio without context over-represents bicameral legislatures: even among countries that have two legislative houses, few have houses that are equally powerful, as the American House and Senate are. The British House of Lords, for example—the model for the Senate—has not been on a par politically with the House of Commons, even theoretically, since 1911 at the latest, and arguably since 1832.

Yet, why should other countries have failed to adopt the bicameral structure? Alternatively, why did some, including notable figures like Benjamin Franklin, oppose splitting the Congress in two? One answer is provided by an early opponent of bicameralism: the Marquis de Condorcet, who wrote in 1787’s Letters from a Freeman of New Haven to a Citizen of Virginia on the Futility of Dividing the Legislative Power Among Several Bodies that “increasing the number of legislative bodies could never increase the probability of obtaining true decisions.” Probability is a curious word to use in this connection—but one natural for a mathematician, which is what the marquis was.

The astronomer Joseph-Jérôme de Lalande, after all, had “ranked … Condorcet as one of the ten leading mathematicians in Europe” when Condorcet was only twenty-one; his early skill attracted the attention of the great Jean d’Alembert, one of the most famous mathematicians of all time. By 1769, at the young age of twenty-five, he was elected to the incredibly prestigious French Royal Academy of Sciences; later, he would work with Leonhard Euler, even more accomplished than d’Alembert. The field that the marquis plowed as a mathematician was the so-called “doctrine of chances”—what we today would call the study of probability.

Although in one sense, then, the marquis was only one among many opponents of bicameralism—his great contemporary, the Abbé Sieyès, was another—very few of them were as qualified, mathematically speaking, to consider the matter as he was; if, as Justice Joseph Story of the United States Supreme Court would later write, the arguments against bicameralism “derived from the analogy between the movements of political bodies and the operations of physical nature,” then the marquis was one of the few who could knowledgeably argue from nature to politics, instead of the other way around. And in this matter, the marquis had an ace.

Condorcet’s ace was the mathematical law first discovered by an Italian physician—and gambler—named Gerolamo Cardano. Sometime around 1550, Cardano had written a book called Liber de Ludo Aleae; or, The Book on Games of Chance, and in that book Cardano took up the example of throwing two dice. Since the probability of throwing a given number on one die is one in six, the doctor reasoned, the probability of throwing that same number on both of two dice is 1/6 multiplied by 1/6, which is 1/36. And since 1/36 is much, much smaller than 1/6, it follows that a gambler is far less likely to roll double sixes than to roll a single six.
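Cardano’s rule is easy to check by brute force. The sketch below is my own illustration, not anything from Cardano or the essay; the trial count and the random seed are arbitrary choices. It simply rolls simulated dice and compares the observed frequencies with 1/6 and 1/36.

```python
import random

random.seed(0)
TRIALS = 1_000_000

# Chance of rolling a six on a single die, estimated by simulation.
single = sum(random.randint(1, 6) == 6 for _ in range(TRIALS)) / TRIALS

# Chance of rolling sixes on both of two dice thrown together.
double = sum(random.randint(1, 6) == 6 and random.randint(1, 6) == 6
             for _ in range(TRIALS)) / TRIALS

print(f"one six   : {single:.4f} (Cardano's rule says {1/6:.4f})")
print(f"double six: {double:.4f} (Cardano's rule says {1/36:.4f})")
```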

According to J. Hoffmann-Jørgensen of the University of Aarhus, what Cardano had discovered was the law that the “probability that two independent events occur simultaneously equals the product of their probabilities.” In other words, the chance of two events both happening is always smaller than the chance of either one alone—which is why, for example, a perfecta bet in horse racing pays off so highly: it’s much more difficult to pick two horses than one. By the marquis’ time the mathematics was well understood—indeed, it could hardly have been unknown to anyone with any knowledge of mathematics, much less to one of the world’s authorities on the subject.

The application, of course, should be readily apparent: by requiring legislation to pass through two houses rather than one, bicameralism—all by itself—compounds the odds against legislative passage, just as Cardano’s rule predicts. Anecdotally, this has been, if imperfectly, understood in the United States for some time: “Time and again a bill threatening to the South” prior to the Civil War, as Leonard Richards of the University of Massachusetts has pointed out, “made its way through the House only to be blocked in the Senate.” Or, as labor lawyer Thomas Geoghegan once remarked—and he is by no means young—his “old college teacher once said, ‘Just about every disaster in American history is the result of the Senate.’” And as political writer Daniel Lazare pointed out in Slate in 2014, even today the “US Senate is by now the most unrepresentative major legislature in the ‘democratic world’”: because every state gets two senators regardless of population, legislation desired by ninety percent of the population can be blocked. Hence, just as the Senate blocked anti-slavery legislation—and much else besides—from passage prior to the Civil War, so too does it continue to function in that role today.
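The same arithmetic is all the essay’s claim about bicameralism needs. In the sketch below the passage probabilities are hypothetical, chosen purely for illustration, and the two chambers are assumed to vote independently; the point is only that the product of two probabilities is always smaller than either probability alone.

```python
# Hypothetical, independent passage probabilities, for illustration only.
p_house = 0.5   # assumed chance a given bill clears the House
p_senate = 0.5  # assumed chance the same bill clears the Senate

print(f"unicameral passage : {p_house:.2f}")
print(f"bicameral passage  : {p_house * p_senate:.2f}")  # Cardano's product rule
```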

Yet, although many Americans may know—the quotations could be multiplied—that there is something not quite right about the bicameral Congress, and some of them even mention it occasionally, it is very rare to notice any mention of the Marquis de Condorcet’s argument against bicameral legislatures in the name of the law of probability. Indeed, in the United States even the very notion of statistical knowledge is sometimes the subject of a kind of primitive superstition.

The baseball statistician Bill James, for example, once remarked that he gets “asked on talk shows a lot whether one can lie with statistics,” apparently because “a robust skepticism about statistics and their value had [so] permeated American life” that today (or at least in 1985, when James wrote) “the intellectually lazy [have] adopted the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to.” Whether there is a direct relationship between these two—the political import of the marquis’ argument so long ago, and the much later suspicion of statistics noted by James—is unclear, of course.

That may be about to change, however. James, for example, who was once essentially a kind of blogger before the Internet, has gradually climbed the best-seller lists; meanwhile, his advice and empirical method of thinking have steadily infected the baseball world—until last year the unthinkable happened, and the Chicago Cubs won the World Series while led by a man (Theo Epstein) who held up Bill James as his hero. At the same time, as I’ve documented in a previous blog post (“Size Matters”), Donald Trump essentially won the presidency because his left-wing opponents do not understand the mathematics involved in the Electoral College—or cannot, given their prior commitment to “culture,” effectively communicate that knowledge to the public. In other words, chance may soon make the argument of the marquis—long conspicuously misinterpreted as a failure—into a sudden accomplishment.

Or perhaps rather—great again.


Lex Majoris

The first principle of republicanism is that the lex majoris partis is the fundamental law of every society of individuals of equal rights; to consider the will of the society enounced by the majority of a single vote, as sacred as if unanimous, is the first of all lessons in importance, yet the last which is thoroughly learnt. This law once disregarded, there is no other but that of force, which ends necessarily in military despotism.
—Thomas Jefferson. Letter to Baron von Humboldt. 13 June 1817.

Since Hillary Clinton lost the 2016 American presidential election, many of her supporters have been quick to cry “racism” on the part of voters for her opponent, Donald Trump. According to Vox’s Jenée Desmond-Harris, for instance, Trump won the election “not despite but because he expressed unfiltered disdain toward racial and religious minorities in the country.” Aside from being the easier interpretation (it allows Clinton voters to ignore the role their own economic choices may have played in the broad support Trump received throughout the country), such accusations are counterproductive even on their own terms because—only seemingly paradoxically—they reinforce many of the supports racism still receives in the United States: above all, they weaken the intellectual argument for a national direct election of the president. By shouting “racism,” in other words, Hillary Clinton’s supporters may end up helping to continue racism’s institutional support.

That institutional support begins with the method by which Americans elect their president: the Electoral College—a method that, as many have noted, is not used in any other industrialized democracy. Although many scholars and others have advanced arguments for the existence of the college through the centuries, most of these “explanations” are, in fact, intellectually incoherent: the most common of them concerns the supposed differences between the “large states” and the “small,” for instance, yet in the actual United States—as James Madison, known as the “Father of the Constitution,” noted at the time—there was no conflict then between larger-population and smaller-population states as such, nor has there ever been one since. Meanwhile, the other “explanations” for the Electoral College do not even rise to this level of incoherence.

In reality there is only one explanation for the existence of the college, and that explanation has been most forcefully and clearly made by law professor Paul Finkelman, now serving as a Senior Fellow at the University of Pennsylvania after spending much of his career at obscure law schools like the University of Tulsa College of Law, the Cleveland-Marshall College of Law, and the Albany Law School. As Finkelman has been arguing for decades (his first papers on the subject were written in the 1980s), the Electoral College was originally invented by the delegates to the Constitutional Convention of 1787 in order to protect slavery. That such was the purpose of the College can be known, most obviously, because the delegates to the convention said so.

It’s important to remember that, by the time the means of electing a president were first debated, the convention had already decided, for the purposes of representation in the newly-created House of Representatives, to count black slaves by means of the infamous three-fifths ratio. That ratio, in turn, had its effect when discussing the means of electing a president: delegates like James Madison argued, as Finkelman notes, that the existence of such a college—whose composition would be based on each state’s representation in the House of Representatives—would “guarantee that the nonvoting slaves could nevertheless influence the presidential election.” Or as Hugh Williamson, a delegate from North Carolina, observed during the convention, if American presidents were elected by direct national vote the South would be shut out of electing a national executive because “her slaves will have no suffrage”—that is, because in a direct vote all that would matter is the number of voters, the Southern states would lose the advantage the three-fifths ratio gave them in the House. Hence, the existence of the Electoral College is directly tied to the prior decision to grant Southern slave states an advantage in Congress, and so the Electoral College is another in a string of institutional decisions made by convention delegates to protect domestic slavery.

Yet, assuming that Finkelman’s case for the racism of the Electoral College is true, how can decrying the racism of the American voter somehow inflict harm on the case for abolishing the Electoral College? The answer goes back to the very justifications of, not only presidential elections, but elections in general—the gradual discovery, during the eighteenth century Enlightenment, of what is today known as the Law of Large Numbers.

Putting the law in capital letters, I admit, tends to mystify it, but anyone who buys insurance already understands the substance of the concept. As New Yorker writer Malcolm Gladwell once explained insurance, “the safest and most efficient way to provide insurance” is “to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, the more people participating in an insurance plan, the greater the possibility that the plan’s members will be protected. The Law of Large Numbers explains why that is.

That reason is the same as the reason that, as Peter Bernstein observes in Against the Gods: The Remarkable Story of Risk, tossing a coin more and more times makes it ever more probable that the ratio of heads thrown to total throws will stay close to one half. Or, the reason that—as physicist Leonard Mlodinow has pointed out—in order really to tell which baseball team is better than another a World Series would have to be at least 23 games long (if one team were much better than the other), and possibly as long as 269 games (between two closely-matched opponents). Only by playing so many games can random chance be confidently excluded: as Carl Bialik of FiveThirtyEight once pointed out, usually “in sports, the longer the contest, the greater the chance that the favorite prevails.” Or, as Israeli psychologists Daniel Kahneman and Amos Tversky put the point in 1971, “the law of large numbers guarantees that very large samples will indeed be representative”: it’s what scientists rely upon to know that, if they have performed enough experiments or pored over enough data, they know enough to exclude idiosyncratic results. The Law of Large Numbers asserts, in short, that the more times we repeat something, the closer we will approach its true value.
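For the coin-tossing version of the law, a few lines of simulation (my own, with an arbitrary seed and arbitrary checkpoints) show the share of heads settling toward one half as the number of tosses grows.

```python
import random

random.seed(1)
heads = 0
checkpoints = {10, 100, 1_000, 10_000, 100_000}

for flips in range(1, 100_001):
    heads += random.random() < 0.5          # one fair coin toss
    if flips in checkpoints:
        print(f"{flips:>7,} flips: share of heads = {heads / flips:.4f}")
```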

It’s for just that reason that many have noted the connection between science and democratic government: “Science and democracy are powerful partners,” as the website for the Union of Concerned Scientists has put it. What makes these two objects such “powerful” partners is that the Law of Large Numbers is what underlies the act of holding elections: as James Surowiecki put the point in his book, The Wisdom of Crowds, the theory of democracy is that “the larger the group, the more reliable its judgment will be.” Just as scientists think that, by replicating an experiment, they can more readily trust in its results, so too does a democratic government implicitly think that, by including more people in the decision-making process, the government can the more readily arrive at the “correct” solution: as James Madison put it in The Federalist No. 10, if you “take in a greater variety of parties and interests,” then “you make it less probable that a majority of the whole will have a common motive for invading the rights of other citizens.” Without such a belief, after all, there would be no reason not to trust, say, a ruling caste to make decisions for society—or even a single, perhaps orange-toned, individual. Without some concept of the Law of Large Numbers—some belief that increasing the numbers of trials, or increasing the number of inputs, will make for better results—there is no reason for democratic government at all.

That’s why, when people criticize the Electoral College, they are implicitly invoking the Law of Large Numbers. The Electoral College divides the pool of American voters into fifty smaller pools, but a national popular vote would collect all Americans into a single lump—a point that defenders of the College sometimes seek to make into a virtue, instead of the vice it is. In the wake of the 2000 election, for example, Senator Mitch McConnell wrote that the “Electoral College served to center the post-election battles in Florida,” preventing the “vote recounts and court battles in nearly every state of the Union” that, McConnell assures us, would have occurred in the college’s absence. But as Timothy Noah pointed out in The New Republic in 2012, what McConnell’s argument “fails to realize is that when you’re assembling one big count rather than a lot of little ones it’s a lot less clear what’s to be gained from rigging any of the little ones.” If what matters is the popular vote, what happens in any one location doesn’t matter so much; hence, stealing votes in downstate Illinois won’t allow you to steal the entire state—just as, with enough samples or experiments run, the fact that the lab assistant was drowsy at the time she recorded one set of results won’t matter so much. Or why deliberately losing a single game in July matters far less than tanking a game of the World Series.

Put in such a way, it’s hard to see how anyone without a vested stake in the construction of the present system could defend the Electoral College—yet, as I suspect we are about to see, the very people now ascribing Donald Trump’s victory to the racism of the American voter will soon be doing just that. The reason will be precisely the same reason that such advocates want to blame racism, rather than the ongoing thievery of economic elites, for the rejection of Clinton: because racism is a “cultural” phenomenon, and most left-wing critics of the United States now obtain credentials in “cultural,” rather than scientific, disciplines.

If, in other words, Donald Trump’s victory was due to a complex series of renegotiations of the global contract between capital and labor, then that would require experts in economic and other, similar, disciplines to explain it; if his victory was due to racism, however—racism being considered a cultural phenomenon—then it will call forth experts in “cultural” fields. Because those with “liberal” or “leftist” political leanings now tend to gather in “cultural” fields, they will (indeed, must) now attempt to shift the battleground towards their areas of expertise. That shift, I would wager, will in turn set those who argue for “cultural” explanations of the rise of Trump against arguments for the elimination of the Electoral College.

The reason is not difficult to understand: it isn’t too much to say, in fact, that one way to define the study of the humanities is to say it comprises the disciplines that largely ignore, or even oppose, the Law of Large Numbers both as a practical matter and as a philosophic one. As literary scholar Franco Moretti, now of Stanford, observed in his Atlas of the European Novel, 1800-1900, just as “silver fork novels”—a genre published in England between the 1820s and the 1840s—do not “show ‘London,’ but only a small, monochrome portion of it,” so too does the average student of literature not really study her ostensible subject matter. “I work on west European narrative between 1790 and 1930, and already feel like a charlatan outside of Britain and France,” Moretti confesses in an essay entitled “Distant Reading”—and even then, he only works “on its canonical fraction, which is not even 1 percent of published literature.” As Joshua Rothman put the point in a New Yorker profile of Moretti a few years ago, Moretti instead insists that “if you really want to understand literature, you can’t just read a few books or poems over and over,” but instead “you have to work with hundreds or even thousands of texts at a time”—that is, he insists on the significance of the Law of Large Numbers in his field, an insistence whose very novelty demonstrates how literary study is a field that has historically resisted precisely that recognition.

In order to proceed, in other words, disciplines like literary study or art history—or even history itself—must argue for the representativeness of a given body of work: usually termed, at least in literary study, “the Canon.” Such disciplines are already, simply by their very nature, committed to the idea that it is not necessary to read all of what Moretti says is the “thirty thousand nineteenth-century British novels out there” in order to arrive at conclusions about the nineteenth-century British novel: in the first place, “no one really knows” how many there really are (there could easily be twice as many), and in the second “no one has read them [all], [and] no one ever will.” In order to get off the ground, such disciplines must necessarily deny the Law of Large Numbers: as Moretti says, “you invest so much in individual texts only if you think that very few of them really matter”—a belief with an obvious political corollary. Rejection of the Law of Large Numbers is thus, as Moretti also observes, “an unconscious and invisible premiss” for most who study such fields—which is to say that although students of the humanities often make claims for the political utility of their work, they sometimes forget that the enabling presuppositions of their fields are inherently those of the pre-Enlightenment ancien régime.

Perhaps that’s why—as Joe Pinsker observed in a fascinating, but short, article for The Atlantic several years ago—studies of college students find that those “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” while students “whose parents make more money flock to history, English, and the performing arts”: the baseline assumptions of those disciplines are, no matter the particular predilections of a given instructor, essentially aristocratic, not democratic. To put it most baldly, the disciplines of the humanities must reject the premise of the Law of Large Numbers, which says that as more examples are added, the closer we approach to the truth—a point that can be directly witnessed when, for instance, English professor Michael Bérubé of Pennsylvania State University observes that the “humanists at [his] end of the [academic] hallway roundly dismissed” Harvard biologist E.O. Wilson’s book, Consilience: The Unity of Knowledge for arguing that “all human knowledge can and eventually will be unified under the rubric of the natural sciences.” Rejecting the Law of Large Numbers is foundational to the very operation of the humanities: without making that rejection, they cannot exist.

In recent decades, of course, Franco Moretti has presumably not been the only professor in the humanities to realize that these disciplines stand on a collision course with the Law of Large Numbers—which may explain why disciplines like literature and others have, for years, been actively recruiting among members of minority groups. The institutional motivations of such hiring, in other words, ought to be readily apparent: by making such hires, departments of the humanities could insulate themselves from charges from the political left—while at the same time continuing the practices that, without such cover, might have appeared increasingly anachronistic in a democratic age. Minority hiring, that is, may not be so politically “progressive” as its defenders sometimes argue: it may, in fact, have prevented the intellectual reforms within the humanities urged by people like Franco Moretti for a generation or more. Of course, by joining such departments, members of minority groups also may have, consciously or not, tied their own fortunes to a philosophic rejection of concepts like the Law of Large Numbers—as African-American sportswriter Michael Wilbon, of ESPN fame, wrote this past May, black people supposedly have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” I suspect then that many who claim to be on the political left will soon come out to defend the Electoral College. If that happens, then in one last cruel historical irony the final defenders of American slavery may end up being precisely those whom slavery was meant to oppress.

Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.

 

There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view of American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that, many have thought, has long been due in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state chooses a number of “electors” equal to that state’s representation in Congress; i.e., the number of representatives each state is entitled to by population, plus its two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives introduced a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president who has not won the most votes. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (perhaps the most exciting game in Series history, and the only seventh game ever to end with a ninth-inning, walk-off home run) and thus won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million votes. But Harrison not only won more states than Cleveland, but also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues that Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that the total number of popular votes does not matter, but rather how those votes are spread around the country.

In this argument, then, games are to states just as runs are to votes. It’s an analogy that has an easy appeal to it: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire analogy), and so that understanding appears to transfer easily to the matter of presidential elections. Yet, clever as the analogy is, most people misunderstand the purpose of the World Series: although people think the task of the Series is to identify the best baseball team in the major leagues, that is not what it is designed to do. It is not the purpose of the World Series to discover the best team in baseball, but instead to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams play each other. And the Giants saw no reason to do so.

As the legendary Giants manager John McGraw said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance of winning a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” But just how much luck is involved is something that the average fan hasn’t considered—though former Caltech physicist Leonard Mlodinow has. In Mlodinow’s book, The Drunkard’s Walk: How Randomness Rules Our Lives, the scientist writes that—just by virtue of doing the math—it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: let’s say that, for every game, we roll a one-hundred-sided die to determine whether the team with the 55 percent edge wins or not. If we do that four times, there’s still a good chance that the inferior team remains alive in the series: that is, that the superior team has not won all four games. In fact, there’s a real possibility that the inferior team might turn the tables and sweep the superior team instead. Seven games, in short, are simply not enough to demonstrate conclusively that one team is better than another.
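Rather than rolling Mlodinow’s hundred-sided die by hand, the arithmetic can be done exactly. The sketch below is my own check, not Mlodinow’s code: it treats a best-of-seven as seven games played out in full, which yields the same winner, and sums the binomial probabilities of the weaker team taking at least four of them. The results come out to roughly the four-in-ten and one-in-five figures he quotes.

```python
from math import comb

def weaker_wins_series(p_strong: float, games: int = 7) -> float:
    """Chance the weaker team wins a best-of-`games` series, where the
    stronger team wins any single game with probability `p_strong`."""
    q = 1 - p_strong
    wins_needed = games // 2 + 1
    return sum(comb(games, k) * q**k * p_strong**(games - k)
               for k in range(wins_needed, games + 1))

print(f"55-45 edge : weaker team still wins {weaker_wins_series(0.55):.2f} of series")
print(f"2-to-1 edge: weaker team still wins {weaker_wins_series(2/3):.2f} of series")
```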

In fact, in order to eliminate randomness as much as possible—that is, to make it as likely as possible for the better team to win—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even in a case where one team has a two-thirds likelihood of winning any given game, it would take 23 games to push the chance of the weaker team winning the series below 5 percent—and even then, the weaker team could still win. Winning a seven-game series, then, is mathematically close to meaningless: seven games are far too few to rule out the possibility that the lesser team simply got lucky.

Just how little a seven-game series proves can be seen in the case of two closely matched teams: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on to say, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emp. added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which, as a matter of the history of the World Series, is a point that baseball professionals have long acknowledged: the World Series is not a competition, but an exhibition.
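Extending the same calculation, one can search for the shortest series that holds the weaker team below the five percent threshold Mlodinow is using. This is again my own sketch rather than his method, with the binomial helper repeated so the snippet stands on its own; it should land at or very near his 23- and 269-game figures.

```python
from math import comb

def weaker_wins_series(p_strong: float, games: int) -> float:
    """Chance the weaker team wins a best-of-`games` series."""
    q = 1 - p_strong
    wins_needed = games // 2 + 1
    return sum(comb(games, k) * q**k * p_strong**(games - k)
               for k in range(wins_needed, games + 1))

def shortest_significant_series(p_strong: float, alpha: float = 0.05) -> int:
    """Smallest odd series length keeping the weaker team's chance below alpha."""
    games = 1
    while weaker_wins_series(p_strong, games) >= alpha:
        games += 2   # a "best of" series always has an odd length
    return games

print("2/3 per-game edge :", shortest_significant_series(2/3), "games")
print("55% per-game edge :", shortest_significant_series(0.55), "games")
```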

What the New York Giants were saying in 1904, then—and Mlodinow more recently—is that establishing the real worth of something requires a lot of trials: many, many repetitions. That’s something all of us ought to know from experience: to learn anything, for instance, requires a lot of practice. (Even if the famous “10,000 hour rule” that New Yorker writer Malcolm Gladwell concocted for his book Outliers: The Story of Success has been complicated by the researchers whose work Gladwell drew upon.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, to use the more natural language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite reason from the one Natapoff intended. Namely, if the Electoral College is like the World Series, and the World Series is designed not to find the best team in baseball but merely to put on an exhibition, then the Electoral College is not a serious attempt to find the best president—because what the Law appears to advise is that the way to obtain a better result is to gather more voters into a single pool.

Yet the currently fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or just things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) a vote is cast, rather than whether it was cast? To many, if not most, literary intellectuals the answer clearly favors the former at the expense of the latter—but as any statistician will tell you, it’s possible for a run of luck to continue quite a bit longer than the average person might expect. (That’s one reason why it would take as many as 269 games to drive down the randomness between two closely matched baseball teams.) Even so, it remains difficult to believe—as it would seem that many today, both within and without the academy, do—that the umpire can continue to call every pitch a strike.

 

So Small A Number

How chance the King comes with so small a number?
The Tragedy of King Lear. Act II, Scene 4.

 

Who killed Michael Brown, in Ferguson, Missouri, in 2014? According to the legal record, it was police officer Darren Wilson who, in August of that year, fired twelve bullets at Brown during an altercation in Ferguson’s streets—the last being, said the coroner’s report, likely the fatal one. According to the protesters against the shooting (the protest that evolved into the #BlackLivesMatter movement), the real culprit was the racism of the city’s police department and civil administration; a charge that gained credibility later when questionable emails written by, and sent to, city employees became public knowledge. In this account, the racism of Ferguson’s administration itself simply mirrored the racism that is endemic to the United States; Darren Wilson’s thirteenth bullet, in short, was racism. Yet, according to the work of Radley Balko of the Washington Post, among others, the issue that lay behind Brown’s death was not racism, per se, but rather a badly-structured political architecture that fails to consider a basic principle of reality banally familiar to such bastions of sophisticated philosophic thought as Atlantic City casinos and insurance companies: the idea that, in the words of the New Yorker’s Malcolm Gladwell, “the safest and most efficient way to provide [protection]” is “to spread the costs and risks … over the biggest and most diverse group possible.” If that is so, then perhaps Brown’s killer was whoever caused Americans to forget that principle—which would make Brown’s killer a Scottish philosopher who lived more than two centuries ago: the sage of skepticism, David Hume.

Hume is well known in philosophical circles for, among other contributions, describing what has come to be called the “is-ought problem”: in his early work, A Treatise of Human Nature, Hume said his point was that “the distinction of vice and virtue is not founded merely on the relations of objects”—or, that just because reality is a certain way does not mean it ought to be that way. The British philosopher G.E. Moore later called the act of mistaking is for ought the “naturalistic fallacy”: in 1903’s Principia Ethica, Moore asserted (as J.B. Schneewind of Johns Hopkins has paraphrased it) that “claims about morality cannot be derived from statements of facts.” It’s a claim, in other words, that serves to divide questions of morality, or values, from questions of science, or facts—and, as should be self-evident, the work of the humanities requires an intellectual claim of this form in order to exist. If morality, after all, were amenable to scientific analysis, there would be little reason for the humanities.

Yet, there is widespread agreement among many intellectuals that the humanities are not subject to scientific analysis, and specifically because only the humanities can tackle subjects of “value.” Thus, for instance, we find professor of literature Michael Bérubé, of Pennsylvania State University—an institution noted for its devotion to truth and transparency—scoffing “as if social justice were a matter of discovering the physical properties of the universe” when faced with doubters like Harvard biologist E. O. Wilson, who has had the temerity to suggest that the humanities could learn something from the sciences. And, Wilson and others aside, even some scientists subscribe to some version of this split: the biologist Stephen Jay Gould, for example, echoed Moore in his essay “Non-Overlapping Magisteria” by claiming that while the “net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory),” the “net of religion”—which I take in this instance as a proxy for the humanities generally—“extends over questions of moral meaning and value.” Other examples could be multiplied.

How this seemingly arid intellectual argument affected Michael Brown can be directly explained, albeit not easily. Perhaps the simplest route is by reference to the Malcolm Gladwell article I have already cited: the 2006 piece entitled “The Risk Pool.” In a superficial sense, the text is a social history about the particulars of how social insurance and pensions became widespread in the United States following the Second World War, especially in the automobile industry. But in a more inclusive sense, “The Risk Pool” is about what could be considered a kind of scientific law—or, perhaps, a law of the universe—and how, in a very direct sense, that law affects social justice.

In the 1940s, Gladwell tells us, the leader of the United Auto Workers union was Walter Reuther—a man who felt that “risk ought to be broadly collectivized.” Reuther thought that providing health insurance and pensions ought to be a function of government: that way, the largest possible pool of laborers would be paying into a system that could provide for the largest possible pool of recipients. Reuther’s thought, that is, most determinedly centered on issues of “social justice”: the care of the infirm and the aged.

Reuther’s notions, however, could also be thought of in scientific terms: as an instantiation of what is called, by statisticians, the “law of large numbers.” According to Caltech physicist Leonard Mlodinow, the law of large numbers can be described as “the way results reflect underlying probabilities when we make a large number of observations.” A more colorful way to think of it is the way trader and New York University professor Nassim Taleb puts it in his book, Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets: there, Taleb observes that, were Russian roulette a game in which the survivors gained the savings of the losers, then “if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday—but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).” In general, the law of large numbers is how casinos (or investment banks) make money legally (and bookies make it illegally): by taking enough bets (which thereby cancel each other out) the institution, whether it is located in a corner tavern or on Wall Street, can charge customers for the privilege of betting—and never take the risk of failure that would accrue were that institution to bet one side or another. Less concretely, the same law is what allows us to assert confidently a belief in scientific results: because they can be repeated again and again, we can trust that they reflect something real.
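Taleb’s Russian-roulette arithmetic is worth spelling out. The sketch below is mine, and the size of the cohort is an arbitrary choice: each annual round is survived with probability five in six, so reaching fifty means surviving twenty-five independent rounds in a row.

```python
players = 1_000                 # hypothetical cohort of twenty-five-year-olds
p_one_round = 5 / 6             # chance of surviving a single pull of the trigger
p_reach_fifty = p_one_round ** 25

print(f"chance one player survives to fifty: {p_reach_fifty:.1%}")
print(f"expected survivors out of {players:,}: {players * p_reach_fifty:.0f}")
```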

Reuther’s argument about social insurance and pensions more or less explicitly mirrors that law: like a casino, the idea of social insurance is that, by including enough people, there will be enough healthy contributors paying into the fund to balance out the sick people drawing from it. In the same fashion, a pension fund works by ensuring that there are enough productive workers paying into the pension to cancel out the aged people receiving from it. In both casinos and pension funds, in other words, the only means by which they can work is by having enough people included in them—if there are too few, the fund or casino takes the risk that the numbers of those drawing out exceed those paying in, at which point the operation fails. (In gambling, this is called “breaking the bank”; Ward Wilson pithily explains why that doesn’t happen very often in his learned tome, Gambling for Winners; Your Hard-Headed, No B.S., Guide to Gaming Opportunities With a Long-Term, Mathematical, Positive Expectation: “the casino has more money than you.”) Both casinos and insurance funds must have large numbers of participants in order to function: as numbers decrease, the risk of failure increases. Reuther therefore thought that the safest possible way to provide social protection for all Americans was to include all Americans.
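The same logic explains why pool size is everything for an insurer or a pension fund. The following toy simulation is my own, with made-up numbers: every member has a ten percent chance of a ten-unit claim in a given year and pays a premium of one and a half units, and the pool “fails” in any year when claims outrun the premiums collected.

```python
import random

random.seed(2)

def failure_rate(members: int, trials: int = 5_000) -> float:
    """Share of simulated years in which claims exceed the premiums collected."""
    premiums = members * 1.5
    failures = 0
    for _ in range(trials):
        claims = sum(10 for _ in range(members) if random.random() < 0.1)
        failures += claims > premiums
    return failures / trials

for size in (10, 100, 1_000):
    print(f"pool of {size:>5,} members: fails in {failure_rate(size):.1%} of years")
```

With these invented figures the ten-member pool goes under in roughly a quarter of simulated years, while the thousand-member pool essentially never does; that shrinking risk is Reuther’s point in miniature.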

Yet, according to those following Moore’s concept of the “naturalistic fallacy,” Reuther’s argument would be considered an illicit intrusion of scientific ideas into the realm of politics, or “value.” Again, that might appear to be an abstruse argument between various schools of philosophers, or between varieties of intellectuals, scientific and “humanistic.” (It’s an argument that, in addition to ceding to the humanities the domain of “value,” also cedes them categories like stylish writing—as if scientific arguments could only be expressed in equations rather than in prose of any quality, and as if there were no scientists who were brilliant writers and no humanist scholars who were awful ones.) But while in one sense this argument takes place in very rarefied air, in another it takes place on the streets where we live. Or, more specifically, the streets where Michael Brown was shot and killed.

The problem of Ferguson, Radley Balko’s work for the Washington Post tells us, is not one of “race,” but instead a problem of poor people. More exactly, a problem of what happens when poor people are excluded from larger population pools—or in other words, when the law of large numbers is excluded from discussions of public policy. Balko’s story draws attention to two inarguable facts: the first, that there “are 90 municipalities in St. Louis County”—Ferguson’s county—and nearly all of them “have their own police force, mayor, city manager and town council,” while 81 of those towns also have their own municipal courts, empowered to sentence lawbreakers to fines. By contrast, Balko draws attention to Missouri’s second-largest urban county by population: Kansas City’s Jackson County, which is both “geographically larger than St. Louis County and has about two-thirds the population”—and yet “has just 19 municipalities, and just 15 municipal courts.” Comparing the two counties, that is, shows that St. Louis County is far more segmented than Jackson County: there are many more, and smaller, population pools in the one than in the other.

Given what we know about the law of large numbers, then, it might not be surprising that a number of the many municipalities of St. Louis County are worse off than the few municipalities of Jackson County: in St. Louis County some towns, Balko reports, “can derive 40 percent or more of their annual revenue from the petty fines and fees collected by their municipal courts”—rather than, say, property taxes. That, it seems likely, is because instead of many property owners paying taxes, there are a large number of renters paying rent to a small number of landlords, who in turn are wealthy enough to minimize their tax burden by employing tax lawyers and other maneuvers. Because these towns thus cannot depend on property tax revenue, they must instead depend on the fines and fees the courts can recoup from residents: an operation that, because of the chaos this necessarily creates in the lives of those citizens, usually results in more poverty. (It’s difficult to apply for a job, for example, if you are in jail for failing to pay a parking ticket.) Yet, if the law of large numbers is excluded a priori from political discussion—as some in the humanities insist it must be, whether out of disciplinary self-interest or some other reason—then residents of Ferguson cannot address the real causes of their misery, a fact that may explain just why those addressing the problems of Ferguson focus so much on “racism” rather than the structural issues raised by Balko.

The trouble, however, with identifying “racism” as an explanation for Michael Brown’s death is that it leads to a set of “solutions” that do not address the underlying issue. In the November following Brown’s death, for example, Trymaine Lee of MSNBC reported that the federal Justice Department “held a two-day training with St. Louis area police on implicit racial bias and fair and impartial policing”—as if the problem of Ferguson were wholly the fault of the police department, or even of the town administration as a whole. Not long afterwards, the Department of Justice reported (according to Ray Sanchez of CNN) that, while Ferguson is 67% African-American, in the two years prior to Brown’s death “85% of people subject to vehicle stops by Ferguson police were African-American,” while “90% of those who received citations were black and 93% of people arrested were black”—data that seem to imply that, were those numbers only closer to 67%, there would be no problem in Ferguson.

Yet, even if the people arrested in Ferguson were arrested in proportion to their share of the population, that would have no effect on the reality that—as Mike Maciag of Governing reported shortly after Brown’s death—“court fine collections [accounted] for one-fifth of [Ferguson’s] total operating revenue” in the years leading up to the shooting. The problem of Ferguson, in other words, isn’t that its residents are black—as if the town’s troubles could be solved by, say, firing all the white police officers and hiring black ones. Ferguson’s difficulty is not just that its citizens are poor, but that they are politically isolated.

There is, in sum, a fundamental reason that the doctrine of “separate but equal” is not merely bad for American schools, as the Supreme Court held in the 1954 decision of Brown v. Board of Education, the landmark case that began the dismantling of Jim Crow in the American South. That reason is the same at all scales: from the particle collider at CERN exploring the essential constituents of the universe to the roulette tables of Las Vegas to the Social Security Administration, the greater the number of inputs, the greater the certainty, and hence safety, of the results. Instead of affirming that law of the universe, however, the work of people like Michael Bérubé and others is devoted to questioning whether universal laws exist—in other words, to resisting the encroachment of the sciences on their turf. Perhaps that resistance is somehow helpful in some larger sense; perhaps it is true that, as is often claimed, the humanities enlarge our sense of what it means to be human, among the other benefits sometimes described—I make no claims on that score.

What’s absurd, however, is the monopolistic claim sometimes retailed by Bérubé and others that the humanities have an exclusive right to political judgment: if Michael Brown’s death demonstrates anything, it ought (a word I use without apology) to show that, by promoting the idea of the humanities as distinct from the sciences, humanities departments have in fact collaborated (another word I use without apology) with people who have a distinct interest in promoting division and discord for their own ends. That doesn’t mean, of course, that anyone who has ever read a novel or seen a film helped to kill Michael Brown. But, just as it is so that institutions that cover up child abuse—like the Catholic Church or certain institutions of higher learning in Pennsylvania—bear a responsibility to their victims, so too is there a danger in thinking that the humanities have a monopoly on politics. Darren Wilson did have a thirteenth bullet, though it wasn’t racism. Who killed Michael Brown? Why, if you think that morality should be divided from facts … you did.

Hot Shots

 

… when the sea was calm all boats alike
Show’d mastership in floating …
—William Shakespeare.
     Coriolanus Act IV, Scene 3 (1608).

 

 

“Indeed,” wrote the Canadian scholar Marshall McLuhan in 1964, “it is only too typical that the ‘content’ of any medium blinds us to the character of the medium.” It was once a well-known line among literate people, though it is much less so now. It occurred to me recently, however, as I read an essay by Walter Benn Michaels of the University of Illinois at Chicago, in the course of which Michaels took issue with Matthew Yglesias of Vox. Yglesias, Michaels tells us, tried to make the argument that

although “straight white intellectuals” might tend to think of the increasing economic inequality of the last thirty years “as a period of relentless defeat for left-wing politics,” we ought to remember that the same period has also seen “enormous advances in the practical opportunities available to women, a major decline in the level of racism … and wildly more public and legal acceptance of gays and lesbians.”

Michaels replies to Yglesias’ argument by noting that “10 percent of the U.S. population now earns just under 50 percent of total U.S. income,” a figure that is, unfortunately, just the tip of the economic iceberg when it comes to inequality in America. But the real problem, the one Michaels’ reply does not do justice to, is that there is a logical flaw in the kind of “left” we have now: one that advocates for the rights of minorities rather than laboring for the benefit of the majority. It is, in other words, a “cultural” left rather than a scientific one: the kind we had when, in 1910, the American philosopher John Dewey could write (without being laughed at) that Darwin’s Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” Why that distinction matters is something the physicist Freeman Dyson discovered when he was just twenty years old, after Winston Churchill’s government paid him to think about what was really happening in the flak-filled skies over Berlin.

The British had a desperate need to know, because they were engaged in the business of bombing Nazi Germany back to, at the least, the Renaissance. Hence they employed Dyson as a statistician, to analyze the operations of Britain’s Bomber Command. Specifically, Dyson was to investigate whether bomber crews “learned by experience”: whether, that is, the more missions a crew flew, the better it became at blowing up Germany, and the Germans in it. Obviously, if crews did learn, then Bomber Command could try to isolate what the experienced crews were doing and teach it to the others, so that Germany and the Germans might be blown up better.

The bomber crews themselves believed, Dyson tells us, that as “they became more skillful and more closely bonded, their chances of survival would improve,” a belief that, for obvious reasons, was “essential to their morale.” But as Dyson went over the statistics of lost bombers, examining the relation between experience and loss rates while controlling for the effects of weather and geography, he discovered the terrible truth:

“There was no effect of experience on loss rate.”

The lives of each bomber crew, in other words, depended on chance, not skill, and the crews’ belief in their own expertise was just an illusion in the face of horror; an illusion that becomes all the more awful when you know that, of the roughly 125,000 men who served as aircrew in Bomber Command, 55,573 were killed in action.
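
To see how an entire air force could mistake luck for skill, consider a minimal simulation of my own; it is a sketch, not Dyson’s actual analysis, and the loss rate, fleet size, and tour length in it are invented for illustration. Every simulated crew faces the same fixed chance of being shot down on every sortie, and the question is whether the “veterans” show any better per-sortie loss rate than the novices.

```python
# A sketch only, with invented numbers: every crew has the same per-sortie loss
# probability, so any apparent advantage of "veteran" crews can only be an illusion.
import random

LOSS_RATE = 0.04   # assumed constant chance of being lost on any one sortie
N_CREWS = 10_000   # hypothetical number of crews
N_SORTIES = 30     # notional tour length

novice_losses = novice_sorties = 0
veteran_losses = veteran_sorties = 0

random.seed(0)
for _ in range(N_CREWS):
    for sortie in range(1, N_SORTIES + 1):
        veteran = sortie > 10            # call a crew "veteran" after ten trips
        if veteran:
            veteran_sorties += 1
        else:
            novice_sorties += 1
        if random.random() < LOSS_RATE:  # chance, not skill, decides the outcome
            if veteran:
                veteran_losses += 1
            else:
                novice_losses += 1
            break                        # a lost crew flies no further missions

print(f"novice loss rate per sortie:  {novice_losses / novice_sorties:.3f}")
print(f"veteran loss rate per sortie: {veteran_losses / veteran_sorties:.3f}")
```

Both printed rates come out at roughly the four percent that was built in: surviving long enough to become a veteran is evidence of luck, not of learning, which is the shape of the result Dyson reports finding in the real data.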

“Statistics and simple arithmetic,” Dyson therefore concluded, “tell us more about ourselves than expert intuition”: a cold lesson to learn, particularly at the age of twenty—though that can be tempered by the thought that at least it wasn’t Dyson’s job to go to Berlin. Still, the lesson is so appalling that perhaps it is little wonder that, after the war, it was largely forgotten, and has only been taken up again by a subject nearly as joyful as the business of killing people on an industrial scale is horrifying: sport.

In one of the most cited papers in the history of psychology, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Thomas Gilovich, Robert Vallone and Amos Tversky studied how “players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot,” but found that “detailed analysis … provided no evidence for a positive correlation between the outcomes of successive shots.” Just as the British airmen believed some crews had a “skill” that kept them in the air, when in fact all that kept them aloft was, say, the poor aim of a German anti-aircraft gunner or a happily-timed cloud, so too did the three co-authors find that, in basketball, people believed some shooters could get “hot”: that is, reel off seemingly impossible numbers of shots in a row, as when Ben Gordon, then with the Chicago Bulls, knocked down 9 consecutive three-pointers against Washington in 2006. But such streaks are the work of chance, not of a player suddenly acquiring extra skill: toss a coin enough times and the coin will produce “runs” of heads and tails too.
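
The coin-toss point is easy to check for oneself. The sketch below is illustrative only: the forty percent shooting figure and the thousand attempts are my assumptions, not numbers from the Gilovich, Vallone and Tversky paper. It gives a simulated shooter a fixed, never-changing chance of making each shot and then measures his longest streak.

```python
# A sketch with assumed numbers: a shooter with a constant 40% chance per shot,
# and no "hot" state at all, still produces long runs of makes by chance alone.
import random

MAKE_PROB = 0.40   # assumed constant shooting percentage
N_SHOTS = 1_000    # shots attempted over a stretch of games

random.seed(1)
longest = current = 0
for _ in range(N_SHOTS):
    if random.random() < MAKE_PROB:  # each shot is independent of the last
        current += 1
        longest = max(longest, current)
    else:
        current = 0

print(f"longest run of makes from pure chance: {longest}")
```

Even with nothing but a fixed percentage behind it, such a simulation routinely turns up runs of six, seven, or more consecutive makes; the streaks are real, but they are the signature of chance, not of heat.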

The “hot hand” concept in fact applies to more than simply the players: it extends to coaches also. “In sports,” says Leonard Mlodinow in his book The Drunkard’s Walk: How Randomness Rules Our Lives, “we have developed a culture in which, based on intuitive feelings of correlation, a team’s success or failure is often attributed largely to the ability of the coach,” a reality that perhaps explains why, as Florida’s Lakeland Ledger reported in 2014, the average tenure of NFL coaches over the past decade has been 38 months. Yet as Mlodinow also says, “[m]athematical analysis of firings in all major sports … has shown that those firings had, on average, no effect on team performance”: fans (and perhaps more importantly, owners) tend to think of teams rising and falling with their coach, while in reality a team’s success has more to do with the talent on its roster.

Yet while sports are a fairly trivial part of most people’s lives, that is not true when it comes to our “coaches”: the managers that run large corporations. As Diane Stafford found for the Kansas City Star a few years back, American corporations have as little sense of the real value of CEOs as NFL owners have of their coaches: the “pay gap between large-company CEOs and average American employees,” Stafford said, “vaulted from 195 to 1 in 1993 to 354 to 1 in 2012.” Meanwhile, more than a third “of the men who appeared on lists ranking America’s 25 highest-paid corporate leaders between 1993 and 2012 have led companies bailed out by U.S. taxpayers, been fired for poor performance or led companies charged with fraud.” Just like the Lancasters flown by the crews Dyson studied, American workers (and their companies’ stockholders) have been taken for a ride by men flying on luck, not skill.

Again, of course, many in what’s termed the “cultural” left would insist that they, too, stand with American workers against the bosses; that they, too, wish things were better; and that they, too, think paying twenty bucks for a hot dog and a beer is an outrage. What matters, however, isn’t what professors or artists or actors or musicians or the like say, just as it didn’t matter what Britain’s bomber pilots thought about their own skills during the war. What matters is what their jobs say. And the fact of the matter is that cultural production, whether in academia or in New York or in Hollywood, rests on the same premise as thinking you’re a hell of a pilot, or that you must be “hot,” or that Phil Jackson is a genius. That might sound counterintuitive, of course (I thought writers and artists and, especially, George Clooney were all on the side of the little guy!), but, as McLuhan says, what matters is the medium, not the message.

The point is likely easiest to explain in terms of the academic study of the humanities, because at least there people are forced to explain themselves in order to keep their jobs. What one finds, across the political spectrum, is some version of the same dogma: students in literary studies can, for instance, refer to the American novelist James Baldwin’s insistence, in the 1949 essay “Everybody’s Protest Novel,” that “literature and sociology are not the same,” while, at the other end of the political spectrum, political science students can refer to Leo Strauss’ attack on “the ‘scientific’ approach to society” in his 1958 Thoughts on Machiavelli. Every discipline in the humanities has some version of the point, because without such a doctrine it could not exist: without it, there would be just a bunch of people sitting in a room reading old books.

The effect of these dogmas can perhaps best be seen by reference to the philosophical version, which has the benefit of at least being clear. David Hume gave it its earliest formulation, in what is now called the “is-ought problem”: as the Scotsman claimed in A Treatise of Human Nature, “the distinction of vice and virtue is not founded merely on the relations of objects.” Later, in 1903’s Principia Ethica, the British philosopher G.E. Moore called the same point the “naturalistic fallacy”: the idea that, as J.B. Schneewind of Johns Hopkins has put it, “claims about morality cannot be derived from statements of facts.” The advantage for philosophers is clear enough: if it is impossible to talk about morality or ethics strictly by the light of science, that certainly justifies talking about philosophy to the exclusion of anything else. But in light of the facts about shooting hoops, or about being shot down over Germany, I would hope that the absurdity of Moore’s “idea” is self-evident: if it can be demonstrated that something is a matter of luck, and not skill, that changes the moral calculation drastically.

That, then, is the problem with running a “left” based around the study of novels or rituals or films or whatever: at the end of the day, the study of the humanities, just like the practice of the arts, discourages the recognition that, as Mlodinow puts it, “chance events are often conspicuously misinterpreted as accomplishments or failures.” And without that recognition, I would suggest, any talk of “values” or “morality” or whatever you would like to call it is empty. It matters whether your leader is lucky or skillful; it matters whether success is the result of hard work or of who your parents are; and a “left” built on the opposite premises is not, to my mind, a “left” at all. Many in the “cultural left” may believe that their overt exhortations to virtue outweigh the covert message told by their institutional positions, but reality tells a different tale: if you tell people they can fly, you should not be shocked when they crash.