Blind Shots

… then you are apt to have what one of the tournament’s press bus drivers
describes as a “bloody near-religious experience.”
—David Foster Wallace. “Roger Federer As Religious Experience.” The New York Times, 20 Aug. 2006.

Not much gets by the New York Times, unless it’s the non-existence of WMDs—or the rules of tennis. The Gray Lady is bamboozled by the racquet game: “The truth is,” says The New York Times Guide to Essential Knowledge, Third Edition, that “no one knows for sure how … the curious scoring system came about.” And in what might be an example of the Times’ famously droll sense of fun, an article by Stuart Miller entitled “Quirks of the Game: How Tennis Got Its Scoring System” not only fails to provide the answer its title promises, but addresses its ostensible subject only by noting that “No one can pinpoint exactly when and how” that subject came into existence. So much, one supposes, for reportorial tenacity. Yet despite the failure of the Times, there is an explanation for tennis’ scoring system—one so simple that, while the Times’ inability to see it is amusing, it also leads to disquieting thoughts about what else the Times can’t see. That’s because solving the mystery of why tennis is scored the way it is could also explain a great deal about political reality in the United States.

To be fair, the Times is not alone in its befuddlement: “‘It’s a difficult topic,’” says one “Steve Flink, an historian and author of ‘The Greatest Tennis Matches of All Time,’” in the “How Tennis Got Its Scoring System” story. So far as I can tell, all tennis histories are unclear about the origins of the scoring system: about all anyone knows for sure—or at least, is willing to put on paper—is that (as Rolf Potts put it in an essay for The Smart Set a few years ago) when modern lawn tennis was codified in 1874, it “appropriated the scoring system of the ancient French game” of jeu de paume, or “real tennis” as it is known in English. The origins of the modern game of tennis, all the histories agree, lie in this older game—and nowhere more so than in the scoring system.

Yet, while that does push back the origins of the system a few centuries, no one seems to know why jeu de paume adopted the system it did, other than to observe that the scoring breakdowns of 15, 30, and 40 seem to be, according to most sources, allusions to the face of a clock. (Even the Times, it seems, is capable of discovering this much: the numbers of the points, Miller says, appear “to derive from the idea of a clock face.”) But far more important than the “15-30-40” numbering is the fact that the scoring system is qualitatively different from that of virtually every other sport—a difference even casual fans are aware of, and yet one that even the most erudite historians, so far as I am aware, cannot explain.

Psychologist Allen Fox once explained the difference in scoring systems in Tennis magazine: whereas, the doctor said, the “score is cumulative throughout the contest in most other sports, and whoever has the most points at the end wins,” in tennis “some points are more important than others.” A tennis match, in other words, is divided up into games, sets, and matches: instead of adding up all the points each player scores at the end, tennis “keeps score” by counting the numbers of games, and sets, won. This difference, although it might appear trivial, actually isn’t—and it’s a difference that explains not only a lot about tennis, but much else besides.

Take the case of Roger Federer, who has won 17 major championships in men’s tennis: the all-time record in men’s singles. Despite this dominating record, many people argue that he is not the sport’s Greatest Of All Time—at least, according to New York Times writer Michael Steinberger. Not long ago, Steinberger said that the reason people can argue that way is that Federer “has a losing record against [Rafael] Nadal, and a lopsided one at that.” (Currently, the record stands at 23-10 in favor of Nadal—a nearly 70% edge.) Steinberger’s article—continuing the pleasing simplicity in the titles of New York Times tennis articles, it’s named “Why Roger Federer Is The Greatest Of All Time”—then goes on to argue that Federer should be called the “G.O.A.T.” anyway, record be damned.

Yet weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—to do what an anonymous blogger did in 2009: a feat that demonstrates just why tennis’ scoring system is so curious, and why it has implications, perhaps even sinister implications from a certain point of view, far beyond tennis. What that blogger did, on a blog entitled SW19—the postal code for Wimbledon, site of the All England Club—was very simple.

He counted up the points.

In any other sport, with a couple of exceptions, that act might seem utterly banal: to see who’s better, you’d count up how many points one player scored and how many the other guy scored when they played head-to-head. But in tennis that apparently simple act is not so simple—and the reason it isn’t is what makes tennis such a different game from virtually all other sports. “In tennis, the better player doesn’t always win,” as Carl Bialik of FiveThirtyEight.com pointed out last year: because of the scoring system, what matters is whether you win “more sets than your opponent”—not necessarily more points.

That matters because the argument against Federer as the Greatest Of All Time rests on the grounds that he has a losing record against Nadal: at the time the anonymous SW19 blogger began his research in 2009, that record was 13-7 in Nadal’s favor. As the mathematically inclined already know, that record translates to a 65 percent edge for Nadal: a seemingly strong argument against Federer’s all-time greatness, since the percentage is so overwhelmingly tilted toward the Spaniard. How can the greatest player of all time be so weak against one opponent?

In fact, however, as the SW19 blogger discovered, Nadal’s seemingly insurmountable edge was an artifact of the scoring system, not a sign of Federer’s underlying weakness. In the 20 matches the two men had played up until 2009, they contested 4,394 total points: that is, occasions on which one player served and the two rallied back and forth until one of them failed to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—as baseball or basketball or football does—then it might be expected that Nadal had won about 65 percent of those 4,394 points, which would be about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have about a 65 percent edge in total points: the point total, as opposed to the match record, ought to be about 2,856 to 1,538.

Yet that, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal, 2,221, and Federer, 2,173. Further, those totals included Nadal’s victory in the 2008 French Open final—played on Nadal’s best surface, clay—in straight sets, 6-1, 6-3, 6-0. In other words, even including the epic beating at Roland Garros in 2008, Nadal had beaten Federer by a total of only 48 points over the course of their careers: roughly one percent of all the points scored.

And that is not all. If the single match of the 2008 French Open final is excluded, the margin becomes eight points. In terms of the share of points won, in other words, Nadal’s edge over an even split is about half a percentage point—and most of that edge was generated by a single match. So it may be that Federer is not the G.O.A.T., but an argument against Federer cannot coherently be based on Nadal’s “dominating” record over the Swiss—because going by the central, defining act of the sport, the act of scoring points, the two players were, mathematically speaking, virtually equal.
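For readers who want to check the arithmetic, here is a minimal sketch in Python using the totals as the SW19 blogger reported them (the figures come from the account above, not from independently re-tallied match data):

```python
# The SW19 arithmetic, restated. Point totals are those quoted above.
nadal_points, federer_points = 2221, 2173
total = nadal_points + federer_points              # 4,394 points in all

# If points tracked the 13-7 match record, Nadal "should" hold ~65% of them.
expected_nadal = round(0.65 * total)               # about 2,856 points

actual_share = nadal_points / total                # about 0.505
margin = nadal_points - federer_points             # 48 points overall

print(f"Expected under a 65% edge: {expected_nadal}")
print(f"Actual Nadal share: {actual_share:.4f} ({margin} points)")
```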

Now, many will say here that, to risk making a horrible pun, I’ve missed the point: in tennis, it will be noted, not all acts of scoring are equal, and neither are all matches. It’s important that the 2008 match was a final, not an opening round … And so on. All of which certainly could be allowed, and reasonable people can differ about it, and if you don’t understand that then you really haven’t understood tennis, have you? But there’s a consequence to the scoring system—one that makes the New York Times’ inability to understand the origins of a scoring system that produces such peculiar results something more than simply another charming foible of the matriarch of the American press.

That’s because of something else that is unusual about tennis in comparison to other sports: its propensity for gambling scandals. In recent years, this has become something of an open secret within the game: when in 2007 the fourth-ranked player in the world, Nikolay Davydenko of Russia, was investigated for match-fixing, Andy Murray—the Wimbledon champion currently ranked third in the world—“told BBC Radio that although it is difficult to prove who has ‘tanked’ a match, ‘everyone knows it goes on,’” according to another New York Times story, this one by reporter Joe Drape.

Around that same time Patrick McEnroe, brother of the famous champion John McEnroe, told the Times that tennis “is a very easy game to manipulate,” and that it is possible to “throw a match and you’d never know.” During that scandal year of 2007, the problem seemed about to break out into public awareness: in the wake of the Davydenko case the Association of Tennis Professionals, one of the sport’s governing bodies, commissioned an investigation conducted by former Scotland Yard detectives into match-fixing and other chicanery—the Environmental Review of Integrity In Professional Tennis, issued in May of 2008. That investigation resulted in four lowly-ranked players being banned from the professional ranks, but not much else.

Perhaps, however, that papering-over should not be surprising, given the history of the game. As mentioned, today’s game of tennis has its origins in the game of real tennis, or jeu de paume—a once-hugely popular game well known for its connection to gambling. “Gambling was closely associated with tennis,” as Elizabeth Wilson puts it in her Love Game: A History of Tennis, from Victorian Pastime to Global Phenomenon, and jeu de paume had a “special association with court life and the aristocracy.” Henry VIII of England, for example, was an avid player—he had courts built in several of his palaces—and, as historian Alison Weir puts it in her Henry VIII: The King and His Court, “Gambling on the outcome of a game was common.” In his 1982 history of tennis, Robert E. Gensemer points out that “monetary wagers on tennis matches soon became commonplace” as jeu de paume grew in popularity. Eventually, though, by “the close of the eighteenth century … game fixing and gambling scandals had tarnished Jeu de Paume’s reputation,” as a history of real tennis produced by an English real tennis club has put it.

Oddly, however, despite all this evidence sitting directly in front of the historians, no one, not even the New York Times, seems to have put together the connection between tennis’ scoring system and the sport’s origins in gambling. It is, apparently, something to be pitied, and then moved past: what a shame it is that these grifters keep interfering with this noble sport! But that is to put the cart before the horse. It isn’t that the sport attracts con artists—it’s rather because of gamblers that the sport exists at all. Tennis’ scoring system, in other words, was obviously designed by, and for, gamblers.

Why, in other words, should tennis break up its scoring into smaller, discrete units—so that the total number of points scored is only indirectly related to the outcome of a match? The answer to that question might be confounding to sophisticates like the New York Times, but child’s play to anyone familiar with a back-alley dice game. Perhaps that’s why places like Wimbledon dress themselves up in the “pageantry”—the “strawberries and cream” and so on—that such events have: because if people understood tennis correctly, they’d realize that were this sport played in Harlem or Inglewood or 71st and King Drive in Chicago, everyone involved would be doing time.

That’s because—as Nassim Nicholas Taleb, author of The Black Swan: The Impact of the Highly Improbable, would point out—breaking a game into smaller, discrete chunks, as tennis’ scoring system does, is—exactly, precisely—how casino operators make money. And if that hasn’t already made sense to you—if, say, it makes more sense to explain a simple, key feature of the world by reference to advanced physics rather than merely to mention the bare fact—Taleb is also gracious enough to explain how casinos make money via a metaphor drawn from that ever-so-simple subject, quantum mechanics.

Consider, Taleb asks in that book, that because a coffee “cup is the sum of trillions of very small particles” there is little chance that any cup will “jump two feet” of its own accord—despite the fact that, according to the particle physicists, that event is not outside the realm of possibility. “Particles jump around all the time,” as Taleb says, so it is indeed possible that a cup could do that. But in order to make that jump, all the particles in the cup would have to make the same leap at precisely the same time—an event so unlikely that you could wait many lifetimes of the universe without seeing it. Were any single particle in the cup to make such a leap, it would be canceled out by the leap of some other particle in the cup—coordinating so many particles is effectively impossible.

Yet observe that by reducing the number of particles to far fewer than a coffee cup’s, it becomes very easy to ensure that some particle jumps: if there is only one particle, the chance that it will jump is effectively 100%. (It would be more surprising if it didn’t.) “Casino operators,” as Taleb drily adds, “understand this well, which is why they never (if they do things right) lose money.” All they have to do to make money is to refuse to “let one gambler make a massive bet,” and instead “to have plenty of gamblers make a series of bets of limited size.” The secret of a casino is that it multiplies the number of gamblers—and hence the number of bets.

In this way, casino operators can guarantee that “the variations in the casino’s returns are going to be ridiculously small, no matter the total gambling activity.” By breaking up the betting into thousands, and even—over the course of time—millions or billions of bets, casino operators can ensure that their losses on any single bet are covered by some other bet elsewhere in the casino: there’s a reason that, as the now-folded website Grantland pointed out in 2014, during the previous 23 years “bettors have won twice, while the sportsbooks have won 21 times” in Super Bowl betting. The thing to do in order to make something “gamable”—or “bettable,” which is to say a commodity worth the house’s time—is to break its acts into as many discrete chunks as possible.
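The effect is easy to see in a toy simulation. The sketch below wagers the same total handle either as one enormous even-money bet or as thousands of small ones; the 52 percent house win probability, the handle, and the bet counts are illustrative assumptions of mine, not any real casino’s terms. Only in the many-bets case does the house’s return stay pinned near its expected edge.

```python
# Illustrative only: same total handle, different numbers of bets.
# With many small independent bets the house's profit clusters tightly
# around its expected edge; with one big bet it can lose everything.
import random

def house_profit(total_handle, n_bets, p_house_win=0.52, trials=1_000):
    """Average and worst-case house profit across simulated trials."""
    stake = total_handle / n_bets
    outcomes = []
    for _ in range(trials):
        profit = sum(stake if random.random() < p_house_win else -stake
                     for _ in range(n_bets))
        outcomes.append(profit)
    return sum(outcomes) / trials, min(outcomes)

for n in (1, 100, 10_000):
    avg, worst = house_profit(100_000, n)
    print(f"{n:>6} bets: average profit ~ {avg:>9.0f}, worst case ~ {worst:>10.0f}")
```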

The point, I think, can be easily seen: by breaking up a tennis match into smaller sets and games, gamblers can commodify the sport—make it “more bettable,” at least from the point of view of a sharp operator. “Gamblers may bet a total of $20 million, but you needn’t worry about the casino’s health,” Taleb says—because the casino isn’t accepting ten $2 million bets. Instead, “the bets run, say, $20 on average; the casino caps the bets at a maximum.” Rather than making one bet on a match’s outcome, gamblers can make a series of bets on the “games within the game”—bets that, as in the case of the casino, inevitably favor the house even without any match-fixing involved.

In professional tennis there are, as Louisa Thomas pointed out in Grantland a few years ago, every year “tens of thousands of professional matches, hundreds of thousands of games, millions of points, and patterns in the chaos.” (If there is match-fixing—and as mentioned there have been many allegations over the years—well then, you’re in business: an excellent player can even “tank” many, many early opportunities, allowing confederates to cash in, and still come back to put away a weaker opponent.) Anyway, just as Taleb says, casino operators inevitably wish to make bets as numerous as possible because, in the long run, that protects their investment—and tennis, what a co-inky-dink, has more opportunities for betting than virtually any sport you can name.

The august majesty of the New York Times, however, cannot imagine any of that. In its “How Tennis Got Its Scoring System” story, the paper quotes the speculations of amateur players who say things like: “The eccentricities are part of the fun,” and “I like the old-fashioned touches that tennis has.” It’s all so quaint, in the view of the Times. But since no one can account for tennis’ scoring system otherwise, and everyone admits not only that gambling flourished around lawn tennis’ predecessor game, jeu de paume (or real tennis), but also that the popularity of that sport was eventually brought down precisely because of gambling scandals—and tennis is to this day vulnerable to gamblers—the hypothesis that tennis is scored the way it is for the purposes of gambling makes much more sense than, say, tennis historian Elizabeth Wilson’s solemn pronouncement that tennis’ scoring system is “a powerful exception to the tendencies toward uniformity” that is so dreadfully, dreadfully common in our contemporary vale of tears.

The reality, of course, is that tennis’ scoring system was obviously designed to fleece suckers, not to entertain the twee viewers of Wes Anderson movies. Yet while such dimwittedness can be expected from college students or proper ladies who have never left the Upper East Side of Manhattan or Philadelphia’s Main Line, why is the New York Times so flummoxed by the historical “mystery” of it all? The answer, I suspect, lies in another, far more significant, sport that is played with a set of rules very similar to tennis’: one that likewise breaks up the action into many more discrete acts than seems strictly necessary. In this game, too, there is only an indirect connection between the central, defining act and wins and losses.

The name of that sport? Well, it’s really two versions of the same game.

One is called “the United States Senate”—and the other is called a “presidential election.”

Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
—Joseph Heller. Catch-22. (1961).

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore through (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the very complete documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as famous sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the civil war through 1958,” James observes, no team ever managed such a thing. Why? Well, it does seem readily apparent that scoring more runs than one’s opponent is a key component of winning baseball games, and winning baseball games is a key component of winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to score more runs than their opponents. On the other hand, it also seems possible to imagine a particular sort of baseball team winning a lot of one-run games but occasionally suffering blow-out losses—and yet, as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were very good at preventing their opponents from scoring them.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins, as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball; its cavernous power alleys were where home runs went to die, while its enormous foul-ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “deadball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. Finally, the 1964 Cardinals had, first, Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him), and, second, a schedule played shortly after major league baseball’s Rules Committee redefined the strike zone to be just slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs, 967 to 556—a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential “smooths out” the effects of chance in a way that the “lumpiness” of win-loss percentage does not.
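One standard way of converting run differential into an expected record is Bill James’s “Pythagorean expectation.” The sketch below applies it, purely as an illustration, to the 1984 Royals’ totals quoted above (673 runs scored, 684 allowed); James’s original exponent of 2 is an approximation, and later researchers have refined it.

```python
# Bill James's "Pythagorean expectation": win% ~ RS^2 / (RS^2 + RA^2).
# Applied here to the 1984 Royals' run totals quoted above, as an
# illustration of how run differential maps onto an expected record.

def pythagorean_win_pct(runs_scored: float, runs_allowed: float, exponent: float = 2.0) -> float:
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

pct = pythagorean_win_pct(673, 684)
print(f"Expected winning percentage: {pct:.3f}")        # just under .500
print(f"Expected wins over 162 games: {pct * 162:.1f}")
```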

That’s also, as it happens, just what the Law of Large Numbers does: first noted by mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle that is the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations there are, the more likely it is that all those bets will converge toward the expected result. In other words, one coin toss might come up heads or tails—but the more times the coin is thrown, the closer the overall proportion of heads is likely to come to one half.
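A few lines of simulation make the point concrete: as the number of tosses grows, the proportion of heads tightens around one half, even though the raw gap between heads and tails typically keeps wandering. (The sketch is illustrative only.)

```python
# The Law of Large Numbers in miniature: the *proportion* of heads converges
# toward one half as tosses accumulate, even while the raw heads-minus-tails
# gap tends to grow in absolute terms.
import random

for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} tosses: share of heads = {heads / n:.4f}, "
          f"heads minus tails = {2 * heads - n:+d}")
```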

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but rather by winning that peculiarly American institution, the Electoral College: an institution in which, as Will Hively noted, remarkably presciently, in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” However bizarre that sort of result actually is, according to some it’s just what makes the Electoral College worth keeping.

Hively was covering that argument in 1996: his Discover story described how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff tried to argue that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff observed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost Game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the constitutional convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that the task of baseball’s playoff system is to identify the best team in baseball—and hence that a system built the same way ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is to eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (Cf. “Striking Out” https://djlane.wordpress.com/2016/07/31/striking-out/.). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that the winner really was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. In fact, it may even be argued that baseball, by increasingly relying on a playoff system instead of the regular season standings, is increasing, not decreasing, the role of chance in the outcome of its championship process: whereas prior to 1969 the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams that had a run differential of less than 1.100 to 1, all of the teams, except the ones I have already mentioned, are from 1969 or after. Hence, from a mathematical perspective the World Series cannot be seriously argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
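The kind of calculation behind Mlodinow’s figures is simple enough to sketch: compute the probability that the better team actually wins a best-of-n series, for an assumed per-game edge. (The 55-45 edge below is an illustrative assumption of mine, not a figure lifted from Mlodinow’s text.)

```python
# Probability that the better team wins a best-of-n series, given an assumed
# per-game win probability p. Used here only to illustrate how long a series
# must be before luck stops deciding the outcome very often.
from math import comb

def better_team_wins(p: float, n_games: int) -> float:
    """Chance the team winning each game with probability p takes the series."""
    needed = n_games // 2 + 1                      # wins required to clinch
    # Sum over all the ways the series can end on the better team's final win.
    return sum(comb(needed - 1 + losses, losses) * p**needed * (1 - p)**losses
               for losses in range(needed))

for n in (7, 23, 269):
    print(f"best of {n:>3}: the 55/45 favorite wins {better_team_wins(0.55, n):.1%} of the time")
```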

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In the light of Gould’s argument about the panda’s thumb, which suggests that a lack of “fit” between the optimal solution (the human thumb) to a problem and the actual solution (the panda’s thumb) implies the presence of “history,” that would then intimate that the Electoral College is either the result of a lack of understanding of the mathematics of chance with regard to elections—or that the American system for electing presidents was not designed for the purpose that it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these, some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a link that would only have reinforced a connection already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, the marquis had, even before meeting Jefferson when the Virginian came to France as the American minister, already been for years a close friend of another man who would be a delegate to the Philadelphia meeting: Benjamin Franklin. Not all of the convention attendees, in short, may have been aware of the relationship between probability and elections, but at least some were—and arguably they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at the least, then, and quite possibly Madison as well, could not but have been aware of the mathematical dangers an Electoral College could create: they must have known that the least chancy way of selecting a leader—that is, the one an infallible engineer would design—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought that a direct popular vote was the best way to elect a president.
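Condorcet’s theorem is easy to verify numerically. In the sketch below, each voter independently chooses the “correct” option with probability just above one half (the 0.51 figure is an illustrative assumption); as the electorate grows, the chance that a simple majority gets it right climbs toward certainty.

```python
# Condorcet's Jury Theorem, numerically: with independent voters each correct
# with probability p > 1/2, a simple majority becomes more reliable as the
# electorate grows.
from math import comb

def majority_correct(p: float, n_voters: int) -> float:
    """Probability that a strict majority of n_voters (odd) chooses correctly."""
    return sum(comb(n_voters, k) * p**k * (1 - p)**(n_voters - k)
               for k in range(n_voters // 2 + 1, n_voters + 1))

for n in (1, 11, 101, 1001):
    print(f"{n:>4} voters, each correct with p = 0.51: "
          f"majority correct {majority_correct(0.51, n):.3f}")
```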

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents, for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in his The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, his proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of the principle of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement that is virtually identical to Condorcet’s mathematically influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote is how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23, and possibly as many as 269, games long—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.

So Small A Number

How chance the King comes with so small a number?
—William Shakespeare. The Tragedy of King Lear. Act II, Scene 4.

 

Who killed Michael Brown, in Ferguson, Missouri, in 2014? According to the legal record, it was police officer Darren Wilson who, in August of that year, fired twelve bullets at Brown during an altercation in Ferguson’s streets—the last being, said the coroner’s report, likely the fatal one. According to the protesters against the shooting (protests that helped propel the #BlackLivesMatter movement), the real culprit was the racism of the city’s police department and civil administration, a charge that gained credibility later when questionable emails written by, and sent to, city employees became public knowledge. In this account, the racism of Ferguson’s administration itself simply mirrored the racism that is endemic to the United States; Darren Wilson’s thirteenth bullet, in short, was racism. Yet according to the work of Radley Balko of the Washington Post, among others, the issue that lay behind Brown’s death was not racism per se, but rather a badly structured political architecture that fails to consider a basic principle of reality banally familiar to such bastions of sophisticated philosophic thought as Atlantic City casinos and insurance companies: the idea that, in the words of the New Yorker’s Malcolm Gladwell, “the safest and most efficient way to provide [protection]” is “to spread the costs and risks … over the biggest and most diverse group possible.” If that is so, then perhaps Brown’s killer was whoever caused Americans to forget that principle—in which case an argument could be made that the killer was a Scottish philosopher who lived more than two centuries ago: the sage of skepticism, David Hume.

Hume is well known in philosophical circles for, among other contributions, describing what later came to be called the “is-ought problem”: in his early work, A Treatise of Human Nature, Hume said his point was that “the distinction of vice and virtue is not founded merely on the relations of objects”—or, that just because reality is a certain way, that does not mean it ought to be that way. British philosopher G.E. Moore later called the act of mistaking is for ought the “naturalistic fallacy”: in 1903’s Principia Ethica, Moore asserted (as J.B. Schneewind of Johns Hopkins has paraphrased it) that “claims about morality cannot be derived from statements of facts.” It’s a claim, in other words, that serves to divide questions of morality, or values, from questions of science, or facts—and, as should be self-evident, the work of the humanities requires an intellectual claim of this form in order to exist. If morality, after all, were amenable to scientific analysis, there would be little reason for the humanities.

Yet there is widespread agreement among intellectuals that the humanities are not subject to scientific analysis, specifically because only the humanities can tackle subjects of “value.” Thus, for instance, we find professor of literature Michael Bérubé, of Pennsylvania State University—an institution noted for its devotion to truth and transparency—scoffing “as if social justice were a matter of discovering the physical properties of the universe” when faced with doubters like Harvard biologist E. O. Wilson, who has had the temerity to suggest that the humanities could learn something from the sciences. And, Wilson and others aside, even some scientists subscribe to some version of this split: the biologist Stephen Jay Gould, for example, echoed Moore in his essay “Non-Overlapping Magisteria” by claiming that while the “net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory),” the “net of religion”—which I take in this instance as a proxy for the humanities generally—“extends over questions of moral meaning and value.” Other examples could be multiplied.

How this seemingly-arid intellectual argument affected Michael Brown can be directly explained, albeit not easily. Perhaps the simplest route is by reference to the Malcolm Gladwell article I have already cited: the 2006 piece entitled “The Risk Pool.” In a superficial sense, the text is a social history about the particulars of how social insurance and pensions became widespread in the United States following the Second World War, especially in the automobile industry. But in a more inclusive sense, “The Risk Pool” is about what could be considered a kind of scientific law—or, perhaps, a law of the universe—and how, in a very direct sense, that law affects social justice.

In the 1940s, Gladwell tells us, the leader of the United Auto Workers union was Walter Reuther—a man who felt that “risk ought to be broadly collectivized.” Reuther thought that providing health insurance and pensions ought to be a function of government: that way, the largest possible pool of laborers would be paying into a system that could provide for the largest possible pool of recipients. Reuther’s thought, that is, most determinedly centered on issues of “social justice”: the care of the infirm and the aged.

Reuther’s notions, however, could also be thought of in scientific terms: as an instantiation of what statisticians call the “law of large numbers.” According to Caltech physicist Leonard Mlodinow, the law of large numbers describes “the way results reflect underlying probabilities when we make a large number of observations.” A more colorful way to think of it is the way trader and New York University professor Nassim Taleb puts it in his book Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets: there, Taleb observes that, were Russian roulette a game in which the survivors gained the savings of the losers, then “if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday—but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).” In general, the law of large numbers is how casinos (or investment banks) make money legally (and bookies make it illegally): by taking enough bets (which thereby cancel each other out), the institution, whether it is located in a corner tavern or on Wall Street, can charge customers for the privilege of betting—and never take the risk of failure that would accrue were that institution to bet one side or the other itself. Less concretely, the same law is what allows us to assert confident belief in scientific results: because they can be repeated again and again, we can trust that they reflect something real.
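Taleb’s grim example can be made concrete in a few lines (the size of the player pool below is my own illustrative assumption):

```python
# Taleb's Russian-roulette illustration in numbers: one player's chance of
# surviving 25 annual rounds, and the expected number of survivors in a
# large pool of players.
single_round_survival = 5 / 6                 # one chamber in six is loaded
rounds = 25                                   # once a year, age 25 to 50
p_survive = single_round_survival ** rounds   # roughly a one-in-a-hundred chance

players = 10_000
print(f"One player's chance of reaching fifty: {p_survive:.4f}")
print(f"Expected survivors among {players:,} players: {players * p_survive:.0f}")
```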

Reuther’s argument about social insurance and pensions more or less explicitly mirrors that law: the idea of social insurance, like that of a casino, is that by including enough people there will be enough healthy contributors paying into the fund to balance out the sick people drawing from it. In the same fashion, a pension fund works by ensuring that there are enough productive workers paying into the pension to cancel out the aged people receiving from it. Both casinos and pension funds, in other words, can only work by having enough people included in them—if there are too few, the fund or casino runs the risk that those drawing out will outnumber those paying in, at which point the operation fails. (In gambling, this is called “breaking the bank”; Ward Wilson pithily explains why that doesn’t happen very often in his learned tome, Gambling for Winners; Your Hard-Headed, No B.S., Guide to Gaming Opportunities With a Long-Term, Mathematical, Positive Expectation: “the casino has more money than you.”) Both casinos and insurance funds must have large numbers of participants in order to function: as numbers decrease, the risk of failure increases. Reuther therefore thought that the safest possible way to provide social protection for all Americans was to include all Americans.

Yet, to those following Moore’s concept of the “naturalistic fallacy,” Reuther’s argument would be considered an illicit intrusion of scientific ideas into the realm of politics, or “value.” Again, that might appear to be an abstruse argument between various schools of philosophers, or between varieties of intellectuals, scientific and “humanistic.” (It’s an argument that, in addition to ceding to the humanities the domain of “value,” also cedes categories like stylish writing—as if scientific arguments could only be expressed by equations rather than by quality of expression, and as if there weren’t scientists who were brilliant writers and humanist scholars who were awful ones.) But while in one sense this argument takes place in very rarefied air, in another it takes place on the streets where we live. Or, more specifically, the streets where Michael Brown was shot and killed.

The problem of Ferguson, Radley Balko’s work for the Washington Post tells us, is not one of “race,” but instead one of poor people—more exactly, of what happens when poor people are excluded from larger population pools, or in other words when the law of large numbers is excluded from discussions of public policy. Balko’s story draws attention to two inarguable facts. The first is that there “are 90 municipalities in St. Louis County”—Ferguson’s county—and nearly all of them “have their own police force, mayor, city manager and town council,” while 81 of those towns also have their own municipal courts capable of sentencing lawbreakers to pay fines. The second concerns Missouri’s second-largest urban county by population, Kansas City’s Jackson County, which is “geographically larger than St. Louis County and has about two-thirds the population”—and yet “has just 19 municipalities, and just 15 municipal courts.” Comparing the two counties, that is, shows that St. Louis County is far more segmented than Jackson County: there are many more population pools in the one than in the other.

Given what is known about the law of large numbers, then, it might not be surprising that a number of the many municipalities of St. Louis County are worse off than the few municipalities of Jackson County: in St. Louis County some towns, Balko reports, “can derive 40 percent or more of their annual revenue from the petty fines and fees collected by their municipal courts”—rather than, say, property taxes. That, it seems likely, is because instead of many property owners paying taxes, there are a large number of renters paying rent to a small number of landlords, who in turn are wealthy enough to minimize their tax burden by employing tax lawyers and other maneuvers. Because these towns thus cannot depend on property tax revenue, they must instead depend on the fines and fees their courts can recoup from residents: an operation that, because of the chaos it necessarily brings to the lives of those citizens, usually results in more poverty. (It’s difficult to apply for a job, for example, if you are in jail for failing to pay a parking ticket.) Yet if the law of large numbers is excluded a priori from political discussion—as some in the humanities insist it must be, whether out of disciplinary self-interest or some other reason—that necessarily implies that residents of Ferguson cannot address the real causes of their misery, a fact that may explain why those addressing the problems of Ferguson focus so much on “racism” rather than on the structural issues raised by Balko.

The trouble, however, with identifying “racism” as the explanation for Michael Brown’s death is that it leads to a set of “solutions” that do not address the underlying issue. In the November following Brown’s death, for example, Trymaine Lee of MSNBC reported that the federal Justice Department “held a two-day training with St. Louis area police on implicit racial bias and fair and impartial policing”—as if the problem of Ferguson were wholly the fault of the police department, or even of the town administration as a whole. Not long afterwards, the Department of Justice reported (according to Ray Sanchez of CNN) that, while Ferguson is 67% African-American, in the two years prior to Brown’s death “85% of people subject to vehicle stops by Ferguson police were African-American,” while “90% of those who received citations were black and 93% of people arrested were black”—data that seem to imply that, were those numbers only closer to 67%, there would be no problem in Ferguson.

Yet even if the people arrested in Ferguson were arrested in proportion to their share of the population, that would have no effect on the reality that—as Mike Maciag of Governing reported shortly after Brown’s death—“court fine collections [accounted] for one-fifth of [Ferguson’s] total operating revenue” in the years leading up to the shooting. The problem of Ferguson isn’t that its residents are black, and so the town’s problems cannot be solved by, say, firing all the white police officers and hiring black ones. Instead, Ferguson’s difficulty is not just that the town’s citizens are poor—it is that they are politically isolated.

There is, in sum, a fundamental reason that the doctrine of “separate but equal” is not merely bad for American schools, as the Supreme Court held in the 1954 decision of Brown v. Board of Education, the landmark case that began the dismantling of Jim Crow in the American South. That reason is the same at all scales: from the particle collider at CERN exploring the essential constituents of the universe to the roulette tables of Las Vegas to the Social Security Administration, the greater the number of inputs, the greater the certainty, and hence safety, of the results. Instead of affirming that law of the universe, however, the work of people like Michael Bérubé and others is devoted to questioning whether universal laws exist—in other words, to resisting the encroachment of the sciences on their turf. Perhaps that resistance is somehow helpful in some larger sense; perhaps it is true that, as is often claimed, the humanities enlarge our sense of what it means to be human, among other frequently described benefits—I make no claims on that score.

What’s absurd, however, is the monopolistic claim sometimes retailed by Bérubé and others that the humanities have an exclusive right to political judgment: if Michael Brown’s death demonstrates anything, it ought (a word I use without apology) to show that, by promoting the idea of the humanities as distinct from the sciences, humanities departments have in fact collaborated (another word I use without apology) with people who have a distinct interest in promoting division and discord for their own ends. That doesn’t mean, of course, that anyone who has ever read a novel or seen a film helped to kill Michael Brown. But just as institutions that cover up child abuse—like the Catholic Church or certain institutions of higher learning in Pennsylvania—bear a responsibility to their victims, so too is there a danger in thinking that the humanities have a monopoly on politics. Darren Wilson did have a thirteenth bullet, though it wasn’t racism. Who killed Michael Brown? Why, if you think that morality should be divided from facts … you did.

Art Will Not Save You—And Neither Will Stanley

 

But I was lucky, and that, I believe, made all the difference.
—Stanley Fish. “My Life Report.” The New York Times, 31 Oct. 2011.

 

Pfc. Bowe Bergdahl, United States Army, is the subject of the new season of Serial, the podcast (spun off from the public-radio program This American Life) that tells “One story. Week by week.” as the advertising tagline has it. Serial is devoting a season to Bergdahl because of what he chose to do on the night of 30 June 2009: as the show reports, that night he walked off his “small outpost in eastern Afghanistan and into hostile territory,” where he was captured by Taliban guerrillas and held prisoner for nearly five years. Bergdahl’s actions have led some to call him a deserter and a traitor; as a result of leaving his unit, Bergdahl faces a possible life sentence from a military court. But the line Bergdahl crossed when he stepped beyond the concertina wire and into the desert of Paktika Province was far greater than the line between a loyal soldier and a criminal. When Bowe Bergdahl wandered into the wilderness, he also crossed the line between the sciences and the humanities—and demonstrated why the political hopes some people place in the humanities are not only illogical, but arguably hold up actual political progress.

Bergdahl can be said to have crossed that line because what happens to him when he is tried by a military court will likely turn on the intent behind his act: in legal terms, the question of mens rea, Latin for “guilty mind.” Intent is one of the necessary elements prosecutors must prove to convict Bergdahl of desertion: according to Article 85 of the Uniform Code of Military Justice, to be convicted of desertion Bergdahl must be shown to have had the “intent to remain away” from his unit “permanently.” It’s this matter of intent that demonstrates the difference between the humanities and the sciences.

The old devil, Stanley Fish, once demonstrated that border in an essay in the New York Times designed to explain what it is that literary critics, and other people who engage in interpretation, do, and how it differs from other lines of work:

Suppose you’re looking at a rock formation and see in it what seems to be the word ‘help.’ You look more closely and decide that, no, what you’re seeing is an effect of erosion, random marks that just happen to resemble an English word. The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

To put it another way, matters of interpretation concern agents who possess intent: any other kind of discussion is of no concern to the humanities. Conversely, the sciences can be said to concern all those things not produced by an agent, or more specifically an agent who intended to convey something to some other agent.

It’s a line that seems clear enough, even in what might be marginal cases: when a beaver builds a dam, surely he intends to build that dam, but it also seems inarguable that the beaver intends nothing more to be conveyed to other beavers than, “here is my dam.” More questionable cases might be when, say, a bird or some other animal performs a “mating dance”: surely the bird intends his beloved to respond, but still it would seem ludicrous to put a scholar of, say, Jane Austen’s novels to the task of recovering the bird’s message. That would certainly be overkill.

Yes, yes, you will impatiently say, but what has that to do with Bergdahl? The answer, I think, might be this: if Bergdahl’s lawyer had a scientific, instead of a humanistic, sort of mind, he might ask how many soldiers were stationed in Afghanistan during Bergdahl’s time there, and how many overall. The reason a scientist would ask that question about, say, a flock of birds he was studying is because, to a scientist, the overall numbers matter. The reason they matter demonstrates not only what the difference between science and the humanities is, but also why the faith some place in the political utility of the humanities is ridiculous.

The reason why the overall numbers of the flock would matter to a scientist is because sample size matters: a behavior exhibited by one bird in a flock of twelve is probably more significant than the same behavior exhibited by one bird in a flock of millions. As Nassim Taleb put it in his book Fooled By Randomness, how impressive it is if a monkey has managed to type a verbatim copy of the Iliad “Depends On The Number of Monkeys.” “If there are five monkeys in the game,” Taleb elaborates, “I would be rather impressed with the Iliad writer”—but if, on the other hand, “there are a billion to the power one billion monkeys I would be less impressed.” Or to put it in another context, the “greater the number of businessmen, the greater the likelihood of one of them performing in a stellar manner just by luck.” What matters to a scientist, in other words, isn’t just what a given bird does—it’s how big the flock was in the first place.
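
The arithmetic behind Taleb’s point is easy to check for yourself. Here is a short Python sketch, my own illustration with an invented per-player probability rather than a figure from Taleb, showing that the chance of somebody, somewhere, posting a “stellar” run by pure luck climbs toward certainty as the field grows.

```python
# Illustration only: assume each player has a 1-in-1,024 chance of a lucky streak
# (ten coin-flip years in a row). The probability that AT LEAST ONE player in the
# field gets that streak is 1 - (1 - p)**N, which grows quickly with N.
p_streak = 1 / 1024

for n_players in (5, 100, 10_000, 1_000_000):
    p_at_least_one = 1 - (1 - p_streak) ** n_players
    print(f"{n_players:>9,} players: {p_at_least_one:7.2%} chance someone looks brilliant by luck")
```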

To a lawyer, of course, none of that would be significant: the court that tries Bergdahl will not view that question as relevant to determining whether he is guilty of the crime of desertion. That is because, as a discipline concerned with interpretation, the law will have ruled such a question out of court, as we say, before the court has even met: to ask how many birds were in the flock when one of them behaved strangely is to have ceased, a priori, to consider that bird as an agent, because the question implies that what matters is simply the role of chance rather than any intent on the part of the bird. Any lawyer who brought up the fact that Bergdahl was the only one out of so many thousands of soldiers to have done what he did, without taking up the matter of Bergdahl’s intent, would not be acting as a lawyer.

By the way, in case you’re wondering, roughly 65,000 American soldiers were in Afghanistan by early October of 2009, amid the “surge” ordered by President Barack Obama shortly after taking office. The number, according to a contemporary story in The Washington Post, would be “more than double the number there when Bush left office”—which is to say that when Bergdahl left his tiny outpost at the end of June that year, the military was in the midst of a massive buildup of troops. The sample size, in Taleb’s terms, was growing rapidly at the time—with what effect on Bergdahl’s situation, if any, I await enlightenment.

Whether that matters to Bergdahl’s story—in Serial or anywhere else—remains to be seen; as a legal matter it would be very surprising if any military lawyer brought it up. What that, in turn, suggests is that the caution with which Stanley Fish has greeted those in the profession of literary study who hope to apply such work to actual political change is thoroughly justified: “when you get to the end” of the road many within the humanities have been traveling at least since the 1960s or ’70s, Fish has remarked, “nothing will have changed except the answers you might give to some traditional questions in philosophy and literary theory.” It’s a warning whose force may only now be peaking, as the nation realizes that the great political story of our time has not been the minor-league struggles within academia, but rather the story of how a small number of monkeys have managed to seize huge proportions of the planet’s total wealth: as Bernie Sanders, the political candidate, tweeted recently in a claim rated “True” by Politifact, “the Walton family of Walmart own more wealth than the bottom 40 percent of America.”

In that story, the intent of the monkeys hardly matters.

Talk That Talk

Talk that talk.
—John Lee Hooker. “Boom Boom.” 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, a lot of things exist that have little rational ground for existing, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for and solidarity with society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand-in-hand with the powerful—as it must, given that it is dependent upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it has depended on patronage of one sort or another. Hence the potential intellectual weakness of basing a “left” around the idea of culture: the actual structure of the world of culture simply is the way the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth” after all was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better to have one person with a million dollars. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper one—at least if one considers the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that wealth is no guarantee of intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area that he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emp. original]; and also, “small size equals safe choices.” That is, of the collections he studied, he found that the smaller they were the more homogenous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture” then is just the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie’s theory this is all to the good—on the theory that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever is a contest’s winner gets everything—and what that means is, as Taleb says about the art world, “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who happen to have caught some small edge tends to magnify small initial differences. What that means is that someone who might possess more overall merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, that society would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their position by sheer happenstance rather than merit.

A society, in other words, that wants to maximize the talent available to it—and that, presumably, is the goal—should not be trying to bury potential talent, but instead to expose as much of it as possible: to get it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually emerge only after years and years of training and experience, in “culture” such performers many times either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, then, it’s hard to think of a worse model for a leftish kind of politics than culture, which perhaps explains why, despite the fact that our universities are bulging with professors of art and literature proclaiming “power to the people,” the United States is as unequal a place today as it has been since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability of the outcome, and hence the greater the potential role of chance. It’s worse when one considers that the court is an institution that only irregularly gains new members: appointing a new Supreme Court justice depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of the most important decisions of any nation potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, requires a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet, Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.


At Play In The Fields Of The Lord

Logo for 2015 US Amateur at Olympia Fields Country Club

 

Behold, I send you forth as sheep in the midst of wolves:
be ye therefore wise as serpents, and harmless as doves.
—Matthew 10:16

Now that the professional, Open tournaments are out of the way, the U.S. Amateur approaches. The Amateur has always been a symbol of wealth and discrimination—it was a tournament invented specifically to keep out the riff-raff of professional golfers—and the site of this year’s edition might be considered particularly unfortunate, falling as it does just over a year after the Michael Brown shooting in Ferguson, Missouri: Olympia Fields, in Chicago’s south suburbs, is a relatively wealthy enclave amid a swath of exceedingly poor villages and towns, terrain very like that of the St. Louis suburbs a few hundred miles away. Yet there’s a deeper irony at work here that might be missed even by those who’d like to point out that similarity of setting: the format of the tournament, match play, highlights precisely what the real message of the Brown shooting was. That real message, the one that is actually dangerous to power, wasn’t the one shouted by protestors—that American police departments are “racist.” The really dangerous message is the one echoed by the Amateur: a message that, read properly, tells us that our government’s structure is broken.

The later rounds of the U.S. Amateur are played under golf’s match play, rather than stroke play, rules—a difference that will seem arcane to those unfamiliar with the sport, but a very significant one nevertheless. In stroke play, competitors play whatever number of holes are required—in professional tournaments, usually 72—and count up however many strokes each took: the player with the fewest strokes wins. Match play is not the same, in the first place because in stroke play each golfer is effectively playing against every other player in the field, since all the strokes of every player count. But this is not so in match play.

Match play consists of, as the name suggests, matches: once the field is cut to the 64 players with the lowest scores after an initial two-day stroke play tournament, each of those 64 contestants plays an 18-hole match against one other contestant. The winner of each of these matches then moves on, until there is a champion—a single-elimination tournament exactly like the NCAA basketball tournament held every year in March. The winner of each match, as John Van der Borght says on the website of the United States Golf Association, “is the player who wins the most holes.” That is, what matters on every hole is just whether the golfer has shot a lower score than the opponent for that hole, not overall. Each hole starts the competition again, in other words—like flipping coins, what happened in the past is irrelevant. It’s a format that might sound hopeful, because on each hole whatever screw-ups a player commits are consigned to the dustbin of history. In fact, however, it’s just this element that makes match play the least egalitarian of formats—and ties it to Ferguson.

Tournaments conducted under match play rules are always subject to a kind of mathematical oddity called a Simpson’s Paradox: such a paradox occurs when, as the definition on Wikipedia says, it “appears that two sets of data separately support a certain hypothesis, but, when considered together, they support the opposite hypothesis.” For example, as I have mentioned in this blog before, in the first round of the PGA Tour’s 2014 Accenture Match Play tournament in Tucson, an unknown named Pedro Larrazabal shot a 68 to Hall-of-Famer Ernie Els’ 75—but because they played different opponents, Larrazabal was out of the tournament and Els was in. Admittedly, even with such an illustration the idea might still sound opaque, but the meaning can be seen by considering, for example, the tennis player Roger Federer’s record versus his rival Rafael Nadal.

Roger Federer has won 17 major championships in men’s tennis, a record—and yet many people argue that he is not the Greatest Of All Time (G.O.A.T.). The reason those people can argue that is because, as Michael Steinberger pointed out in the New York Times not long ago, Federer “has a losing record against Nadal, and a lopsided one at that.” Steinberger then proceeded to argue why that record should be discarded and Federer should be called the “GOAT” anyway. But weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—what an anonymous blogger did in 2009: a feat that demonstrates just what a Simpson’s Paradox is, and how it might apply both to the U.S. Amateur and Ferguson, Missouri.

What that blogger did, on a blog entitled SW19—a reference to the United Kingdom’s postal code for Wimbledon, the great tennis arena—was he counted up the points.

Let me repeat: he counted up the points.

That might sound trivial, of course, but as the writer of the SW19 blog realized, tennis is a game that abounds in Simpson’s Paradoxes: that is, it is a game in which it is possible to score fewer points than your opponent, but still win the match. Many people don’t realize this: it might be expected, for example, that because Nadal has an overwhelmingly dominant win-loss record versus Federer, he must also have won an equally dominant number of points from the Swiss champion. But an examination of the points scored in each of the matches between Federer and Nadal demonstrates that in fact the difference between them was minuscule.

The SW19 blogger wrote his post in 2009; at that time Nadal led Federer by 13 matches to 7, a 65 percent winning edge for the Spaniard. Of those 20 matches, Nadal won the 2008 French Open—played on Nadal’s best surface, clay—in straight sets, 6-1, 6-3, 6-0. In those 20 matches, the two men played 4,394 total points: that is, 4,394 occasions on which one player served and the two rallied until one of them failed to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—like golf’s stroke play format, in which every “point” (stroke) is simply added to the total and the winner has the fewest points—then it might be expected that Nadal had won about 65 percent of those 4,394 points played, which would be about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have about a 65 percent edge in total points: the point total, as opposed to the match record, between the two ought to be about 2,856 to 1,538.

Yet this, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal, 2,221, and Federer, 2,173. In other words, even including the epic beating at Roland Garros in 2008, Nadal had only beaten Federer by a total of 48 points over the course of their careers—less than one percent of all the points scored. Not merely that, but if that single match at the 2008 French Open is excluded, the margin becomes eight points. The mathematical difference between Nadal and Federer, thus, is the difference between a couple of motes of dust on the edge of a coin while it’s being flipped—if what is measured is the act that is the basis of the sport, the act of scoring points. In terms of points scored, Nadal’s edge is about half a percentage point—and most of that edge was generated by a single match. But Nadal had a 65 percent edge in their matches.

How did that happen? The answer is that the structure of tennis scoring is similar to that of match play in golf: the relation between wins and points isn’t direct. In fact, as the SW19 blogger shows, of the twenty matches Nadal and Federer had played to that moment in 2009, Federer had actually scored more points than Nadal in three of them—and still lost the match. If there were a direct relation between points and wins in tennis, that is, the record between Federer and Nadal would actually stand even, at 10-10, instead of what it was in reality, 13-7—a record that would have accurately captured the real point differential between them. But because what matters in tennis isn’t—exactly—the total number of points you score, but instead the numbers of games and sets you win, it is entirely possible to score more points than your opponent in a tennis match—and still lose. (Or, the converse.)
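
To see concretely how nested scoring produces such results, here is a small Python sketch. The match below is invented purely for illustration (it is not the Federer-Nadal data), and it assumes, to keep the arithmetic simple, that every game is won four points to two; even so, the player who wins the match ends up with fewer total points than the player who loses it.

```python
# An invented three-set match (not the real Federer-Nadal data) showing how
# tennis' nested scoring permits a Simpson's Paradox: Player A wins the match,
# yet Player B wins more points. Assumption: every game is won 4 points to 2.
sets = [
    ("A", 6, 4),  # A takes the first set 6-4
    ("B", 6, 0),  # B takes the second set 6-0
    ("A", 6, 4),  # A takes the third set 6-4
]

points = {"A": 0, "B": 0}
sets_won = {"A": 0, "B": 0}
for winner, winner_games, loser_games in sets:
    loser = "B" if winner == "A" else "A"
    sets_won[winner] += 1
    points[winner] += winner_games * 4 + loser_games * 2  # 4 points per game won, 2 per game lost
    points[loser] += loser_games * 4 + winner_games * 2

print("Sets won:  ", sets_won)   # {'A': 2, 'B': 1} -- A wins the match
print("Points won:", points)     # {'A': 76, 'B': 80} -- yet B outscores A
```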

The reason that is possible, as Florida State University professor Ryan Rodenberg put it in The Atlantic not long ago, is “tennis’ decidedly unique scoring system.” (Actually, not unique, because as might be obvious by now match play golf is scored similarly.) In sports like soccer, baseball, or stroke play golf, as sports psychologist Allen Fox once wrote in Tennis magazine, “score is cumulative throughout the contest … and whoever has the most [or, in the case of stroke play golf, least] points at the end wins.” But in tennis things are different: “[i]f you reach game point and win it, you get the entire game while your opponent gets nothing—all the points he or she won in the game are eliminated.” Just as what matters in tennis is the game, not the point, in match play golf all that matters is the hole, and not the stroke.

Such scoring systems breed Simpson’s Paradoxes: that is, results that don’t reflect the underlying value a scoring system is meant to reflect—we want our games to be won by the better player, not the lucky one—but instead are merely artifacts of the system used to measure. The point (ha!) can be shown by way of an example taken from a blog written by one David Smith, head of marketing for a company called Revolution Analytics, about U.S. median wages. In that 2013 post, Smith reported that the “median US wage has risen about 1%, adjusted for inflation,” since 2000. But was that statistic important—that is, did it measure real value?

Well, what Smith found was that wages for high school dropouts, high school graduates, high school graduates with some college, college graduates, and people with advanced degrees all fell over the same period. Or, as Smith says, “within every educational subgroup, the median wage is now lower than it was in 2000.” But how can it be that “overall wages have risen, but wages within every subgroup have fallen?” The answer is similar to the reason Nadal had a 65 percent winning margin against Federer: although there are more college graduates now than in 2000, the wages of college graduates have fallen less (1.2 percent) than those of, say, high school dropouts (7.9 percent). So despite the fact that everyone is poorer—everyone is receiving lower wages, adjusted for inflation—than in 2000, mendacious people can say wages are actually up. Wages are up—if you “compartmentalize” the numbers in just the way that reflects the story you’d like to tell.
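
A toy version of the same paradox can be worked out in a few lines of Python. The shares and wages below are invented for illustration (they are not Smith’s figures, and they use group averages rather than medians), but they show the mechanism: every subgroup’s wage falls, yet the overall number rises, simply because the workforce has shifted toward the better-paid group.

```python
# Invented numbers, for illustration only (not Smith's data): each group's average
# wage falls between 2000 and 2013, yet the overall average rises, because the
# workforce composition shifts toward the better-paid "degree" group.
year_2000 = {"no degree": (0.30, 30_000), "degree": (0.70, 50_000)}  # (share of workers, avg wage)
year_2013 = {"no degree": (0.15, 27_600), "degree": (0.85, 49_400)}  # both wages are lower

def overall(groups):
    """Workforce-wide average wage: the share-weighted sum of group wages."""
    return sum(share * wage for share, wage in groups.values())

for group in year_2000:
    change = year_2013[group][1] / year_2000[group][1] - 1
    print(f"{group:>9}: {change:+.1%} within-group change")

print(f"  overall: {overall(year_2013) / overall(year_2000) - 1:+.1%} change")
```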

Now, while the story about American wages might suggest a connection to Ferguson—and it does—that isn’t the connection between the U.S. Amateur and Ferguson, Missouri, I’d like to discuss. That connection is this: the trouble with the U.S. Amateur is that it is conducted under match play, a format that permits Simpson’s Paradox results, and Simpson’s Paradoxes are, at heart, boundary disputes—arguments about whether to divide up the raw data into smaller piles or present them as one big pile. That is the real link to Ferguson, because the real issue behind Darren Wilson’s shooting of Michael Brown isn’t racism—or at least, the way to solve it isn’t to talk about racism. Instead, it’s to talk about borders.

After Ferguson police officer Darren Wilson shot Michael Brown last August, the Department of Justice issued a report that was meant, as Zoë Carpenter of The Nation wrote this past March, to “address the roots of the police force’s discriminatory practices.” That report held that those practices were not “simply the result of racist cops,” but instead stemmed “from the way the city preys on residents financially, relying on the fines that accompany even minor offenses to balance its budget.” The report found an email from Ferguson’s finance director to the town’s police chief that, Carpenter reported, said “unless ticket writing ramps up significantly before the end of the year, it will be hard to significantly raise collections next year.” The finance director’s concerns were justified: only slightly less than a quarter of Ferguson’s total budget was generated by traffic tickets and other citations. The continuing operation of the town depends on revenue raised by the police—a need, in turn, that drives the kind of police zealotry that the Department of Justice said contributed to Brown’s death.

All of which might seem quite far from the concerns of the golf fans watching the results of the matches at the U.S. Amateur. Yet consider a town not far from Ferguson: Beverly Hills, Missouri. Like Ferguson, Beverly Hills is located to the northwest of downtown St. Louis, and like Ferguson it is a majority black town. But where Ferguson has over 20,000 residents, Beverly Hills has only around 600 residents—and that size difference is enough to make the connection to the U.S. Amateur’s format of play, match play, crystalline.

Ferguson after all is not alone in depending so highly on police actions for its revenues: Calverton Park, for instance, is another Missouri “municipality that last fiscal year raised a quarter of its revenue from traffic fines,” according to the St. Louis Post-Dispatch. Yet while Ferguson, like Calverton Park, raised about a quarter of its budget from police actions, Beverly Hills raised something like half of its municipal budget from traffic and other kinds of citations, as a story in The Washington Post reported. All these little towns are dependent on traffic tickets to meet their budgets; “Most of the roughly ninety municipalities in St. Louis County,” Carpenter reports in The Nation, “have their own courts, which … function much like Ferguson’s: for the purpose of balancing budgets.” Without even getting into the issue of the fairness of property taxes or sales taxes as a basis for municipal budgeting, it seems obvious that depending on traffic tickets as a major source of revenue is poor planning at best. Yet without the revenue provided by cops writing tickets—and, as a result of Ferguson, the state of Missouri is considering limiting the percentage of a town’s budget that can be raised by such tickets, as the Post-Dispatch article says—many of these towns will simply fail. And that is the connection to the U.S. Amateur.

What these towns are having to consider, in other words, is, according to the St. Louis Post-Dispatch, an option mentioned by St. Louis County Executive Steve Stenger last December: during an interview, the official said that “the consolidation of North County municipalities is what we should be talking about” in response to the threat of cutting back reliance on tickets. Small towns like Beverly Hills may simply be too small: they create too little revenue to support themselves without a huge effort on the part of the police force to find—and thus, in a sense, create—what are essentially taxable crimes. The way to solve the problem of a “racist” police department, in other words, might not be to conduct workshops or seminars in order to “retrain” the officers on the frontline, but instead to redraw the political boundaries of the greater St. Louis metropolitan area.

That, at least, is a solution that our great-grandparents considered, as an article by writer Kim-Mai Cutler for TechCrunch this past April remarked. Examining the historical roots of the housing crisis in San Francisco, Cutler discovered that in “1912, a Greater San Francisco movement emerged and the city tried to annex Oakland,” a move Oakland resisted. Yet as a consequence of not creating a Bay-wide government, Cutler says, “the Bay Area’s housing, transit infrastructure and tax system has been haunted by the region’s fragmented governance” ever since: the BART (Bay Area Rapid Transit) system, for example, as originally designed “would have run around the entire Bay Area,” Cutler says, “but San Mateo County dropped out in 1961 and then Marin did too.” Many of the problems of that part of Northern California could be solved, Cutler thus suggests via this and other examples—contra the received wisdom of our day—by bigger, not smaller, government.

“Bigger,” that is, in the sense of “more consolidated”: by the metric of sheer numbers, a government built to a larger scale might not employ as many people as do the scattered suburban governments of America today. But what such a government would do is capture all of the efficiencies of economies of scale available to a larger entity—thus, it might be in a sense smaller than the units it replaced, but definitely would be more powerful. What Missourians and Californians—and possibly others—may be realizing then is that the divisions between their towns are like the divisions tennis makes around its points, or match play golf makes around its strokes: dividing a finite resource, whether points or strokes or tax dollars (or votes), into smaller pools creates what might be called “unnatural,” or “artificial,” results—i.e., results that inadequately reflect the real value of the underlying resource. Just like match play can make Ernie Els’ 75 look better than Pedro Larrazabal’s 68, or tennis’ scoring system can make Rafael Nadal look much better than Federer—when in reality the difference between them is (or was) no more than a sliver of a gnat’s eyelash—dozens of little towns dissipate the real value, economic and otherwise, of the people that inhabit a region.

That’s why when Eric Holder, Attorney General for the United States, said that “the underlying culture” of the police department and court system of Ferguson needs to be reformed, he got it exactly wrong. The problems in St. Louis and San Francisco, the evidence suggests, are created not because government is getting in the way, but because government isn’t structured correctly to channel the real value of the people: scoring systems that leave participants subject to the vagaries of Simpson’s Paradox results might be perfectly fine for games like tennis or golf—where the downsides are minimal—but they shouldn’t be how real life gets scored, and especially not in government. Contra Holder, the problem is not that the members of the Ferguson police department are racists. The problem is that the government structure requires them, like occupying soldiers or cowboys, to view their fellow citizens as a kind of herd. Or, to put the matter in a pithier way: a system that depends on the harvesting of sheep will turn its agents into wolves. Instead of diluting the effects of racism—as a big enough government would through its very size—multiplying struggling towns only encourages racism: instead of diffusing racism, a system broken into little towns focuses it. The real problem of Ferguson then—the real problem of America—is not that Americans are systematically discriminatory: it’s that the systems used by Americans aren’t keeping the score right.

Lest The Adversary Triumph

… God, who, though his power
Creation could repeat, yet would be loath
Us to abolish, lest the Adversary
Triumph …
—John Milton. Paradise Lost, Book XI.

… the literary chit-chat which makes the reputation of poets boom and crash in an imaginary stock exchange …
—Northrop Frye. Anatomy of Criticism.

A list of articles for “liberal” magazine Salon.com. The first is an attack on Darwinians like Richard Dawkins; the others ridicule creationists for being anti-Darwinian

 

“Son, let me make one thing clear,” Air Force General Curtis LeMay, the longtime head of the Strategic Air Command, supposedly said sometime in the 1950s to a young officer who repeatedly referred to the Soviet Union as the “enemy” during a presentation about Soviet nuclear capabilities. “The Soviet Union,” the general explained, “is our adversary. The enemy is the United States Navy.” Similarly, the “sharp rise in U.S. inequality, especially at the very top of the income scale” in recent years—as Nobel Prize winner Paul Krugman called it, in 1992—might equally be the result of confusion: as Professor Walter Benn Michaels of the University of Illinois at Chicago has written, “the intellectual left has responded to the increase in economic inequality by insisting on the importance of cultural identity.” The simplest explanation for that disconnect, I’d suggest, is that while the “intellectual left” might talk a good game about “speaking truth to power” and whatnot, “power” is just their adversary. The real enemy is science, especially Darwinian biology—and, yet more specifically, a concept called “survivorship bias”—and that enmity may demonstrate that the idea of an oppositional politics based around culture, rather than science, is absurd.

Like a lot of American wars, this one is often invisible to the American public, partly because when academics like University of Chicago English professor W.J.T. Mitchell do write for the public,  they often claim their modest aim is merely to curb scientific hubris. As Mitchell piously wrote in 1998’s The Last Dinosaur Book: The Life and Times of a Cultural Icon, his purpose in that book was merely to note that “[b]iological explanations of human behavior … are notoriously easy, popular, and insidious.” As far as that goes, of course, Mitchell is correct: the history of the twentieth century is replete with failed applications of Darwinian thought to social problems. But then, the twentieth century is replete with a lot of failed intellectual applications—yet academic humanists tend to focus on blaming biology for the mistakes of the past.

Consider for example how many current academics indict a doctrine called “social Darwinism” for the social ills of a century ago. In ascending order of sophistication, here is Rutgers historian Jackson Lears asserting from the orchestra pit, in a 2011 review of books by well-known atheist Sam Harris, that the same “assumptions [that] provided the epistemological foundations for Social Darwinism” did the same “for scientific racism and imperialism,” while from the mezzanine level of middlebrow popular writing here is William Kleinknecht, in The Man Who Sold The World: Ronald Reagan and the Betrayal of Main Street America, claiming that in the late nineteenth and early twentieth centuries, “social Darwinism … had nourished a view of the lower classes as predestined by genetics and breeding to live in squalor.” Finally, a diligent online search discovers, in the upper balcony, Boston University student Evan Razdan’s bald assertion that at the end of the nineteenth century, “Darwinism became a major justification for racism and imperialism.” I could multiply the examples: suffice it to say that for a good many in academe, it is now gospel truth that Darwinism was on the side of the wealthy and powerful during the early part of the twentieth century.

In reality, however, Darwin was usually thought of as on the side of the poor, not the rich, in the early twentieth century. For investigative reporters like Ida Tarbell, whose The History of the Standard Oil Company is still today the foundation of muckraking journalism, “Darwin’s theory [was] a touchstone,” according to Steve Weinberg’s Taking on the Trust: The Epic Battle of Ida Tarbell and John D. Rockefeller. The literary movement of the day, naturalism, drew its characters “primarily from the lower middle class or the lower class,” as Donald Pizer wrote in Realism and Naturalism in Nineteenth-Century American Fiction, and even a scholar with a pro-religious bent like Doug Underwood must admit, as he does in From Yahweh to Yahoo: The Religious Roots of the Secular Press, that the “naturalists were particularly influenced by the theories of Charles Darwin.” Progressive philosopher John Dewey wrote in 1910’s “The Influence of Darwinism on Philosophy” that Darwin’s On the Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” (As American philosopher Richard Rorty has noted, Dewey and his pragmatists began “from a picture of human beings as chance products of evolution.”) Finally, Karl Marx—a person no one has ever thought to be on the side of the wealthy—thought so highly of Darwin that he exclaimed, in a letter to Friedrich Engels, that On the Origin of Species “contains the basis in natural history for our view.” To blame Darwin for the inequality of the Gilded Age is like blaming Smokey the Bear for forest fires.

Even aside from the plain facts of history, however, you’d think the sheer absurdity of blaming Darwin for the crimes of the robber barons would be self-evident. If a thief cited Matthew 5:40—“And if any man will sue thee at the law, and take away thy coat, let him have thy cloke also”—to justify his theft, nobody would think that he had somehow thereby indicted Jesus. Logically, the idea a criminal cites to justify his crime makes no difference either to the fact of the crime or to the idea: that is why the advocates of civil disobedience, like Martin Luther King Jr., held that lawbreaking in the name of a higher law still requires the lawbreaker to be arrested, tried, and, if found guilty, sentenced. (Conversely, is it somehow worse that King was assassinated by a white supremacist? Or would it have been better had he been murdered in the course of a bank robbery that had nothing to do with his work?) Just because someone commits a crime in the name of an idea, as King sometimes did, doesn’t make the idea itself wrong, nor could it make Martin Luther King Jr. any less dead. And anyway, isn’t the notion of taking a criminal’s word about her motivations at face value dubious?

Somehow, however, the notion that Darwin is to blame for the desperate situation of the poor at the beginning of the twentieth century has been allowed to fester in the American university system: Eric Rauchway, a professor of history at the University of California, Davis, even complained in 2007 that anti-Darwinism has become so widespread among his students that it’s now a “cliche of the history paper that during the industrial era” all “misery and suffering” was due to the belief of the period’s “lords of plutocracy” in the doctrines of “‘survival of the fittest’” and “‘natural selection.’” That this makes no sense doesn’t seem to enter anyone’s calculations—despite the fact that most of these “lords,” like John D. Rockefeller and Andrew Carnegie, were “good Christian gentlemen,” just like many businessmen are today.

The whole idea of blaming Darwin, as I hope is clear, is at best exaggerated and at worst nonsense. But really to see the point, it’s necessary to ask why all those “progressive” and “radical” thinkers thought Darwin was on their side, not the rich man’s. The answer can be found by thinking clearly about what Darwin actually taught, rather than what some people supposedly used him to justify. And what the biologist taught was the doctrine of natural selection: a process that, understood correctly, is far from a doctrine that favors the wealthy and powerful. It would be closer to the truth to say that, on the contrary, what Darwin taught must always favor the poor against the wealthy.

To many in the humanities, that might sound absurd—but to those uncommitted, let’s begin by understanding Darwin as he understood himself, not by what others have claimed about him. And misconceptions of Darwin begin at the beginning: many people credit Charles Darwin with the idea of evolution, but that was not his chief contribution to human knowledge. A number of very eminent people, including his own grandfather, Erasmus Darwin, had argued for the reality of evolutionary descent long before Charles was even born: in his two-volume work of 1796, Zoonomia; or, the Laws of Organic Life, this older Darwin had for instance asserted that life had been evolving for “millions of ages before the commencement of the history of mankind.” So while the theory of evolution is at times presented as springing unbidden from Erasmus’ grandson Charles’ head, that’s simply not true.

By the time Charles published On the Origin of Species in 1859, the general outline of evolution was old hat to professionals, however shocking it may have been to the general public. On the Origin of Species had the impact it did because of the mechanism Darwin suggested to explain how the evolution of species could have proceeded—not because it presented the facts of evolutionary descent, although it did that in copious detail. Instead, as American philosopher Daniel Dennett has observed, “Darwin’s great idea” was “not the idea of evolution, but the idea of evolution by natural selection.” Or as the biologist Stephen Jay Gould has written, Darwin’s own chief aim in his work was “to advance the theory of natural selection as the most important mechanism of evolution.” Darwin’s contribution wasn’t to introduce the idea that species shared ancestors and hence were not created but evolved—but instead to explain how that could have happened.

What Darwin did was to put evolution together with a means of explaining it. In simplest terms, natural selection is what Darwin said it was in the Origin: the idea that, since “[m]ore individuals are born than can possibly survive,” something will inevitably “determine which individual shall live and which shall die.” In such circumstances, as he would later write in the Historical Sketch of the Progress of Opinion on the Origin of Species, “favourable variations would tend to be preserved, and unfavourable ones would be destroyed.” Or as Stephen Jay Gould has succinctly put it, natural selection is “the unconscious struggle among individual organisms to promote their own personal reproductive success.” The word unconscious is the keyword here: the organisms don’t know why they have succeeded—nor do they need to understand. They just do—to paraphrase Yoda—or do not.

Why any of this should matter to the humanities or to people looking to contest economic inequality ought to be immediately apparent—and would be in any rational society. But since the American education system seems designed at the moment to obscure the point, I will now describe a scientific concept related to natural selection known as survivorship bias. Although that concept is used in every scientific discipline, it’s a particularly important one to Darwinian biology. There’s an argument, in fact, that survivorship bias is just a generalized version of natural selection, and thus that it simply is Darwinian biology.

That’s because the concept of “survivorship bias” describes how human beings are tempted to describe mindless processes as mindful ones. Here I will cite one of the concept’s most well-known contemporary advocates, a trader and professor of something called “risk engineering” at New York University named Nassim Nicholas Taleb—precisely because of his disciplinary distance both from biology and the humanities: his distance from both, as Bertolt Brecht might have described it, “exposes the device” by stripping the idea from its disciplinary contexts. As Taleb says, one example of survivorship bias is the tendency all human beings have to think that someone is “successful because they are good.” Survivorship bias, in short, is the sometimes-dangerous assumption that there’s a cause behind every success. But, as Darwin might have said, that ain’t necessarily so.

Consider for instance a hypothetical experiment Taleb constructs in his Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets, consisting of 10,000 money managers. The rules of this experiment are that “each one has a 50% probability of making $10,000 at the end of the year, and a 50% probability of losing $10,000.” If we should run this experiment five times—five runs through randomness—then at the end of those conjectural five years, by the laws of probability we can expect “313 managers who made money for five years in a row.” Is there anything especially clever about these few? No: their success has nothing to do with any quality each might possess. It’s simply due, as Taleb says, to “pure luck.” But these 313 will think of themselves as very fine fellows.
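
Taleb’s arithmetic is easy to verify. The sketch below, in Python, simulates his thought experiment directly: 10,000 managers, each a fair coin flip per year for five years, with the expected count of five-year winners being 10,000 x 0.5^5 = 312.5. (The code is my own illustration of the passage above, not Taleb’s.)

```python
# A sketch of Taleb's thought experiment: 10,000 managers, each a pure coin flip
# per year, run for five years. The count of five-year "winners" lands near the
# expected 10,000 * 0.5**5 = 312.5 -- no skill required, only luck.
import random

random.seed(7)

n_managers, n_years = 10_000, 5
lucky_survivors = sum(
    all(random.random() < 0.5 for _ in range(n_years))  # True if all five years were wins
    for _ in range(n_managers)
)

print(f"Expected by the laws of probability: {n_managers * 0.5 ** n_years:.1f}")
print(f"Produced by this simulated run:      {lucky_survivors}")
```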

Now, notice that, by substituting the word “zebra” for the words “money managers” and “10 offspring” for “$10,000” Taleb has more or less described the situation of the Serengeti Plain—and, as early twentieth-century investigative reporter Ida Tarbell realized, the wilds of Cleveland, Ohio. Tarbell, in 1905’s “John D. Rockefeller: A Character Study” actually says that by 1868, when Rockefeller was a young businessman on the make, he “must have seen clearly … that nothing but some advantage not given by nature or recognized by the laws of fair play in business could ever make him a dictator in the industry.” In other words, Rockefeller saw that if he merely allowed “nature,” as it were, to take its course, he stood a good chance of being one of the 9000-odd failures, instead of the 300-odd success stories. Which is why he went forward with the various shady schemes Tarbell goes on to document in her studies of the man and his company. (Whose details are nearly unbelievable—unless you’re familiar with the details of the 2008 housing bubble.) The Christian gentleman John D. Rockefeller, in other words, hardly believed in the “survival of the fittest.”

It should in other words be clear just how necessary the concept of survivorship bias—and thus Darwin’s notion of natural selection—is to any discussion of economic inequality. Max Weber at least, the great founder of sociology, understood it—that’s why, in The Protestant Ethic and the Spirit of Capitalism, Weber famously described the Calvinist doctrine of predestination, in which “God’s grace is, since His decrees cannot change, as impossible for those to whom He has granted it to lose as it is unattainable for those to whom He has denied it.” As Weber knew, if the Chosen of God are known by their worldly success, then there is no room for debate: the successful simply deserve their success in a fashion not dissimilar to the notion of the divine right of kings.

If there’s a possibility, however, that worldly success is due to chance, i.e. luck, then the road is open to argue about the outcomes of the economic system. Since John D. Rockefeller, at least according to Tarbell, certainly did act as though worldly success was due far more to “chance” than to the fair outcome of a square game, one could I suppose argue that he was a believer in Darwinism, as the believers in the “social Darwinist” camp say. But that seems to stretch the point.

Still, what has this to do with the humanities? The answer is that you could do worse than define the humanities by saying they are the disciplines of the university that ignore survivorship bias—although, if so, that might mean that “business” ought to be classified alongside comparative literature in the course catalogue, at least as Taleb puts it.

Examine economist Gary Smith’s Standard Deviations: Flawed Assumptions, Tortured Data, And Other Ways To Lie With Statistics. As Michael Shermer notes in a review of the book, Smith, a professor at Pomona College, shows how business books like Jim Collins’ Good to Great “culled 11 companies out of 1,435 whose stock beat the market average over a 40-year time span and then searched for shared characteristics among them,” or how In Search of Excellence, 1982’s best-seller, “identified eight common attributes of 43 ‘excellent’ companies.” As Taleb says in his The Black Swan: The Impact of the Highly Improbable, such studies “take a population of hotshots, those with big titles and big jobs, and study their attributes”—they “look at what those big guns have in common: courage, risk taking, optimism and so on, and infer that these traits, most notably risk taking, help you to become successful.” But as Taleb observes, the “graveyard of failed persons [or companies] will be full of people who shared the following traits: courage, risk taking, optimism, et cetera.” The problem with “studies” like these is that they begin with Taleb’s 313, instead of the 10,000.

Another way to describe “survivorship bias,” in other words, is to say that any real investigation into anything must consider what Taleb calls the “silent evidence”: in the case of the 10,000 money managers, it’s necessary to think of the 9,000-odd managers who started the game and failed, and not just the 300-odd managers who succeeded. Studies that skip that step will always find “commonalities” among the “winners,” just as Taleb’s 313 will always discover some common trait between them—and in the same way that a psychic can always “miraculously” know that somebody just died.
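
The point can be made concrete with another short sketch, again my own illustration with made-up numbers rather than an example from Taleb or Smith. Give every firm a random trait (call it “boldness”) and let success be pure luck: a Good to Great-style study that examines only the winners will still be able to tabulate the traits they “share,” at almost exactly the same rate as among the graveyard of failures it never looks at.

```python
# Made-up numbers, for illustration: every firm gets a random "bold" trait and a
# purely random 1% chance of success. Studying only the winners tells you nothing,
# because the silent evidence -- the failures -- shows the trait at the same rate.
import random

random.seed(11)

firms = [
    {"bold": random.random() < 0.5, "succeeded": random.random() < 0.01}
    for _ in range(10_000)
]

winners = [f for f in firms if f["succeeded"]]
failures = [f for f in firms if not f["succeeded"]]

def bold_share(group):
    """Fraction of the group that happens to have the 'bold' trait."""
    return sum(f["bold"] for f in group) / len(group)

print(f"Bold among the winners:                    {bold_share(winners):.0%}")
print(f"Bold among the failures (silent evidence): {bold_share(failures):.0%}")
```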

Yet why should the intellectual shallowness of business writers matter to scholars in the humanities, who write not for popular consumption but for peer review? Well, because as Taleb points out, the threat posed by survivorship bias is not particular to shoddy studies and shoddy scholars, but instead is endemic to entire species of writing. Take for instance Shermer’s discussion of Walter Isaacson’s 2011 biography of Apple Computer’s Steve Jobs … which I’d go into if it were necessary.

But it isn’t, according to Taleb: the “entire notion of biography,” Taleb says in The Black Swan, “is grounded in the arbitrary ascription of a causal relation between specified traits and subsequent events.” Biography by definition takes a number of already-successful entities and then tries to explain their success, instead of starting with equally-unknown entities and watching them either succeed or fail. Nobody finds Beethoven before birth, and even Jesus Christ didn’t pick up disciples before adulthood. Biographies then might be entertaining, but they can’t possibly have any real intellectual substance. Biographies could only really be valuable if their authors predicted a future success—and nobody could possibly write a predictive biography. Biography then simply is an exercise in survivorship bias.

And if biography, then how about history? About the only historians who discuss the point of survivorship bias are those who write what’s known as “counterfactual” history. A genre largely kicked off by journalist MacKinlay Kantor’s fictitious 1960 speculation, If the South Had Won the Civil War, it’s been defined by former Regius Professor of History at Cambridge University Richard J. Evans as “alternative versions of the past in which one alteration in the timeline leads to a different outcome from the one we know actually occurred.” Or as David Frum, thinking in The Atlantic about what might have happened had the United States not entered World War I in 1917, says about his enterprise: “Like George Bailey in It’s a Wonderful Life, I contemplate these might-have-beens to gain a better appreciation for what actually happened.” In statements like these, historians confront the fact that their discipline is inevitably subject to the problem of survivorship bias.

Maybe that’s why counterfactual history is also a genre with a poor reputation among historians: Evans himself has condemned the genre, in The Guardian, by writing that it “threatens to overwhelm our perceptions of what really happened in the past.” “The problem with counterfactuals,” Evans says, “is that they almost always treat individual human actors … as completely unfettered,” when in fact historical actors are nearly always constrained by larger forces. FDR could, hypothetically, have called for war in 1939—it’s just that he probably wouldn’t have been elected in 1940, and someone else would have been in office on that Sunday in Oahu. Which, sure, is true, and responsible historians have always, as Evans says, tried “to balance out the elements of chance on the one hand, and larger historical forces (economic, cultural, social, international) on the other, and come to some kind of explanation that makes sense.” That, to be sure, is more or less the historian’s job. But I am sure the man on the wire doesn’t like to be reminded of the absence of a net either.

The threat posed by survivorship bias extends even into genres that might appear to be immune to it: surely the study of literature, which isn’t about “reality” in any strict sense, is safe from the acid bath of survivorship bias. But look at Taleb’s example of how a consideration of survivorship bias affects just how we think about literature, in the form of a discussion of the reputation of the famous nineteenth-century French novelist Honoré de Balzac.

Let’s say, Taleb proposes, someone asks you why Balzac deserves to be preserved as a great writer, and in reply “you attribute the success of the nineteenth-century novelist … to his superior ‘realism,’ ‘insights,’ ‘sensitivity,’ ‘treatment of characters,’ ‘ability to keep the reader riveted,’ and so on.” As Taleb points out, those characteristics only work as a justification for preserving Balzac “if, and only if, those who lack what we call talent also lack these qualities.” If, on the other hand, there are actually “dozens of comparable literary masterpieces that happened to perish” merely by chance, then “your idol Balzac was just the beneficiary of disproportionate luck compared to his peers.” Without knowing who Balzac’s competitors were, in other words, we are not in a position to know with certainty whether Balzac’s success is due to something internal to his work, or whether his survival is simply the result of dumb luck. So even literature is threatened by survivorship bias.

If you wanted to define the humanities you could do worse than to say they are the disciplines that pay little to no attention to survivorship bias. Which, one might say, is fine: “In my father’s house are many mansions,” to cite John 14:2. But the trouble may be that since, as Taleb or Smith point out—and the examples could be multiplied—the work of the humanities shares the same “scholarly” standards as that of many “business writers,” it does not really matter how “radical”—or even merely reformist—their claims are. The similarities of method may simply overwhelm the message.

In that sense, then, despite the efforts of many academics, a leftist politics centered on the classrooms of the English department rather than the scientific lab just may not be possible: the humanities will always be centered on fending off survivorship bias in the guise of biology’s threat to “reduce the complexities of human culture to patterns in animal behavior,” as W.J.T. Mitchell says—and in so doing, the disciplines of culture will inevitably end up arguing, as Walter Benn Michaels says, “that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” The humanities are antagonistic to biology because the central concept of Darwinian biology, natural selection, is a version of the principle of survivorship bias, while survivorship bias is a concept that poses a real and constant intellectual threat to the humanities—and finally, to complete the circle, survivorship bias is the only argument against allowing the rich to run the world according to their liking. It may be no wonder, then, that as the tide has gone out on the American dream, the American academy has essentially responded by saying “let’s talk about something else.” To the gentlemen and ladies of the American disciplines of the humanities, the wealthy are just the adversary.