Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
Joseph Heller, Catch-22 (1961).

 

“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e., not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has argued for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore over (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the comprehensive documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well known in baseball-nerd circles for having won the American League West division despite being—as the famed sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the Civil War through 1958,” James observes, no team ever managed such a thing. Why? It seems readily apparent that scoring more runs than one’s opponent is a key component of winning baseball games, and winning baseball games is a key component of winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to outscore their opponents. On the other hand, it also seems possible to imagine a particular sort of team winning a lot of one-run games while occasionally suffering blowout losses—and yet, as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were very good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, no team won despite scoring fewer runs than its opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball; its cavernous power alleys were where home runs went to die, while its enormous foul-ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “dead ball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. The 1964 Cardinals, finally, had Bob Gibson (if you are unfamiliar with him, direct yourself to the history of his career immediately), and they played in the first year after major league baseball’s Rules Committee redefined the strike zone to be slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs: 967 to 556, a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.
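The predictive power of run differential can be made concrete with Bill James’s “Pythagorean expectation,” the standard sabermetric formula for estimating a winning percentage from runs scored and allowed (James’s formula is real, though the code below is my own illustrative sketch, not from any source quoted here):

```python
def pythagorean_win_pct(runs_scored: int, runs_allowed: int, exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: estimated winning percentage
    from runs scored and runs allowed (exponent 2 is the classic choice)."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# The 1984 Royals (673 scored, 684 allowed) project below .500,
# despite having won their division:
print(round(pythagorean_win_pct(673, 684), 3))   # ~0.492

# The 1906 "Hitless Wonders" (570 scored, 460 allowed) project well above it:
print(round(pythagorean_win_pct(570, 460), 3))   # ~0.606
```

Run through this lens, the ’84 Royals look like what they were: a below-average team that got lucky in close games.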

That’s also, as it happens, just what the Law of Large Numbers does: first stated by mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle at the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations, the more closely the results will converge toward the expected value. In other words, a single coin toss might come up heads or tails—but the more times the coin is thrown, the closer the proportion of heads will come to one-half.
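Bernoulli’s law can be demonstrated in a few lines of simulation. The sketch below (my own hypothetical code, not drawn from any source in this essay) flips a fair coin repeatedly and shows that the average distance between the observed proportion of heads and one-half shrinks as the number of tosses grows:

```python
import random

def head_proportion(n_flips: int, seed: int) -> float:
    """Proportion of heads in n_flips tosses of a fair coin."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

def mean_abs_deviation(n_flips: int, trials: int = 200) -> float:
    """Average distance from 1/2 across many independent experiments."""
    return sum(abs(head_proportion(n_flips, seed) - 0.5)
               for seed in range(trials)) / trials

# The deviation shrinks roughly like 1/sqrt(n) as the tosses pile up:
for n in (10, 100, 1000, 10000):
    print(n, round(mean_abs_deviation(n), 4))
```

Ten tosses routinely stray far from fifty-fifty; ten thousand almost never do, which is exactly the casino’s (and the insurer’s) business model.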

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but rather through winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” As bizarre as that sort of result is, however, according to some that’s just what makes the Electoral College worth keeping.

Hively was covering that story in 1996: his Discover article described how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff argued that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff observed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the Constitutional Convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet, recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that the task of baseball’s playoff system is to identify the best team in baseball, and hence it ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is that it should eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (cf. “Striking Out” https://djlane.wordpress.com/2016/07/31/striking-out/). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that the winning team really was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were closely matched—that is, nearly as long as two full seasons. It may even be argued that baseball, by relying increasingly on a playoff system instead of the regular-season standings, is increasing, not decreasing, the role of chance in its championship process: whereas prior to 1969 the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams with a run ratio of less than 1.100-to-1, all of the teams except the ones I have already mentioned are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
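Mlodinow’s figures can be checked directly: the probability that the better team wins a best-of-n series is a simple binomial sum. The sketch below is my own illustration of that calculation, not Mlodinow’s code; it shows that even a team good enough to win two games out of three still loses a seven-game series about 17 percent of the time, and that a series on the order of 23 games is needed before the weaker team’s chances drop below five percent:

```python
from math import comb

def series_win_prob(p: float, n_games: int) -> float:
    """Probability that a team winning each game with probability p takes a
    best-of-n series (n_games odd). Equivalent to winning a majority of the
    n games if all of them were played out."""
    need = n_games // 2 + 1
    return sum(comb(n_games, k) * p ** k * (1 - p) ** (n_games - k)
               for k in range(need, n_games + 1))

# A 2/3 favorite still loses a best-of-7 about 17% of the time...
print(round(1 - series_win_prob(2 / 3, 7), 3))    # ~0.173

# ...and only around a best-of-23 does the upset chance dip below 5%:
print(round(1 - series_win_prob(2 / 3, 23), 3))   # ~0.048
```

For two closely matched teams (say, a 55-45 edge per game), the same calculation pushes the required series length into the hundreds of games, which is Mlodinow’s 269-game figure.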

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In the light of Gould’s argument about the panda’s thumb—that a lack of “fit” between the optimal solution to a problem (the human thumb) and the actual solution (the panda’s thumb) implies the presence of “history”—this mismatch intimates that the Electoral College is either the result of a failure to understand the mathematics of chance as applied to elections, or that the American system for electing presidents was not designed for the purpose it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few of the delegates to the Philadelphia convention in 1787—and these some of the most important—were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College deserve much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed to eliminate chance from the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but in particular a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reinforced one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis before coming to Philadelphia in 1787, the marquis had for years—even before Jefferson came to France as the American minister—been a close friend of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. Not all of the convention’s attendees, in short, may have been aware of the relationship between probability and elections, but at least some were—and arguably they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at least, then, and quite possibly Madison, could hardly have been unaware of the mathematical dangers an Electoral College could create: they must have known that the least chancy way of selecting a leader—that is, the product of the design of an infallible engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought a direct popular vote the best way to elect a president.
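Condorcet’s theorem is itself a binomial calculation, and easy to verify numerically. In the sketch below (an illustration of the theorem’s content, not Condorcet’s own method), each voter is independently correct with probability p; the probability that a majority of an odd-sized electorate decides correctly rises toward certainty as the electorate grows, provided p is above one-half, and falls toward zero when p is below it:

```python
from math import comb

def majority_correct(p: float, n_voters: int) -> float:
    """Condorcet's jury theorem: probability that a majority of n_voters
    (odd), each independently correct with probability p, decides correctly."""
    need = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p ** k * (1 - p) ** (n_voters - k)
               for k in range(need, n_voters + 1))

# With p = 0.6, adding voters sharply improves the group's accuracy:
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(0.6, n), 3))
```

A lone voter who is right 60 percent of the time is right 60 percent of the time; an electorate of 101 such voters is right about 98 percent of the time. More voters, fewer accidents, which is precisely the point the text attributes to Condorcet.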

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents, for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in his The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, his proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement virtually identical to Condorcet’s mathematically influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote is how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23, and possibly as long as 269, games—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created solely to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.


The Color of Water

No one gets lucky til luck comes along.
Eric Clapton
     “It’s In The Way That You Use It”
     Theme Song for The Color of Money (1986).

 

 

The greenish tint to the Olympic pool wasn’t the only thing fishy about the water in Rio last month: a “series of recent reports,” Patrick Redford noted in Deadspin, “assert that there was a current in the pool at the Rio Olympics’ Aquatic Stadium that might have skewed the results.” Or—to make the point clear in a way the pool wasn’t—the water in the pool flowed in such a way that it gave an advantage to swimmers starting in certain lanes: as Redford writes, “swimmers in lanes 5 through 8 had a marked advantage over racers in lanes 1 through 4.” According, however, to ESPN’s Michael Wilbon—a noted African-American sportswriter—such results shouldn’t be of concern to people of color: “Advanced analytics,” Wilbon wrote this past May, “and black folks hardly ever mix.” To Wilbon, the rise of statistical analysis poses a threat to African-Americans. But Wilbon is wrong: in reality, the “hidden current” in American life holding back black Americans, and indeed all Americans, is not analytics—it’s the suspicions of supposedly “progressive” people like Michael Wilbon.

The thesis of Wilbon’s piece, “Mission Impossible: African-Americans and Analytics”—published on ESPN’s race-themed website, The Undefeated—was that black people have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” Whereas, in an earlier age, white people like Thomas Jefferson questioned black people’s literacy, nowadays, it seems, it’s acceptable to question their ability to understand mathematics—a “ridiculous” stereotype (according to The Guardian’s Dave Schilling, another black journalist) that Wilbon attempts to paint as, somehow, politically progressive: Wilbon, that is, excuses his absurd belief on the grounds that analytics “seems to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” Yet, while Wilbon isn’t alone in his distrust of analytics, it’s actually just that “philosophy” that may hold the most promise for political progress—not only for African-Americans, but for every American.

Wilbon’s argument, after all, depends on a common thesis heard in the classrooms of American humanities departments: when Wilbon says the “greater the dependence on the numbers, the more challenged people are to tell (or understand) the narrative without them,” he is echoing a common argument deployed every semester in university seminar rooms throughout the United States. Wilbon is, in other words, merely repeating the familiar contention, by now essentially an article of faith within the halls of the humanities, that without a framework—or (as it’s sometimes called), “paradigm”—raw statistics are meaningless: the doctrine sometimes known as “social constructionism.”

That argument, as nearly everyone who has taken a class in a humanities department in the past several generations knows, is that “evidence” only points in a certain direction once certain baseline axioms are assumed. (An argument first put about, by the way, by the physician Galen in the second century AD.) As the American literary critic Stanley Fish once rehearsed the argument in the pages of The New York Times, according to its terms investigators “do not survey the world in a manner free of assumptions about what it is like and then, from that (impossible) disinterested position, pick out the set of reasons that will be adequate to its description.” Instead, Fish went on, researchers “begin with the assumption (an act of faith) that the world is an object capable of being described … and they then develop procedures … that yield results, and they call those results reasons for concluding this or that.” According to both Wilbon and Fish, in other words, the answers people find depend not on the structure of reality itself, but on the baseline assumptions the researcher begins with: what matters is not the raw numbers, but the contexts within which the numbers are interpreted.

What’s important, Wilbon is saying, is the “narrative,” not the numbers: “Imagine,” Wilbon says, “something as pedestrian as home runs and runs batted in adequately explaining [Babe] Ruth’s overall impact” on the sport of baseball. Wilbon’s point is that a knowledge of Ruth’s statistics won’t tell you about the hot dogs the great baseball player ate during games, or the famous “called shot” during the 1932 World Series—what he is arguing is that statistics only point toward reality: they aren’t reality itself. Numbers, by themselves, don’t say anything about reality; they are only a tool with which to access reality, and by no means the only tool available: in one of Wilbon’s examples, Steph Curry, the great guard for the NBA’s Golden State Warriors, knew he shot better from the corners—an intuition that later statistical analysis bore out. Wilbon’s point is that both Curry’s intuition and statistical analysis told the same story, implying that there’s no fundamental reason to favor one road to truth over the other.

In a sense, to be sure, Wilbon is right: statistical analysis is merely a tool for getting at reality, not reality itself, and certainly other tools are available. Yet, it’s also true that, as statistician and science fiction author Michael F. Flynn has pointed out, astronomy—now accounted one of the “hardest” of physical sciences, because it deals with obviously real physical objects in space—was once not an observational science but a mathematical one: in ancient times, Chinese astronomers were called “calendar-makers,” and a European astronomer was called a mathematicus. As Flynn says, “astronomy was not about making physical discoveries about physical bodies in the sky”—it was instead “a specialized branch of mathematics for making predictions about sky events.” Without telescopes, in other words, astronomers did not know what, exactly, the planet Mars was: all they could do was make predictions, based on mathematical analysis, about what part of the sky it might appear in next—predictions that, over the centuries, became startlingly accurate. But as a proto-Wilbon might have said in (for instance) the year 1500, such astronomers had no more direct knowledge of what Mars is than a kindergartner has of the workings of the Federal Reserve.

In the same fashion, Wilbon might point out about the swimming events in Rio, there is no direct evidence of a current in the Olympic pool: the researchers who assert that there was such a current base their arguments on statistical evidence from the races, not an examination of the conditions of the pool. Yet the evidence for the existence of a current is pretty persuasive: as the Wall Street Journal reported, fifteen of the sixteen swimmers, both men and women, who swam in the 50-meter freestyle event finals—the one event most susceptible to the influence of a current, because swimmers swim only one length of the pool in a single direction—swam in lanes 4 through 8, and swimmers who swam in outside lanes in early heats and inside lanes in later heats actually got slower. (A phenomenon virtually unheard of in top-level events like the Olympics.) Barry Revzin, of the website Swim Swam, found that a given Olympic swimmer picked up “a 0.2 percent advantage for each lane … closer to [lane] 8,” Deadspin’s Redford reported, and while that could easily seem “inconsequentially small,” Redford remarked, “it’s worth pointing out that the winner in the women’s 50 meter freestyle only beat the sixth-place finisher by 0.12 seconds.” It’s a very small advantage, in other words, which is to say that it’s very difficult to detect—except by means of the very same statistical analysis distrusted by Wilbon. But although it is a seemingly small advantage, it is enough to determine the winner of the gold medal. Wilbon, in other words, is quite right to say that statistical evidence is not a direct transcript of reality—he’s wrong, however, if he is arguing that statistical analysis ought to be ignored.
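The size of Revzin’s estimated effect is easy to put in perspective with a little arithmetic. The sketch below is a deliberately simplistic linear model of my own (the 23-second base time is an assumed, roughly Olympic-level 50-meter freestyle time, not a figure from the reporting): at 0.2 percent per lane, a swimmer in lane 8 would gain far more over a swimmer in lane 1 than the 0.12 seconds separating first place from sixth.

```python
def lane_advantage_seconds(base_time_s: float, lane: int,
                           per_lane_frac: float = 0.002,
                           reference_lane: int = 1) -> float:
    """Hypothetical linear model of the reported current: each lane closer
    to lane 8 is worth ~0.2% of a swimmer's time, relative to lane 1."""
    return base_time_s * per_lane_frac * (lane - reference_lane)

# Assumed ~23 s base time for an elite 50 m freestyle swim:
edge = lane_advantage_seconds(23.0, 8)
print(round(edge, 3))   # 0.322 s of modeled advantage, vs. a 0.12 s margin
```

Under these assumptions the modeled lane effect is more than twice the actual medal-deciding margin, which is why an effect invisible to the naked eye can still decide who gets the gold.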

To be fair, Wilbon is not arguing exactly that: “an entire group of people,” he says, “can’t simply refuse to participate in something as important as this new phenomenon.” Yet Wilbon is worried about the growth of statistical analysis because he views it as a possible means for excluding black people. If, as Wilbon writes, it’s “the emotional appeal,” rather than the “intellect[ual]” appeal, that “resonates with black people”—a statement that, if it were written by a white journalist, would immediately cause a protest—then Wilbon worries that, in a sports future run “by white, analytics-driven executives,” black people will be even further on the outside looking in than they already are. (And that’s pretty far outside: as Wilbon notes, “Nate McMillan, an old-school, pre-analytics player/coach, who was handpicked by old-school, pre-analytics player/coach Larry Bird in Indiana, is the only black coach hired this offseason.”) Wilbon’s implied stance, in other words—implied because he nowhere explicitly says so—is that since statistical evidence cannot be taken at face value, but only through screens and filters that owe more to culture than to the nature of reality itself, therefore the promise (and premise) of statistical analysis could be seen as a kind of ruse designed to perpetuate white dominance at the highest levels of the sport.

Yet there are at least two objections to make to Wilbon’s argument, the first being empirical: in U.S. Supreme Court cases like McCleskey v. Kemp (in which the petitioner argued that, according to statistical analysis, murderers of white people in Georgia were far more likely to receive the death penalty than murderers of black people) or Teamsters v. United States (in which—according to Encyclopedia.com—the Court ruled, on the basis of statistical evidence, that the Teamsters union had “engaged in a systemwide practice of minority discrimination”), statistical analysis has been advanced precisely to demonstrate the reality of racial bias. (A demonstration against which, by the way, conservatives have time and again countered with arguments against the reality of statistical analysis that essentially mirror Wilbon’s.) To think that statistical analysis could be inherently biased against black people, as Wilbon appears to imply, is therefore empirically nonsense: it’s arguable, in fact, that statistical analysis of the sort pioneered by people like sociologist Gunnar Myrdal has done at least as much as, if not more than, (say) classes on African-American literature to combat racial discrimination.

The more serious issue, however, is logical: Wilbon’s two assertions conflict with each other. To reach his conclusions, Wilbon ignores (like others who make similar arguments) the implications of his own reasoning. Statistics ought to be distrusted, he says, because only “narrative” can grant meaning to otherwise meaningless numbers—but if it is so that numbers cannot “mean” without a framework to grant them meaning, then they cannot pose the threat that Wilbon says they might. If Wilbon is right that statistical analysis is inherently biased against black people, then numbers do have meaning in themselves; conversely, if numbers can only be interpreted within a framework, then they cannot be inherently biased against black people. By Wilbon’s own account, in other words, nothing about statistical analysis implies that it can only be pursued by white people, nor could the numbers themselves demand a single (oppressive) use—because if that were so, numbers would be capable of providing their own interpretive framework. Wilbon cannot logically advance both propositions simultaneously.

That doesn’t mean, however, that Wilbon’s argument—the argument, it ought to be noted, of many who think of themselves as politically “progressive”—is not having an effect: it’s possible, I think, that the relative success of this argument is precisely what is causing Americans to ignore a “hidden current” in American life. That current could be described by an “analytical” observation made by professors Sven Steinmo and Jon Watts some two decades ago: “No other democratic system in the world requires support of 60% of legislators to pass government policy”—an observation that, in turn, may be linked to the observable reality that, as political scientists Frances E. Lee and Bruce Oppenheimer have noted, “less populous states consistently receive more federal funding than states with more people.” Understanding these two observations, and their effects on each other, would, I suspect, throw a great deal of light on the reality of American lives, white and black—yet it’s precisely the sort of reflection that the “social construction” dogma advanced by Wilbon and company appears specifically designed to avoid. While to many, even now, the arguments for “social construction” and such might appear utterly liberatory, it’s possible to tell a tale in which it is just such doctrines that are the tools of oppression today.

Such an account would be, however—I suppose Michael Wilbon or Stanley Fish might tell us—simply a story about the one that got away.

Joe Maddon and the Fateful Lightning 

All things are an interchange for fire, and fire for all things,
just like goods for gold and gold for goods.
—Heraclitus

Last month, one of the big stories about presidential candidate and Wisconsin governor Scott Walker was his plan not only to cut the state’s education budget, but also to change state law in order to allow, according to The New Republic, “tenured faculty to be laid off at the discretion of the chancellors and Board of Regents.” Given that Wisconsin was the scene of the Ely case of 1894—which ended with the board of trustees of the University of Wisconsin issuing the ringing declaration: “Whatever may be the limitations which trammel inquiry elsewhere we believe the great state University of Wisconsin should ever encourage that continual and fearless sifting and winnowing by which alone truth can be found”—Walker’s attempt is a threat to the entire system of tenure. Yet it may be that American academia in general, if not Wisconsin academics in particular, is not entirely blameless—not because, as American academics might smugly like to think, they are so totally radical, dude, but on the contrary because they have not been radical enough: to the point that, as I will show, probably the most dangerous, subversive and radical thinker on the North American continent at present is not an academic, nor even a writer, at all. His name is Joe Maddon, and he is the manager of the Chicago Cubs.

First though, what is Scott Walker attempting to do, and why is it a big deal? Specifically, Walker wants to change Section 39 of the relevant Wisconsin statute so that Wisconsin’s Board of Regents could, “with appropriate notice, terminate any faculty or academic staff appointment when such an action is deemed necessary … instead of when a financial emergency exists as under current law.” In other words, Walker’s proposal would more or less allow Wisconsin’s Board of Regents to fire anyone virtually at will, which is why the American Association of University Professors “has already declared that the proposed law would represent the loss of a viable tenure system,” as reported by TNR.

The rationale given for the change is the usual one of allowing for more “flexibility” on the part of campus leaders: by doing so, supposedly, Wisconsin’s university system can better react to the fast-paced changes of the global economy … feel free to insert your own clichés of corporate speak here. The seriousness with which Walker takes the university’s mission as a searcher for truth might perhaps be discerned by the fact that he appointed the son of his campaign chairman to the Board of Regents—nepotism apparently being, in Walker’s view, a sure sign of intellectual probity.

The tenure system was established, of course, exactly to prevent political appointee yahoos from having anything to say about the production of truth—a principle that, one might think, ought to be sacrosanct, especially in the United States, where every American essentially exists right now, today, on the back of intellectual production usually conducted in a university lab. (For starters, it was the University of Chicago that gave us what conservatives seem to like to think of as the holy shield of the atomic bomb.) But it’s difficult to blame “conservatives” for doing what’s in, as the scorpion said to the frog, their nature: what’s more significant is that academics ever allowed this to happen in the first place—and while it is surely the case that all victims everywhere wish to hold themselves entirely blameless for whatever happens to them, it’s also true that no one is surprised when a car driving the wrong way gets hit.

A clue toward how American academia has been driving the wrong way can be found in a New Yorker story from last October, in which Maria Konnikova described a talk moral psychologist Jonathan Haidt gave to the Society for Personality and Social Psychology. The thesis of the talk? That psychology, as a field, had “a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity.” In other words, the whole field was inhabited by people on the liberal-to-radical end of the ideological spectrum, with very few conservatives.

To Haidt, this was a problem because it “introduced bias into research questions [and] methodology,” particularly concerning “politicized notions, like race, gender, stereotyping, and power and inequality.” Yet a follow-up study surveying 800 social psychologists found something interesting: actually, these psychologists were only markedly left-of-center compared to the general population when it came to something called “the social-issues scale.” Whereas in economic matters or foreign affairs, these professors tilted left at about a sixty to seventy percent clip, when it came to what sometimes are called “culture war” issues the tilt was in the ninety percent range. It’s the gap between those measures, I think, that Scott Walker is able to exploit.

In other words, while it ought to be borne in mind that this is merely one study of a narrow range of professors, the study doesn’t disprove Professor Walter Benn Michaels’ generalized assertion that American academia has largely become the “human resources department of the right”: that is, the figures seem to say that, sure, economic inequality sorta bothers some of these smart guys and gals—but really to wind them up you’d best start talking about racism or abortion, buster. And what that might mean is that the rise of so-called “tenured radicals” since the 1960s hasn’t really been the fearsome beast the conservative press likes to make it out to be: in fact, it might be so that—like some predator/prey model from ecological study—the more left the professoriate turns, the more conservative the nation becomes.

That’s why it’s Joe Maddon of the Chicago Cubs, rather than any American academic, who is the most radical man in America right now. Why? Because Joe Maddon is doing something interesting in these days of American indifference to reality: he is paying attention to what the world is telling him, and doing something about it in a manner that many, if not most, academics could profit by examining.

What Joe Maddon is doing is batting the pitcher eighth.

That might, obviously, sound like small beer when the most transgressive of American academics are plumbing the atomic secrets of the universe, or questioning the existence of the biological sexes, or any of the other surely fascinating topics the American academy is currently investigating. In fact, however, there is at present no more important philosophical topic of debate anywhere in America, from the literary salons of New York City to the programming pits of Northern California, than the one that has been ongoing throughout this mildest of summers on the North Side of the city of Chicago.

Batting the pitcher eighth is a strategy that has been tried before in the history of American baseball: in 861 games since 1914. But twenty percent of those games, reports Grantland, “have come in 2015,” this season—and 112 of them, and counting, have been played by the Chicago Cubs, because in every single game the Cubs have played this year, the pitcher has batted in the eighth spot. That is something no major league baseball team has ever done—and the reasoning that led Joe Maddon to toss aside baseball orthodoxy like so many spit cups of tobacco juice is why, eggheads and corporate lackeys aside, he is at present the most screamingly dangerous man in America.

Joe Maddon is dangerous because he saw something in a peculiarity in the rules of baseball, something most fans are so inured to that they have become unconscious of its meaning. That peculiarity is this: baseball has history. It’s a phrase that might sound vague and sentimental, but that’s not the point at all: what it refers to is that, with every new inning, a baseball lineup does not begin again at the beginning, but instead jumps to the next player after the last batter of the previous inning. This is important because, traditionally, pitchers bat in the ninth spot in a given lineup: they are usually the weakest batters on any team by a wide margin, which means that by batting them last, a manager usually ensures that they do not bat until at least the second, or even third, inning at the earliest. Batting the pitcher ninth enables a manager to hide his weaknesses and emphasize his strengths.

That has been orthodox doctrine since the beginnings of the sport: the tradition is so strong that when Babe Ruth, who first played in the major leagues as a pitcher, came to Boston he initially batted in the ninth spot. But what Maddon saw was that while the orthodox theory does minimize the number of the pitcher’s plate appearances, it does not in itself necessarily maximize the overall efficiency of the offense—because, as Russell Carleton put it for FoxSports, “in baseball, a lot of scoring depends on stringing a couple of hits together consecutively before the out clock runs out.” In other words, while batting the pitcher ninth does hide that weakness as much as possible, the strategy also involves giving up an opportunity: in the words of Ben Lindbergh of Grantland, by “hitting a position player in the 9-hole as a sort of second leadoff man,” a manager could “increase the chances of his best hitter(s) batting with as many runners on base as possible.” Because baseball lineups do not start at the beginning with every new inning, batting the weakest hitter last means that a lineup’s best players—usually those in the one through three spots—do not bat with as many runners on base as they otherwise might.
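The carry-over mechanic underlying all of this can be sketched in a few lines of Python. This is a toy model under simplifying assumptions of my own—a fixed three batters per inning (a perfect 1-2-3 inning) and 38 total plate appearances in a game—chosen only to make the arithmetic visible, not to model real innings:

```python
# Toy model of baseball's carry-over mechanic: the batting order does
# not reset each inning but continues from wherever the previous
# inning left off, cycling through the nine lineup spots.
# ASSUMPTIONS (illustrative only): exactly 3 batters per inning,
# 38 total plate appearances per game.

def first_inning(lineup_spot, batters_per_inning=3):
    """Inning in which a 1-indexed lineup spot first comes to the
    plate, assuming a fixed number of batters per inning."""
    return (lineup_spot - 1) // batters_per_inning + 1

def plate_appearances(lineup_spot, total_batters=38):
    """Plate appearances for a lineup spot over a whole game, given
    the total number of batters the team sends to the plate."""
    return sum(1 for i in range(total_batters) if i % 9 == lineup_spot - 1)

# Batting a weak hitter ninth both delays his first at-bat and
# gives him fewer trips to the plate than the top of the order.
print(first_inning(9))       # 3  -> doesn't bat until the 3rd inning
print(plate_appearances(1))  # 5
print(plate_appearances(9))  # 4
```

The same cycling is what Maddon’s eighth-spot gambit exploits in reverse: the spot after the pitcher becomes a “second leadoff man” who bats just ahead of the lineup’s best hitters.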

Now, the value of this move of putting the pitcher eighth is debated by baseball statisticians: “Study after study,” says Ben Lindbergh of Grantland, “has shown that the tactic offers at best an infinitesimal edge: two or three runs per season in the right lineup, or none in the wrong one.” In other words, Maddon may very well be chasing a will-o’-the-wisp, a perhaps-illusory advantage: as Lindbergh says, “it almost certainly isn’t going to make or break the season.” Yet, in an age in which runs are much scarcer than they were in the juiced-up steroid era of the 1990s, and simultaneously the best teams in the National League (the American League, which does not require pitchers to bat, is immune to the problem) are separated in the standings by only a few games, a couple of runs over the course of a season may be exactly what allows one team to make the playoffs and, conversely, prevents another from doing the same: “when there’s so little daylight separating the top teams in the standings,” as Lindbergh also remarked, “it’s more likely that a few runs—which, once in a while, will add an extra win—could actually account for the difference between making and missing the playoffs.” Joe Maddon, in other words, is attempting to squeeze every last run he can from his players with every means at his disposal—even if it means taking on a doctrine that has been part of baseball nearly since its beginnings.

Yet, why should that matter at all, much less make Joe Maddon perhaps the greatest threat to the tranquility of the Republic since John Brown? The answer is that Joe Maddon is relentlessly focused on the central meaningful event of his business: the act of scoring. Joe Maddon’s job is to make sure that his team scores as many runs as possible, and he is willing to do what it takes to make that happen. The reason he is so dangerous—and why the academics of America may just deserve the thrashing the Scott Walkers of the nation appear so willing to give them—is that American democracy is not so single-mindedly devoted to getting the maximum value out of its own central meaningful event: the act of voting.

Like the baseball insiders who scoff at Joe Maddon for scuttling after a spare run or two over the course of 162 games—like the major league assistant general manager quoted by Lindbergh, who dismissed the concept by saying “the benefit of batting the pitcher eighth is tiny if it exists at all”—American political insiders believe that a system that profligately disregards the value of votes doesn’t really matter over the course of a political season—or century. And it is indisputable that the American political system is profligate with the value of American votes. The number of votes standing behind a single elector in the Electoral College, for example, can differ by hundreds of thousands from state to state each Election Day; while through “the device of geographic—rather than population-based—representation in the Senate, [the system] substantially dilutes the voice and voting power of the majority of Americans who live in urban and metropolitan areas in favor of those living in rural areas,” as one Princeton political scientist has put the point. Or, more directly, as Dylan Matthews wrote for the Washington Post two years ago, if “senators representing 17.82 percent of the population agree, they can get a majority”—while on the other hand “11.27 percent of the U.S. population,” as represented by the smallest 20 states, “can successfully filibuster legislation.” Perhaps most significantly, as Frances Lee and Bruce Oppenheimer have shown in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, “less populous states consistently receive more federal funding than states with more people.” As presently constructed, in other words, the American political system is designed to waste votes, not to seek all of their potential value.
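The arithmetic Matthews describes is mechanical enough to sketch in code: sort the states by population, take just enough small ones to supply the needed senators, and sum their share of the national total. The state populations below are hypothetical placeholders of my own, skewed roughly like the real distribution; actual census figures would be needed to reproduce Matthews’s 17.82 and 11.27 percent numbers:

```python
# Sketch of the minimal-population-share arithmetic behind Senate
# majorities and filibusters.  NOTE: the populations below are
# HYPOTHETICAL placeholders, not census data.

def min_population_share(populations, senators_needed):
    """Smallest share of total population whose states can supply
    `senators_needed` senators (each state elects two)."""
    states_needed = -(-senators_needed // 2)  # ceiling division
    smallest = sorted(populations)[:states_needed]
    return sum(smallest) / sum(populations)

# 50 fake state populations: a long tail of small states,
# a handful of giants.
populations = [600_000 + 800_000 * i for i in range(50)]

majority = min_population_share(populations, 51)    # 26 smallest states
filibuster = min_population_share(populations, 41)  # 21 smallest states
print(f"majority: {majority:.1%}, filibuster: {filibuster:.1%}")
```

Even with made-up numbers, the shape of the result is the same as Matthews’s: well under half the population can command a Senate majority, and a still smaller fraction can sustain a filibuster.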

American academia, however, does not discuss such matters. Indeed, the disciplines usually thought of as the most politically “radical”—usually those in the humanities—are more or less expressly designed to rule out the style of thought (naturalistic, realistic) taken up here: one reason, perhaps, for the split Maria Konnikova observed between psychology professors’ opinions on economic matters and “cultural” ones. Yet just because an opinion is not registered in academia does not mean it does not exist: imbalances are inevitably corrected, and the correction will undoubtedly occur in this matter of the relative value of an American vote, too. The problem, of course, is that such “price corrections,” when it comes to issues like this, are not particularly known for being calm or smooth. Perhaps there is one possible upside, however: when that happens—and there is no doubt that the day of what the song calls “the fateful lightning” will arrive, be it tomorrow or in the coming generations—Joe Maddon may receive his due as not just a battler in the front lines of sport, but a warrior for justice. That, at least, might not be entirely surprising to his fellow Chicagoans—who remember that it was not the flamboyant tactics of busting up liquor stills that ultimately got Capone, but instead the slow and patient work of tax accountants and auditors.

You know, the people who counted.