Water to the Sea

Yet lives our pilot still. Is’t meet that he
Should leave the helm and like a fearful lad
With tearful eyes add water to the sea
And give more strength to that which hath too much,
Whiles, in his moan, the ship splits on the rock,
Which industry and courage might have saved?
—Henry VI, Part III, Act V, scene iv.

“Those who make many species are the ‘splitters,’ and those who make few are the ‘lumpers,’” remarked Charles Darwin in an 1857 letter to botanist J.D. Hooker; the title of University of Chicago professor Kenneth Warren’s most recent book, What Was African-American Literature?, announces him as a “lumper.” The chief argument of Warren’s book is that the claim that something called “African-American literature” is “different from the rest of American lit[erature]”—a claim that many of Warren’s colleagues, perhaps no one more so than Harvard’s Henry Louis Gates, Jr., have based their careers upon—is, in reality, a claim that, historically, many writers with large amounts of melanin would have rejected. Take the fact, Warren says, that “literary societies … among free blacks in the antebellum north were not workshops for the production of a distinct black literature but salons for producing works of literary distinction”: these were not people looking to split off—or secede—from the state of literature. Warren’s work is, thereby, aimed against those who, like so many Lears, have divided and subdivided literature by attaching so many different adjectives to literature’s noun—an attack Warren says he makes because “a literature insisting that the problem of the 21st century remains the problem of the color line paradoxically obscures the economic and political problems facing many black Americans, unless those problems can be attributed to racial discrimination.” What Warren sees, I think, is that far too much attention is being paid to the adjective in “African-American literature”—though what he may not see is that the real issue concerns the noun.

The noun being, of course, the word “literature”: Warren’s account worries the “African-American” part of “African-American literature” instead of the “literature” part. Specifically, in Warren’s view what links the adjective to the noun—or “what made African American literature a literature”—was the regime of “constitutionally-sanctioned state-enforced segregation” known as Jim Crow, which made “black literary achievement … count, almost automatically, as an effort on behalf of the ‘race’ as a whole.” Without that institutional circumstance there are writers who are black—but no “black writers.” To Warren, it’s the distinct social structure of Jim Crow, hardening in the 1890s, that creates “black literature,” instead of merely examples of writing produced by people whose skin is darker-colored than that of other writers.

Warren’s argument thereby takes the familiar form of the typical “social construction” argument, as outlined by Ian Hacking in his book The Social Construction of What? Such arguments begin, Hacking says, when “X is taken for granted” and “appears to be inevitable”; in the present moment, African-American literature can certainly be said—for some people—to appear to be inevitable: Harvard’s Gates, for instance, has long claimed that “calls for the creation of a [specifically “black”] tradition occurred long before the Jim Crow era.” But it’s just at such moments, Hacking says, that someone will observe that X is, in fact, “the contingent product of the social world.” Which is just what Warren does.

Although those who argue for an ahistorical vision of African-American literature claim that all black writers were attempting to produce a specifically black literature, Warren notes that the historical evidence points, merely, to an attempt to produce literature: i.e., a member of the noun class without a modifying adjective. At least, until the advent of the Jim Crow system at the end of the nineteenth century: it’s only after that time, Warren says, that “literary work by black writers came to be discussed in terms of how well it served (or failed to serve) as an instrument in the fight against Jim Crow.” In the familiar terms of the hallowed social-constructionist argument, Warren is claiming that the adjective was added to the noun later, as a result of specific social forces.

Warren’s is an argument, of course, with a number of detractors, and not simply Gates. In The Postethnic Literary: Reading Paratexts and Transpositions Around 2000, Florian Sedlmeier charged Warren with reducing “African American identity to a legal policy category,” and with offering an account that “relegates the functions of authorship and literature to the economic subsystem.” It’s a familiar version of the “reductionist” charge often leveled by “postmoderns” against Marxists—an accusation that is tiresome at best these days.

More creatively, in a symposium of responses to Warren in the Los Angeles Review of Books, Erica Edwards attempted to one-up Warren by saying that Warren fails to recognize that perhaps the true “invention” of African-American literature was not during the Jim Crow era of legalized segregation, but instead “with the post-Jim Crow creation of black literature classrooms.” Whereas Gates, in short, wishes to locate the origin of African-American literature in Africa prior to (or concurrently with) slavery itself, and Warren instead locates it in the 1890s during the invention of Jim Crow, Edwards wants to locate it in the 1970s, when African-American professors began to construct their own classes and syllabi. Edwards’ argument, at the least, has a certain empirical force: the term “African-American” itself is a product of the civil rights movement and afterwards; that is, the era of the end of Jim Crow, not its beginnings.

Edwards’ argument thereby leads nearly seamlessly into Aldon Lynn Nielsen’s objections, published as part of the same symposium. Nielsen begins by observing that Warren’s claims are not particularly new: Thomas Jefferson, he notes, “held that while Phillis Wheatley [the eighteenth-century black poet] wrote poems, she did not write literature,” while George Schuyler, the black novelist, wrote for The Nation in 1926 that “there was not and never had been an African American literature”—for the perhaps-surprising reason that there was no such thing as an African-American. Schuyler instead felt that the “Negro”—his term—“was no more than a ‘lampblacked Anglo-Saxon.’” In that sense, Schuyler’s argument was even more committed to the notion of “social construction” than Warren is: whereas Warren questions the timelessness of the category of a particular sort of literature, Schuyler questioned the existence of a particular category of person. Warren, that is, merely questions why “African-American literature” should be distinguished—or split from—“American literature”; Schuyler—an even more incorrigible lumper than Warren—questioned why “African-Americans” ought to be distinguished from “Americans.”

Yet, if even the term “African-American,” considered as a noun itself rather than as the adjective it is in the phrase “African-American literature,” can be destabilized, then surely that ought to raise the question, for these sharp-minded intellectuals, of the status of the noun “literature.” For it is precisely the catechism of many today that it is the “liberating” features of literature—that is, exactly, literature’s supposed capacity to produce the sort of argument delineated and catalogued by Hacking, the sort of argument in which it is argued that “X need not have existed”—that will produce, and has produced, whatever “social progress” we currently observe about the world.

That is the idea that “social progress” is the product of an increasing awareness of Nietzsche’s description of language as a “mobile army of metaphors, metonyms, and anthropomorphisms”—or, to use the late American philosopher Richard Rorty’s terminology, to recognize that “social progress” is a matter of redescription by what he called, following literary critic Harold Bloom, “strong poets.” Some version of such a theory is held by what Rorty, following University of Chicago professor Allan Bloom, called “‘the Nietzscheanized left’”: one that takes seriously the late Belgian literature professor Paul de Man’s odd suggestion that “‘one can approach … the problems of politics only on the basis of critical-linguistic analysis,’” or the late French historian Michel Foucault’s insistence that he would not propose a positive program, because “‘to imagine another system is to extend our participation in the present system.’” But such sentiments have hardly been limited to European scholars.

In America, for instance, former Duke University professor of literature Jane Tompkins echoed Foucault’s position in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History.” There, Tompkins approvingly cited novelist Harriet Beecher Stowe’s belief, as expressed in Uncle Tom, that the “political and economic measures that constitute effective action for us, she regards as superficial, mere extensions of the worldly policies that produced the slave system in the first place.” In the view of people like Tompkins, apparently, “political measures” will somehow sprout out of the ground of their own accord—or at least, by means of the transformative redescriptive powers of “literature.”

Yet if literature is simply a matter of redescription then it must be possible to redescribe “literature” itself: which in this paragraph will be in terms of a growing scientific “literature” (!) that, since the 1930s, has examined the differences between animals and human beings in terms of what are known as “probability guessing experiment[s].” In the classic example of this research—as cited in a 2000 paper called “The Left Hemisphere’s Role in Hypothesis Formation”—if a light is flashed with a ratio of 70% red light to 30% green, animals will tend always to guess red, while human beings will attempt to anticipate which light will be flashed next: in other words, animals will “tend to maximize or always choose the option that has occurred most frequently in the past”—whereas human beings will “tend to match the frequency of previous occurrences in their guesses.” Animals will simply always guess the same answer, while human beings will attempt to divine the pattern: that is, they will make their guesses based on the assumption that the previous series of flashes were meaningful. If the previous three flashes were “red, red, green,” a human being will tend to guess that the next flash will be red, whereas an animal will simply always guess red.

That in turn implies that, since in this specific example there is in fact no pattern and merely a probabilistic ratio of green to red, animals will always outperform human beings in this sort of test: as the authors of the paper write, “choosing the most frequent option all of the time, yields more correct guesses than matching as long as p ≠ 0.5.” Or, as they also note, “if the red light occurs with a frequency of 70% and a green light occurs with a frequency of 30%, overall accuracy will be highest if the subject predicts red all the time.” It’s true, in other words, that attempting to match a pattern will result in being correct 100% of the time—if the pattern is successfully matched. That result has, arguably, consequences for the liberationist claims of social constructionist arguments in general and of literature in particular.
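The arithmetic behind the paper’s claim is easy to check with a back-of-the-envelope simulation (mine, not the paper’s): with a 70/30 ratio, always guessing red is right 70% of the time, while frequency-matching is right only 0.7² + 0.3² = 58% of the time.

```python
import random

random.seed(42)
P_RED = 0.7          # probability that any given flash is red
TRIALS = 100_000

flashes = ["red" if random.random() < P_RED else "green" for _ in range(TRIALS)]

# "Maximizing" (the animal strategy): always guess the most frequent option.
maximizing_hits = sum(f == "red" for f in flashes)

# "Matching" (the human tendency): guess red exactly as often as red occurs,
# but independently of what the next flash actually is.
matching_hits = sum(
    f == ("red" if random.random() < P_RED else "green") for f in flashes
)

print(f"maximizing accuracy: {maximizing_hits / TRIALS:.3f}")  # ~0.70
print(f"matching accuracy:   {matching_hits / TRIALS:.3f}")    # ~0.58 (0.7² + 0.3²)
```

The gap only closes when p = 0.5, which is exactly the authors’ condition for the two strategies to tie.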

I trust that, without much in the way of detail—which I think could be elucidated at tiresome length—it can be stipulated that, more or less, the entire liberatory project of “literature” described above, as held by such luminaries as Foucault or Tompkins, can be said to be an attempt at elaborating rules for “pattern recognition.” Hence, it’s possible to understand how training in literature might be helpful towards fighting discrimination, which after all is obviously about constructing patterns: racists are not racist towards merely 65% of all black people, or racist only 37% of the time. Racism and other forms of discrimination are not probabilistic but deterministic: they are rules used by discriminators that are directed at everyone within the class. (It’s true that the phenomenon of “passing” raises questions about classes, but the whole point of “passing” is that individual discriminators are unaware of the class’ “true” boundaries.) So it’s easy to see how pattern recognition might be a useful skill with which to combat racial or other forms of discrimination.

Matching a pattern, however, suffers from one difficulty: it requires the existence of a pattern to be matched. Yet in the example discussed in “The Left Hemisphere’s Role in Hypothesis Formation”—as in everything governed by probability—there is no pattern: there is merely a larger chance of the light being red rather than green in each instance. Attempting to match a pattern in a situation ruled instead by probability is not only unhelpful but positively harmful: because there is no pattern, “guessing” simply cannot perform as well as maintaining the same choice every time. (Which in this case would at least result in being correct 70% of the time.) In probabilistic situations, in other words, where there is merely a certain probability of a given result rather than a certain pattern, both empirical evidence and mathematics itself demonstrate that the animal procedure of always guessing the same will be more successful than the human attempt at pattern recognition.

Hence, it follows that although training in recognizing patterns—the basis of schooling in literature, it might be said—might be valuable in combatting racism, such training will not be helpful in facing other sorts of problems: as the scientific literature demonstrates, pattern recognition as a strategy only works if there is a pattern. That in turn means that literary training can only be useful in a deterministic, and not probabilistic, world—and therefore, then, the project of “literature,” so-called, can only be “liberatory” in the sense meant by its partisans if the obstacles from which human beings need liberation are pattern-based. And that’s a conclusion, it seems to me, that is questionable at best.

Take, for example, the matter of American health care. Unlike all other industrialized nations, the United States does not have a single, government-run healthcare system, despite the fact that, as Malcolm Gladwell has noted, the American labor movement knew as early as the 1940s that “the safest and most efficient way to provide insurance against ill health or old age [is] to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, insurance works best by lumping, not splitting. The reason why may be the same as the reason that, as the authors of “The Left Hemisphere’s Role in Hypothesis Formation” point out, “humans choose a less optimal strategy than rats” in probabilistic situations. Contrary to the theories of those in the humanities, in other words, the reality is that human beings in general—and Americans when it comes to health care—appear to have a basic unfamiliarity with the facts of probability.

One sign of that ignorance is, after all, the growth of casino gambling in the United States even as health care remains a hodgepodge of differing systems—despite the fact that both insurance and casinos run on precisely the same principle. As statistician and trader Nassim Taleb has pointed out, casinos “never (if they do things right) lose money”—so long as they are not run by Donald Trump—because they “simply do not let one gambler make a massive bet” and instead prefer “to have plenty of gamblers make a series of bets of limited size.” In other words, it is not possible for some high roller to bet, say, the entire worth of a Las Vegas casino on a single hand of blackjack, or any other game; casinos simply limit the stakes to something small enough that the continued existence of the business is not at risk on any one particular event, and then make sure that there are enough bets being made to allow the laws of probability in every game (which are tilted toward the casino) to ensure the continued health of the business. Insurance, as Gladwell observed above, works precisely the same way: the more people paying premiums—and the more widely dispersed they are—the less likely it is that any one catastrophic event can wipe out the insurance fund. Both insurance and casinos are lumpers, not splitters: that, after all, is precisely why all other industrialized nations have put their health care systems on a national basis rather than maintaining the various subsystems that Americans—apparently inveterate splitters—still have.
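Taleb’s point about bet-splitting can be made concrete with a toy simulation (the 2% house edge and the dollar figures below are made-up numbers for illustration, not any real game’s): the same total amount of action, split into many small independent bets, almost guarantees the house a profit, while a single giant bet leaves it at the mercy of one coin flip.

```python
import random

random.seed(1)
P_PLAYER_WIN = 0.49   # assumed even-money game with a 2% house edge

def casino_ahead(n_bets: int, stake: float, runs: int = 500) -> float:
    """Fraction of simulated runs in which the house ends with a profit."""
    ahead = 0
    for _ in range(runs):
        profit = 0.0
        for _ in range(n_bets):
            # House loses the stake when the player wins, gains it otherwise.
            profit += -stake if random.random() < P_PLAYER_WIN else stake
        ahead += profit > 0
    return ahead / runs

# One high roller betting $1,000,000 on a single hand: the house is ahead
# only about half the time -- barely better than a coin flip.
print(casino_ahead(n_bets=1, stake=1_000_000))

# Ten thousand gamblers betting $100 each (the same $1,000,000 of action):
# the house comes out ahead in the large majority of runs.
print(casino_ahead(n_bets=10_000, stake=100))
```

The lumping does not change the house’s expected profit per dollar wagered; it only shrinks the variance around that expectation, which is exactly what an insurance pool does with premiums.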

Health care, of course, is but one of the many issues of American life that, although influenced by racial or other kinds of discrimination, ultimately have little to do with it: what matters about health care, in other words, is that too few Americans are getting it, not merely that too few African-Americans are. The same is true, for instance, about incarceration: although such works as Michelle Alexander’s The New Jim Crow have argued that the fantastically high rate of incarceration in the United States constitutes a new “racial caste system,” University of Pennsylvania professor of political science Marie Gottschalk has pointed out that “[e]ven if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” The problem with American prisons, in other words, is that there are too many Americans in them, not (just) too many African-Americans—or any other sort of American.

Viewing politics through a literary lens, in sum—as a matter of flashes of insight and redescription, instantiated by Wittgenstein’s duck-rabbit figure and so on—ultimately has costs: costs that have been witnessed again and again in recent American history, from the War on Drugs to the War on Terror. As Warren recognizes, viewing such issues as health care or prisons through a literary, or more specifically racial, lens is ultimately an attempt to fit a square peg into a round hole—or, perhaps even more appositely, to bring a knife to a gun fight. Warren, in short, may as well have cited UCLA philosophy professor Abraham Kaplan’s observation, sometimes called Kaplan’s Law of the Instrument: “Give a boy a hammer and everything he meets has to be pounded.” (Or, as Kaplan put the point more delicately, it ought not to be surprising “to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”) Much of the American “left,” in other words, views all problems as matters of redescription and so on—a belief not far from common American exhortations to “think positively” and the like. Certainly, America is far from the post-racial utopia some would like it to be. But curing the disease is not—contrary to the beliefs of many Americans today—the same as diagnosing it.

Like it—or lump it.

Just Say No

Siger wished to remain a professing Catholic, and to safeguard his faith he had recourse to the celebrated theory of the two truths: what is true in philosophy may be false in religion, and vice versa.
—“Siger of Brabant,” New Catholic Encyclopedia, 1914.
If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.
—Thomas Aquinas, Summa Contra Gentiles
“The Triumph of Thomas Aquinas Over Averroës,” Benozzo Gozzoli (1420–1497)

“Let no one,” read the sign over Plato’s Academy, the famed school of ancient Athens, “ignorant of mathematics enter here.” To Plato, understanding mathematics was prerequisite to the discussion of other topics, including politics. During the 1880s, however, some professors in the German university system (like Wilhelm Windelband and Wilhelm Dilthey) divided knowledge into what they called “Geisteswissenschaften” (“human sciences”) and “Naturwissenschaften” (“natural sciences”), so that where Plato viewed mathematics as a necessary substrate in a vertical, hierarchical relation with other fields, the Germans thought of that relation horizontally, as if the two were co-equals. Today, that German conception is best exemplified by what’s known as “science studies”: the “institutional goal of” which, as Mark Bauerlein of Emory University observed some years ago, is “to delimit the sciences to one knowledge domain, to show that they speak not for reality, but for certain constructions of reality.” (Or, as one of the founders of “science studies”—Andrew Ross—began a book on the matter back in the early 1990s: “This book is dedicated to all of the science teachers I never had. It could only have been written without them.”) Yet, while it may be that the German horizontal conception (to use Plato’s famous metaphor) “carves nature at the joint” better than Plato’s vertical one, the trouble with thinking of the mathematical, scientific world as one thing and the world of the human, including the political, as something else is that, among other costs, it makes it very difficult to tell—as exemplified by two different accounts of the same historical event—the story of George Washington’s first veto. Although many people appear to think of the “humanities” as just the ticket to escape America’s troubled political history … well, maybe not.

The first account I’ll mention is a chapter entitled “The First Veto,” contained in a book published in 2002 called Political Numeracy: Mathematical Perspectives on Our Chaotic Constitution. Written by law professor Michael Meyerson of the University of Baltimore, the book is deeply influenced by the German, horizontal view: Meyerson begins by observing that, when he began law school, his torts teacher sneered to his class that if any of them “were any good in math, you’d all be in medical school,” and goes on to observe that the concept that mathematics “can be relevant to the study of law seems foreign to many modern legal minds”—presumably, due to the German influence. Meyerson writes his book, then, as an argument against the German horizontal concept—and hence, implicitly, in favor of the Platonic, Greek one. Yet Meyerson’s work is subtly damaged by contact with the German view: it is not as good a treatment of the first presidential veto as another depiction of that same event—one written long before the German distinction came to be dominant in the United States.

That account was written by political scientist Edward James of the University of Chicago, and is entitled The First Apportionment of Federal Representatives in the United States: A Study in American Politics. Published in 1896, or more than a century before Meyerson’s account, it is nevertheless wholly superior: in the first place because of its level of detail, but in the second because—despite being composed in what might appear to contemporary readers a wholly benighted time—it’s actually far more sophisticated than Meyerson on precisely the subject the unwary might suppose him to be weakest on. But before taking up that matter, it might be best to explain just what the first presidential veto was about.

George Washington issued only two vetoes during his two terms as president of the United States, which isn’t a record low—several presidents have issued zero vetoes, including George W. Bush in his first term. But two is a pretty low number: the all-time record holder, Franklin Roosevelt, issued 635 vetoes over his twelve years in office, and several others have issued more than a hundred. Yet while Washington’s second veto, concerning the War Department, appears fairly inconsequential today, his first veto has had repercussions that still echo in the United States. That’s because it concerned something of tremendous political importance to all Americans even now: the architecture of the national legislature, Congress. But it also (in a fashion that may explain just why Washington’s veto does not receive the attention it might) concerned that basic mathematical operation: division.

The structure of the Congress is detailed in Article One of the U.S. Constitution, whose first section vests the legislative power of the national government in Congress and then divides that Congress into two houses, the Senate and the House of Representatives. Section Two of Article One describes the House of Representatives, and Clause Three of Section Two describes, among other things, just how members of the House should be distributed around the nation: the members should, the clause says, “not exceed one for every thirty Thousand” inhabitants. But it also says that “each state shall have at Least one Representative”—and that causes all the trouble.

“At the heart of the dispute,” as Meyerson remarks, is a necessarily small matter: “fractions.” Or, as James puts it in what I think of as his admirably direct fashion:

There will always be remainders after dividing the population of the state by the number of people entitled to a representative, and so long as this is true, an exact division on numerical basis is impossible, if state lines must be observed in the process of apportionment.

It isn’t possible, in other words, to have one-sixth of a congressman (no matter what we might think of her cognitive abilities), nor is it likely that state populations will be an easily divisible number. If it were possible to ignore state lines it would also be possible to divide up the country by population readily: as James remarks, without having to take into account state boundaries the job would be “a mere matter of arithmetic.” But because state boundaries have to be taken into account, it isn’t.

The original bill—the one that Washington vetoed—tackled this problem in two steps. First, it took the nation’s population, which the 1790 Census revealed to be (as of Census Day, 2 August 1790) 3,929,214, and divided it by 33,000 (a ratio that does not exceed one representative per 30,000), which gives a quotient just shy of 120 (119.067090909, to be precise). So that was to be the total number of seats in the House of Representatives.

The second step then was to distribute them, which Congress solved by giving—according to “The Washington Papers” at the University of Virginia—“an additional member to the eight states with the largest fraction left over after dividing.” But doing so meant that, effectively, some states’ populations were being divided by 30,000 while others were being divided by some other number: as James describes, while Congress determined the total number of congressmen by dividing the nation’s total population by 33,000, when it came time to determine which states got those congressmen the legislature used a different divisor. The bill applied a 30,000 ratio to “Rhode Island, New York, Pennsylvania, Maryland, Virginia, Kentucky and Georgia,” while applying “one of 27,770 to the other eight states.” Hence, as Washington would complain in his note to Congress explaining his veto, there was “no one proportion or divisor”—a fact that Edmund Randolph, Washington’s Attorney General (and, significantly, as we’ll see, a Virginian), would say was “repugnant to the spirit of the constitution.” That opinion Washington’s Secretary of State, Thomas Jefferson (also a Virginian), shared.

Because the original bill used different divisors, Jefferson argued that it did not contain “any principle at all”—and hence would allow future Congresses to manipulate census results for political purposes “according to any … crotchet which ingenuity may invent.” Jefferson thought, instead, that every state’s population ought to be divided by the same number: a “common divisor.” On the one hand, of course, that appears perfectly fair: using a single divisor gave the appearance of transparency and prevented the kinds of manipulations Jefferson envisioned. But it did not prevent what is arguably another manipulation: under Jefferson’s plan, which had largely the same results as the original plan, two seats were taken away from Delaware and New Hampshire and given to Pennsylvania—and Virginia.
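The difference between the two schemes is easy to make concrete. The sketch below uses deliberately hypothetical populations (not the 1790 census figures) and ignores the constitutional floor of one representative per state: `largest_remainder` mimics the vetoed bill’s two-step procedure, while `jefferson` implements the common-divisor method. Note how Jefferson’s method, by discarding every fraction, funnels seats toward the biggest state.

```python
from math import floor

# Hypothetical populations -- illustration only, not the 1790 census figures.
pops = {"A": 1_000_000, "B": 80_000, "C": 70_000, "D": 60_000}
HOUSE = 12

def largest_remainder(pops: dict, house: int) -> dict:
    """The vetoed bill's scheme: divide each state by a common ratio, keep the
    whole numbers, then hand the leftover seats to the largest fractions."""
    ratio = sum(pops.values()) / house
    quotas = {s: p / ratio for s, p in pops.items()}
    seats = {s: floor(q) for s, q in quotas.items()}
    spare = house - sum(seats.values())
    for s in sorted(quotas, key=lambda s: quotas[s] - seats[s], reverse=True)[:spare]:
        seats[s] += 1
    return seats

def jefferson(pops: dict, house: int) -> dict:
    """Jefferson's 'common divisor': shrink one shared divisor until the
    whole-number quotients alone fill the house; every fraction is discarded."""
    divisor = sum(pops.values()) / house
    while sum(floor(p / divisor) for p in pops.values()) < house:
        divisor -= 1          # crude linear search; fine for a sketch
    return {s: floor(p / divisor) for s, p in pops.items()}

print(largest_remainder(pops, HOUSE))  # {'A': 10, 'B': 1, 'C': 1, 'D': 0}
print(jefferson(pops, HOUSE))          # {'A': 12, 'B': 0, 'C': 0, 'D': 0}
```

The example exaggerates for clarity, but the direction of the bias is the point: lowering a common divisor until the floors sum to the house size systematically rewards large states, which is just the “extraordinarily effective machine for allocating extra seats to large states” that Meyerson describes.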

Did I mention that Jefferson (and Randolph and Washington) was a Virginian? All three were, and at the time Virginia was, as Meyerson to his credit points out, “the largest state in the Union” by population. Yet while Meyerson does correctly note “that the Jefferson plan is an extraordinarily effective machine for allocating extra seats to large states,” he fails to notice something else about Virginia—something that James does notice (as we shall see). Virginia in the 1790s was not just the most populous state, but also a state with a very large, very wealthy, and very particular local industry.

That industry was, of course, slavery, and as James wrote (need I remind you) in 1896, it did not escape sharp people at the time of Washington’s veto that, in the first place, “the vote for and against the bill was perfectly geographical, a Northern against a Southern vote,” and secondly that Jefferson’s plan had the effect of “diminish[ing] the fractions in the Northern and Eastern states and increase them in the Southern”—a pattern that implied to some that “the real reason for the adoption” of Jefferson’s plan “was not that it secured a greater degree of fairness in the distribution, but that it secured for the controlling element in the Senate”—i.e., the slaveowners—“an additional power.” “It is noticeable,” James drily remarks, “that Virginia had been picked out especially as a state which profited” by Jefferson’s plan, and that “it was […] Virginians who persuaded the President” to veto the original bill. In other words, it’s James, in 1896, who is capable of discussing the political effects of the mathematics involved in terms of race—not Meyerson, despite the fact that the law professor (because he graduated from high school in 1976) had the benefit of, among other advantages, having witnessed at least the tail end of the American civil rights movement.

All that said, I don’t know just why, of course, Meyerson feels it possible to ignore the relation between George Washington’s first, and nearly only, veto and slavery: he might for instance argue that his focus is on the relation between mathematics and politics, and that bringing race into the discussion would muddy his argument. But that’s precisely the point, isn’t it? Meyerson’s reason for excluding slavery from his discussion of Washington’s first veto is, I suspect at any rate, driven precisely by his sense that race is a matter of Geisteswissenschaften.

After all, what else could it be? As Walter Benn Michaels of the University of Illinois at Chicago has put the point, despite the fact that “we don’t any longer believe in race as a biological entity, we still treat people as if they belonged to races”—which means that we must (still) think that race exists somehow. And since the biologists assure us that there is no way—biologically speaking—to link people from various parts of, say, Africa more closely to one another than to people from Asia or Europe (or as Michaels says, “there is no biological fact of the matter about what race you belong to”), we must thus be treating race as a “social” or “cultural” fact rather than a natural one—which of course implies that we must think there is (still) a distinction to be made between the “natural sciences” and the “human sciences.” Hence, Meyerson excludes race from his analysis of Washington’s first veto because he (still) thinks of race as part of the “human sciences”: even Meyerson, it seems, cannot escape the gravity well of the German concept. Yet, since there isn’t any such thing as race, that necessarily raises the question of just why we think that there are two kinds of “science.” Perhaps there is little to puzzle over about just why some Americans might like the idea of race, but one might think that it is something of a mystery just why soi-disant “intellectuals” like that idea.

Or maybe not.

Don Thumb

Then there was the educated Texan from Texas who looked like someone in Technicolor and felt, patriotically, that people of means—decent folk—should be given more votes than drifters, whores, criminals, degenerates, atheists, and indecent folk—people without means.
Joseph Heller. Catch-22. (1961).


“Odd arrangements and funny solutions,” the famed biologist Stephen Jay Gould once wrote about the panda’s thumb, “are the proof of evolution—paths that a sensible God would never tread but that a natural process, constrained by history, follows perforce.” The panda’s thumb, that is, is not really a thumb: it is an adaptation of another bone (the radial sesamoid) in the animal’s paw; Gould’s point is that the bamboo-eater’s thumb is not “a beautiful machine,” i.e. not the work of “an ideal engineer.” Hence, it must be the product of an historical process—a thought that occurred to me once again when I was asked recently by one of my readers (I have some!) whether it’s really true, as law professor Paul Finkelman has suggested for decades in law review articles like “The Proslavery Origins of the Electoral College,” that the “connection between slavery and the [electoral] college was deliberate.” One way to answer the question, of course, is to pore through (as Finkelman has very admirably done) the records of the Constitutional Convention of 1787: the notes of James Madison, for example, or the very complete documents collected by Yale historian Max Farrand at the beginning of the twentieth century. Another way, however, is to do as Gould suggests, and think about the “fit” between the design of an instrument and the purpose it is meant to achieve. Or in other words, to ask why the Law of Large Numbers suggests Donald Trump is like the 1984 Kansas City Royals.

The 1984 Kansas City Royals, for those who aren’t aware, are well-known in baseball nerd circles for having won the American League West division despite being—as famous sabermetrician Bill James, founder of the application of statistical methods to baseball, once wrote—“the first team in baseball history to win a championship of any stripe while allowing more runs (684) than they scored (673).” “From the beginnings of major league baseball just after the civil war through 1958,” James observes, no team ever managed such a thing. Why? Well, it does seem readily apparent that scoring more runs than one’s opponent is a key component to winning baseball games, and winning baseball games is a key component to winning championships, so in that sense it ought to be obvious that there shouldn’t be many winning teams that failed to score more runs than their opponents. Yet on the other hand, it also seems possible to imagine a particular sort of baseball team winning a lot of one-run games, but occasionally giving up blow-out losses—and yet as James points out, no such team succeeded before 1959.

Even the “Hitless Wonders,” the 1906 Chicago White Sox, scored more runs than their opponents despite hitting (according to This Great Game: The Online Book of Baseball) “a grand total of seven home runs on the entire season” while simultaneously putting up the American League’s “worst batting average (.230).” The low-offense South Side team is seemingly made to order for the purposes of this discussion because they won the World Series that year (over the formidable Chicago Cubs)—yet even this seemingly-hapless team scored 570 runs to their opponents’ 460, according to Baseball Reference. (A phenomenon most attribute to the South Siders’ pitching and fielding: that is, although they didn’t score a lot of runs, they were really good at preventing their opponents from scoring a lot of runs.) Hence, even in the pre-Babe Ruth “dead ball” era, when baseball teams routinely employed “small ball” strategies designed to produce one-run wins as opposed to Ruth’s “big ball” attack, there weren’t any teams that won despite scoring fewer runs than their opponents.

After 1958, however, there were a few teams that approached that margin: the 1959 Dodgers, freshly moved to Los Angeles, scored only 705 runs to their opponents’ 670, while the 1961 Cincinnati Reds scored 710 to their opponents’ 653, and the 1964 St. Louis Cardinals scored 715 runs to their opponents’ 652. Each of these teams was different from most other major league teams: the ’59 Dodgers played in the Los Angeles Coliseum, a venue built for the 1932 Olympics, not baseball; its cavernous power alleys were where home runs went to die, while its enormous foul ball areas ended many at-bats that would have continued in other stadiums. (The Coliseum, that is, was a time machine to the “deadball” era.) The 1961 Reds had Frank Robinson and virtually no other offense until the Queen City’s nine was marginally upgraded through a midseason trade. Finally, the 1964 Cardinals team had, first, Bob Gibson (please direct yourself to the history of Bob Gibson’s career immediately if you are unfamiliar with him), and second, they played in the first year after major league baseball’s Rules Committee redefined the strike zone to be just slightly larger—a change that had the effect of dropping home run totals by ten percent and both batting average and runs scored by twelve percent. In The New Historical Baseball Abstract, Bill James calls the 1960s the “second deadball era”; the 1964 Cardinals did not score a lot of runs, but then neither did anyone else.

Each of these teams was composed of unlikely sets of pieces: the Coliseum was a weird place to play baseball, the Rules Committee was a small number of men who probably did not understand the effects of their decision, and Bob Gibson was Bob Gibson. And even then, these teams all managed to score more runs than their opponents, even if the margin was small. (By comparison, the all-time run differential record is held by Joe DiMaggio’s 1939 New York Yankees, who outscored their opponents by 411 runs: 967 to 556, a ratio that may stand until the end of time.) Furthermore, the 1960 Dodgers finished in fourth place, the 1962 Reds finished in third, and the 1965 Cards finished seventh: these were teams, in short, that had success for a single season, but didn’t follow up. Without going very deeply into the details, then, suffice it to say that run differential is—as Sean Forman noted in The New York Times in 2011—“a better predictor of future win-loss percentage than a team’s actual win-loss percentage.” Run differential is a way to “smooth out” the effects of chance in a fashion that the “lumpiness” of win-loss percentage doesn’t.
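One way to see what run differential “smooths out” is Bill James’s own well-known Pythagorean expectation, which estimates a team’s underlying winning percentage from runs scored and allowed alone. The sketch below applies it to the season totals quoted above; the exponent of 2 is James’s original choice, though later refinements exist.

```python
def pythagorean_win_pct(runs_scored: int, runs_allowed: int, exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: an estimate of a team's
    'true' winning percentage from run differential rather than from
    its actual win-loss record."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# 1984 Kansas City Royals: 673 scored, 684 allowed
royals = pythagorean_win_pct(673, 684)
print(f"1984 Royals expected win%:  {royals:.3f}")   # ~0.492, i.e. ~80 wins in 162 games

# 1939 New York Yankees: 967 scored, 556 allowed (the all-time differential record)
yankees = pythagorean_win_pct(967, 556)
print(f"1939 Yankees expected win%: {yankees:.3f}")  # ~0.752
```

By this estimate the 1984 Royals were a sub-.500 team (roughly 80 wins’ worth of run production over a 162-game season), which is the statistical sense in which they “shouldn’t” have won their division.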

That’s also, as it happens, just what the Law of Large Numbers does: first noted by mathematician Jacob Bernoulli in his Ars Conjectandi of 1713, that law holds that “the more … observations are taken into account, the less is the danger of straying from the goal.” It’s the principle that is the basis of the insurance industry: according to Caltech physicist Leonard Mlodinow, it’s the notion that while “[i]ndividual life spans—and lives—are unpredictable, when data are collected from groups and analyzed en masse, regular patterns emerge.” Or for that matter, the law is also why it’s very hard to go bankrupt—which Donald Trump, as it so happens, has—when running a casino: as Nassim Nicholas Taleb commented in The Black Swan: The Impact of the Highly Improbable, all it takes to run a successful casino is to refuse to allow “one gambler to make a massive bet,” and instead “have plenty of gamblers make series of bets of limited size.” More bets equals more “observations,” and the more observations, the more likely it is that all those bets will converge toward the expected result. In other words, one coin toss might be heads or might be tails—but the more times the coin is thrown, the closer the proportion of heads will tend to come to one-half.
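Bernoulli’s coin-toss point is easy to check with a quick simulation: the proportion of heads wanders widely over a handful of flips, then settles ever closer to one-half as observations accumulate. A minimal sketch:

```python
import random

def heads_proportion(n_flips: int, seed: int = 42) -> float:
    """Proportion of heads in n_flips tosses of a fair coin."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The more observations, the less "danger of straying from the goal" of 0.5.
for n in (10, 100, 10_000, 100_000):
    print(f"{n:>7} flips: proportion of heads = {heads_proportion(n):.4f}")
```

This is exactly the casino’s position: it cannot predict any single bet, but across enough bets of limited size the house’s take converges on its expected value.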

How this concerns Donald Trump is that, as has been noted, although the president-elect did win the election, he did not win more votes than the Democratic candidate, Hillary Clinton. (As of this writing, those totals stand at 62,391,335 votes for Clinton to Trump’s 61,125,956.) The reason that Clinton did not win the election is that American presidential elections are not won by collecting more votes in the wider electorate, but rather by winning in that peculiarly American institution, the Electoral College: an institution in which, as Will Hively presciently remarked in a Discover article in 1996, a “popular-vote loser in the big national contest can still win by scoring more points in the smaller electoral college.” However bizarre that sort of result actually is, according to some that’s just what makes the Electoral College worth keeping.

Hively was covering that story in 1996: his Discover story was about how, in the pages of the journal Public Choice that year, mathematician Alan Natapoff tried to argue that the “same logic that governs our electoral system … also applies to many sports”—for example, baseball’s World Series. In order “to become [World Series] champion,” Natapoff noticed, a “team must win the most games”—not score the most runs. In the 1960 World Series, the mathematician wrote, the New York Yankees “scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27”—but the Yankees lost game 7, and thus the series. “Runs must be grouped in a way that wins games,” Natapoff thought, “just as popular votes must be grouped in a way that wins states.” That is, the Electoral College forces candidates to “have broad appeal across the whole nation,” instead of playing “strongly on a single issue to isolated blocs of voters.” It’s a theory that might seem, on its face, to have a certain plausibility: by constructing the Electoral College, the delegates to the constitutional convention of 1787 prevented future candidates from winning by appealing to a single, but large, constituency.

Yet, recall Stephen Jay Gould’s remark about the panda’s thumb, which suggests that we can examine just how well a given object fulfills its purpose: in this case, Natapoff is arguing that, because the design of the World Series “fits” the purpose of identifying the best team in baseball, so too does the Electoral College “fit” the purpose of identifying the best presidential candidate. Natapoff’s argument concerning the Electoral College presumes, in other words, that the task of baseball’s playoff system is to identify the best team in baseball, and hence it ought to work for identifying the best president. But the Law of Large Numbers suggests that the first task of any process that purports to identify value is that it should eliminate, or at least significantly reduce, the effects of chance: whatever one thinks about the World Series, presumably presidents shouldn’t be the result of accident. And the World Series simply does not do that.

“That there is”—as Nate Silver and Dayn Perry wrote in their ESPN.com piece, “Why Don’t the A’s Win In October?” (collected in Jonah Keri and James Click’s Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong)—“a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” It’s a point that was argued as early in baseball’s history as 1904, when the New York Giants refused to split the gate receipts evenly with what they considered to be an upstart American League team (cf. “Striking Out,” https://djlane.wordpress.com/2016/07/31/striking-out/). As Caltech physicist Leonard Mlodinow has observed, if the World Series were designed—by an “ideal engineer,” say—to make sure that one team was the better team, it would have to be 23 games long if one team were significantly better than the other, and 269 games long if the two teams were evenly matched—that is, nearly as long as two full seasons. In fact, it may even be argued that baseball, by increasingly relying on a playoff system instead of the regular season standings, is increasing, not decreasing, the role of chance in the outcome of its championship process: whereas prior to 1969, the two teams meeting in the World Series were the victors of a paradigmatic Law of Large Numbers system—the regular season—now many more teams enter the playoffs, and do so by multiple routes. Chance is playing an increasing role in determining baseball’s champions: in James’ list of sixteen championship-winning teams that had a run differential of less than 1.100:1, all of the teams, except the ones I have already mentioned, are from 1969 or after. Hence, from a mathematical perspective the World Series cannot seriously be argued to eliminate, or even effectively reduce, the element of chance—from which it can be reasoned, as Gould says about the panda’s thumb, that the purpose of the World Series is not to identify the best baseball team.
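Mlodinow’s claim about series length can be illustrated with the binomial distribution: the sketch below computes the chance that the better team wins a best-of-n series. The assumption that the better team wins any single game 55 percent of the time is mine, chosen for illustration; it is not a figure from Mlodinow.

```python
from math import comb

def series_win_prob(p: float, n_games: int) -> float:
    """Probability that a team which wins each game with probability p
    wins a majority of a best-of-n series (n odd)."""
    needed = n_games // 2 + 1
    return sum(comb(n_games, k) * p**k * (1 - p)**(n_games - k)
               for k in range(needed, n_games + 1))

# A 55% team wins a best-of-7 series only about 61% of the time...
print(f"best-of-7:   {series_win_prob(0.55, 7):.3f}")    # ~0.608
# ...so stretching toward Mlodinow's 269 games is what squeezes chance out:
print(f"best-of-269: {series_win_prob(0.55, 269):.3f}")
```

A best-of-7 series, in other words, still hands the championship to the weaker team roughly two times in five; only as the number of games grows toward the hundreds does the Law of Large Numbers push the outcome toward the expected result.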

Natapoff’s argument, in other words, has things exactly backwards: rather than showing just how rational the Electoral College is, the comparison to baseball demonstrates just how irrational it is—how vulnerable it is to chance. In the light of Gould’s argument about the panda’s thumb, which suggests that a lack of “fit” between the optimal solution (the human thumb) to a problem and the actual solution (the panda’s thumb) implies the presence of “history,” that would then intimate that the Electoral College is either the result of a lack of understanding of the mathematics of chance with regard to elections—or that the American system for electing presidents was not designed for the purpose it purports to serve. As I will demonstrate, despite the rudimentary development of the mathematics of probability at the time, at least a few—and these some of the most important—of the delegates to the Philadelphia convention in 1787 were aware of those mathematical realities. That fact suggests, I would say, that Paul Finkelman’s arguments concerning the purpose of the Electoral College are worth much more attention than they have heretofore received: Finkelman may or may not be correct that the purpose of the Electoral College was to support slavery—but what is indisputable is that it was not designed for the purpose of eliminating chance in the election of American presidents.

Consider, for example, that although he was not present at the meeting in Philadelphia, Thomas Jefferson possessed not only a number of works on the then-nascent study of probability, but particularly a copy of the very first textbook to expound on Bernoulli’s notion of the Law of Large Numbers: 1718’s The Doctrine of Chances, or, A Method of Calculating the Probability of Events in Play, by Abraham de Moivre. Jefferson also had social and intellectual connections to the noted French mathematician, the Marquis de Condorcet—a man who, according to Iain McLean of the University of Warwick and Arnold Urken of the Stevens Institute of Technology, applied “techniques found in Jacob Bernoulli’s Ars Conjectandi” to “the logical relationship between voting procedures and collective outcomes.” Jefferson in turn (McLean and Urken inform us) “sent [James] Madison some of Condorcet’s political pamphlets in 1788-9”—a connection that would only have reaffirmed one already established by the Italian Philip Mazzei, who sent Madison a copy of some of Condorcet’s work in 1786: “so that it was, or may have been, on Madison’s desk while he was writing the Federalist Papers.” And while none of that implies that Madison knew of the marquis prior to coming to Philadelphia in 1787, even before meeting Jefferson—who came to France to be the American minister—the marquis had for years been a close friend of another man who would become a delegate to the Philadelphia meeting: Benjamin Franklin. Although not all of the convention attendees, in short, may have been aware of the relationship between probability and elections, at least some were—and arguably, they were the most intellectually formidable ones, the men most likely to notice that the design of the Electoral College is in direct conflict with the Law of Large Numbers.

In particular, they would have been aware of the marquis’ most famous contribution to social thought: Condorcet’s “Jury Theorem,” in which—as Norman Schofield once observed in the pages of Social Choice and Welfare—the Frenchman proved that, assuming “that the ‘typical’ voter has a better than even chance of choosing the ‘correct’ outcome … the electorate would, using the majority rule, do better than an average voter.” In fact, Condorcet demonstrated mathematically—using Bernoulli’s methods in a book entitled Essay on the Application of Analysis to the Probability of Majority Decisions (significantly, published in 1785, two years before the Philadelphia meeting)—that adding more voters made a correct choice more likely, just as (according to the Law of Large Numbers) adding more games makes it more likely that the eventual World Series winner is the better team. Franklin at least, then, and perhaps Madison next most likely, could not but have been aware of the possible mathematical dangers an Electoral College could create: they must have known that the least chancy way of selecting a leader—that is, the product of the design of an infallible engineer—would be a direct popular vote. And while it cannot be conclusively demonstrated that these men were thinking specifically of Condorcet’s theories at Philadelphia, it is certainly more than suggestive that both Franklin and Madison thought that a direct popular vote was the best way to elect a president.
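Condorcet’s theorem can be reproduced with the same binomial arithmetic Bernoulli pioneered: if each voter is independently correct with probability greater than one-half, the probability that a simple majority is correct rises toward certainty as the electorate grows. A sketch, computing the binomial sum in log space so large electorates don’t overflow floating point (the 51 percent accuracy figure is purely illustrative):

```python
from math import exp, lgamma, log

def majority_correct_prob(p: float, n_voters: int) -> float:
    """Condorcet's Jury Theorem: probability that a majority of n_voters
    (n odd) chooses correctly, when each voter is independently correct
    with probability p. Each binomial term is evaluated via lgamma in
    log space to avoid overflow for large electorates."""
    total = 0.0
    for k in range(n_voters // 2 + 1, n_voters + 1):
        log_term = (lgamma(n_voters + 1) - lgamma(k + 1) - lgamma(n_voters - k + 1)
                    + k * log(p) + (n_voters - k) * log(1 - p))
        total += exp(log_term)
    return total

# Even barely-competent voters (51% accurate) become a reliable electorate:
# the probability rises from 0.510 for one voter toward certainty.
for n in (1, 101, 1_001, 10_001):
    print(f"{n:>6} voters: P(majority correct) = {majority_correct_prob(0.51, n):.3f}")
```

The corollary the essay draws is the important one: any mechanism that groups voters into a smaller number of blocs reduces the effective number of independent “observations,” and so weakens the theorem’s guarantee.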

When James Madison came to the floor of Independence Hall to speak to the convention about the election of presidents, for instance, he insisted that “popular election was better” than an Electoral College, as David O. Stewart writes in his The Summer of 1787: The Men Who Invented the Constitution. Meanwhile, it was James Wilson of Philadelphia—so close to Franklin, historian Lawrence Goldstone reports, that the infirm Franklin chose Wilson to read his addresses to the convention—who originally proposed direct popular election of the president: “Experience,” the Scottish-born Philadelphian said, “shewed [sic] that an election of the first magistrate by the people at large, was both a convenient & successful mode.” In fact, as William Ewald of the University of Pennsylvania has pointed out, “Wilson almost alone among the delegates advocated not only the popular election of the President, but the direct popular election of the Senate, and indeed a consistent application of the principle of ‘one man, one vote.’” (Wilson’s positions were far ahead of their time: in the case of the Senate, his proposal would not be realized until the passage of the Seventeenth Amendment in 1913, and his stance in favor of the principle of “one man, one vote” would not be enunciated as part of American law until the Reynolds v. Sims line of cases decided by the Earl Warren-led U.S. Supreme Court in the early 1960s.) To Wilson, the “majority of people wherever found” should govern “in all questions”—a statement that is virtually identical to Condorcet’s mathematically-influenced argument.

What these men thought, in other words, was that an electoral system designed to choose the best leader of a nation would proceed on the basis of a direct national popular vote: some of them, particularly Madison, may even have been aware of the mathematical reasons for supposing that a direct national popular vote is how an American presidential election would be designed if it were the product of what Stephen Jay Gould calls an “ideal engineer.” Just as an ideal (but nonexistent) World Series would be at least 23, and possibly as long as 269, games—in order to rule out chance—the ideal election to the presidency would include as many eligible voters as possible: the more voters, Condorcet would say, the more likely those voters would be to get it right. Yet just as with the actual, as opposed to ideal, World Series, there is a mismatch between the Electoral College’s proclaimed purpose and its actual purpose: a mismatch that suggests researchers ought to look for the traces of history within it.

Hence, although it’s possible to investigate Paul Finkelman’s claims regarding the origins of the Electoral College by, say, trawling through the volumes of the notes taken at the Constitutional Convention, it’s also possible simply to think through the structure of the Constitution itself in the same fashion that Stephen Jay Gould thinks about, say, the structure of frog skeletons: in terms of their relation to the purpose they serve. In this case, there is a kind of mathematical standard to which the Electoral College can be compared: a comparison that doesn’t necessarily imply that the Constitution was created simply and only to protect slavery, as Finkelman says—but does suggest that Finkelman is right to think that there is something in need of explanation. Contra Natapoff, the similarity between the Electoral College and the World Series does not suggest that the American way of electing a head of state is designed to produce the best possible leader, but instead that—like the World Series—it was designed with some other goal in mind. The Electoral College may or may not be the creation of an ideal craftsman, but it certainly isn’t a “beautiful machine”; after electing the political version of the 1984 Kansas City Royals—who, by the way, were swept by Detroit in the first round—to the highest office in the land, maybe the American people should stop treating it that way.

I Think I’m Gonna Be Sad

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

I know no safe depository of the ultimate powers of the society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take control from them, but to inform their discretion.
—Thomas Jefferson. “Letter to William Charles Jarvis.” 28 September, 1820


When the Beatles first came to America, in February of 1964—Michael Tomasky noted recently for The Daily Beast—they rode from their gig at Ed Sullivan’s show in New York City to their first American concert in Washington, D.C. by train, arriving two hours and fifteen minutes after leaving Manhattan. It’s a seemingly trivial detail—until it’s pointed out, as Tomasky realized, that anyone trying that trip today would be lucky to do it in three hours. American infrastructure, in short, is not what it was: as the American Society of Civil Engineers wrote in its 2009 Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.” But what to do about it? “What’s needed,” wrote John Cassidy of The New Yorker recently, “is some way to protect essential infrastructure investments from the vicissitudes of congressional politics and the cyclical ups and downs of the economy.” He suggests, instead, “an independent, nonpartisan board” that could “carry out cost-benefit analyses of future capital-spending proposals.” This board, presumably, would be composed of professionals above the partisan fray, and thus capable of seeing to the long-term needs of the country. It all sounds really jake, and just the thing that the United States ought to do—except for the disappointing fact that the United States already has just such a board, and the existence of that “board” is the very reason why Americans don’t invest in infrastructure.

First though—has national spending on infrastructure declined, and is “politics” the reason for that decline? Many think so: “Despite the pressing infrastructure investment needs of the United States,” businessman Scott Thomasson wrote for the Council on Foreign Relations recently, “federal infrastructure policy is paralyzed by partisan wrangling over massive infrastructure bills that fail to move through Congress.” Those who take that line do have evidence, at least for the first proposition.

Take for instance the Highway Trust Fund, an account that provides federal money for investments in roads and bridges. In 2014, the Fund was in danger of “drying up,” as Rebecca Kaplan reported for CBS News at the time, mostly because the federal gas tax of 18.4 cents per gallon hasn’t been increased since 1993. Gradually, then, both the federal government and the states have, in relative terms, decreased spending on highways and other projects of that sort—so much so that people like Lawrence Summers, the former presidential economic advisor and president of Harvard University, say (as Summers did last year) that “the share of public investment [in infrastructure], adjusting for depreciation … is zero.” (That is, once depreciation is accounted for, new spending merely offsets the deterioration of what already exists.) So, while the testimony of the American Society of Civil Engineers might, to say the least, be biased—asking an engineer whether there ought to be more spending on engineering is like asking an ice cream man whether you need a sundae—there’s a good deal of evidence that the United States could stand more investment in the structures that support American life.

Yet, even if that’s so, is the relative decline in spending really the result of politics—rather than, say, a recognition that the United States simply doesn’t need the same sort of spending on highways and railroads that it once did? Maybe—because “the Internet,” or something—there simply isn’t the need for so much physical building any more. Still, aside from such spectacular examples as the Minneapolis Interstate 35 bridge collapse in 2007 or the failure of the levees in New Orleans during Hurricane Katrina in 2005, there’s evidence that the United States would be spending more money on infrastructure under a different political architecture.

Consider, for example, how the U.S. Senate “shot down … a measure to spend $50 billion on highway, rail, transit and airport improvements” in November of 2011, as The Washington Post’s Rosalind S. Helderman reported at the time. Although the measure was supported by 51 votes in favor to 49 votes against, it failed to pass—because, as Helderman wrote, under the rules of the Senate “the measure needed 60 votes to proceed to a full debate.” Passing bills in the Senate these days requires, it seems, more than majority support—which, near as I can make out, is just what is meant by “congressional gridlock.” What “gridlock” means is the inability of a majority to pass its programs—absent that inability, the United States would almost certainly be spending more money on infrastructure. At this point, then, the question can be asked: why should the American government be built in a fashion that allows a minority to hold the majority to ransom?

The answer, it seems, might be deflating for John Cassidy’s idea: when the American Constitution was written, it inscribed into its very foundation what has been called (by The Economist, among many, many others) the “dream of bipartisanship”—the notion that, somewhere, there exists a group of very wise men (and perhaps women?) who can, if they were merely handed the power, make all the world right again, and make whole that which is broken. In America, the name of that body is the United States Senate.

As every schoolchild knows, the Senate was originally designed as a body of “notables,” or “wise men”: as the Senate’s own website puts it, the Senate was originally designed to be an “independent body of responsible citizens.” Or, as James Madison wrote to another “Founding Father,” Edmund Randolph, justifying the institution, the Senate’s role was “first to protect the people against their rulers [and] secondly to protect the people against transient impressions into which they themselves might be led.” That last justification may be the source of the famous anecdote regarding the Senate, which involves George Washington saying to Thomas Jefferson that “we pour our legislation into the senatorial saucer to cool it.” While the anecdote itself only appeared nearly a century later, in 1872, still it captures something of what the point of the Senate has always been held to be: a body that would rise above petty politicking and concern itself with the national interest—just the thing that John Cassidy recommends for our current predicament.

This “dream of bipartisanship,” as it happens, is not just one held by the founding generation. It’s a dream that, journalist and gadfly Thomas Frank has said, “is a very typical way of thinking for the professional class” of today. As Frank amplified his remarks, “Washington is a city of professionals with advanced degrees,” and the thought of those professionals is “‘[w]e know what the problems are and we know what the answers are, and politics just get in the way.’” To members of this class, Frank says, “politics is this ugly thing that you don’t really need.” For such people, in other words, John Cassidy’s proposal concerning an “independent, nonpartisan board” that could make decisions regarding infrastructure in the interests of the nation as a whole, rather than from the perspective of this or that group, might seem entirely “natural”—as the only way out of the impasse created by “political gridlock.” Yet in reality—as numerous historians have documented—it’s in fact precisely the “dream of bipartisanship” that created the gridlock in the first place.

An examination of history in other words demonstrates that—far from being the disinterested, neutral body that would look deep into the future to examine the nation’s infrastructure needs—the Senate has actually functioned to discourage infrastructure spending. After John Quincy Adams was elected president in the contested election of 1824, for example, the new leader proposed a sweeping program of investment in roads and canals and bridges, but also a national university, subsidies for scientific research and learning, a national observatory, Western exploration, a naval academy, and a patent law to encourage invention. Yet, as Paul C. Nagel observes in his recent biography of the Massachusetts president, virtually none of Adams’ program was enacted: “All of Adams’ scientific and educational proposals were defeated, as were his efforts to enlarge the road and canal systems.” Which is true, so far as that goes. But Nagel’s somewhat bland remarks do not do justice to the matter of how Adams’ proposals were defeated.

After the election of 1824, which elected the 19th Congress, Adams’ party had a majority in the House of Representatives—one reason why Adams became president at all, because the chaotic election of 1824, split between three major candidates, was decided (as per the Constitution) by the House of Representatives. But while Adams’ faction had a majority in the House, they did not in the Senate, where Andrew Jackson’s pro-Southern faction held sway. Throughout the 19th Congress, the Jacksonian party controlled the votes of 25 Senators (in a Senate of 48 senators, two to a state) while Adams’ faction controlled, at the beginning of the Congress, 20. Given the structure of the U.S. Constitution, which requires agreement between the two houses of Congress as the national legislature before bills can become law, this meant that the Senate could—as it did—effectively veto any of the Adams’ party’s proposals: control of the Senate effectively meant control of the government itself. In short, a recipe for gridlock.

The point of the history lesson regarding the 19th Congress is that, far from being “above” politics as it was advertised to be in the pages of The Federalist Papers and other, more recent, accounts of the U.S. Constitution, the U.S. Senate proved, in the event, hardly more neutral than the House of Representatives—or even the average city council. Instead of considering the matter of investment in the future on its own terms, historians have argued, senators thought about Adams’ proposals in terms of how they would affect a matter seemingly remote from the business of building bridges or canals. Hence, although senators like John Tyler of Virginia, for example—who would later be elected president himself—opposed the bills Adams proposed “that mandated federal spending for improving roads and bridges and other infrastructure” on the grounds that such bills “were federal intrusions on the states” (as Roger Matuz put it in his The Presidents’ Fact Book), many today argue that their motives were not so high-minded. In fact, they were as venal as any motive could be.

Many of Adams’ opponents, that is—as William Lee Miller of the University of Virginia wrote in his Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress—thought that the “‘National’ program that [Adams] proposed would have enlarged federal powers in a way that might one day threaten slavery.” And, as Miller also remarks, the “‘strict construction’ of the Constitution and states’ rights that [Adams’] opponents insisted upon” were, “in addition to whatever other foundations in sentiment and philosophy they had, barriers of protection against interference with slavery.” In short—as historian Harold M. Hyman remarked in his magisterial A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution—while the “constitutional notion that tight limits existed on what government could do was a runaway favorite” at the time, these seemingly resounding defenses of limited government were actually motivated by a less-than-savory interest: “statesmen of the Old South,” Hyman wrote, found that such doctrines of constitutional limits were “a mighty fortress behind which to shelter slavery.” Senators, in other words, did not consider whether spending money on a national university would be a worthwhile investment for its own sake; instead, they worried about the effect such an expenditure would have on slavery.

Now, it could still reasonably be objected at this point—and doubtless will be—that the 19th Congress is, in political terms, about as relevant to today’s politics as the Triassic: the debates of a few dozen, usually elderly, white men nearly two centuries ago have been rendered impotent by the passage of time. “This time, it’s different,” such arguments could, and probably will, say. Yet, at a different point in American history, it was well understood that the creation of “blue-ribbon” bodies like the Senate was in fact simply a means of elite control.

As Alice Sturgis, of Stanford University, wrote in the third edition of her The Standard Code of Parliamentary Procedure (now in its fourth edition, after decades in print, and still the paragon of the field), while some “parliamentary writers have mistakenly assumed that the higher the vote required to take an action, the greater the protection of the members,” in reality “the opposite is true.” “If a two-thirds vote is required to pass a proposal and sixty-five members vote for the proposal and thirty-five members vote against it,” Sturgis went on to write, “the thirty-five members make the decision”—which makes for “minority, not majority, rule.” In other words, even if many circumstances of American life have changed since 1825, it remains the case that the American government is largely structured in a fashion that solidifies the ability of a minority—like, say, oligarchical slaveowners—to control the government. And while slavery was abolished by the Civil War, a minority can still block things like infrastructure spending.
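Sturgis’s point is, at bottom, arithmetic, and can be checked directly. Here is a minimal Python sketch (the `passes` function and its names are my own illustration, not anything from her text):

```python
def passes(yes: int, no: int, threshold: float) -> bool:
    """True if the 'yes' side meets the required fraction of the votes cast."""
    return yes >= threshold * (yes + no)

# Sturgis's example: sixty-five members for, thirty-five against.
print(passes(65, 35, 1 / 2))  # True under simple-majority rule
print(passes(65, 35, 2 / 3))  # False under a two-thirds rule: the 35 decide
```

The same sixty-five votes that carry the day under majority rule lose under a two-thirds rule; raising the threshold hands the decision to the smaller side.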

Hence, since infrastructure spending is—nearly by definition—for the benefit of every American, it’s difficult to see how making infrastructure spending less democratic, as Cassidy wishes, would make it easier to spend money on infrastructure. We already have a system that’s not very democratic—arguably, that’s the reason we aren’t spending money on infrastructure, not (as pundits like Cassidy might have it) because “Washington” has “gotten too political.” The problem with American spending on infrastructure, in sum, is not that it is political. It is precisely the opposite: it isn’t political enough. That people like John Cassidy—who, by the way, is a transplanted former subject of the Queen of England—think the contrary is itself, I’d wager, reason enough to give him, and people like him, what the boys from Liverpool called a ticket to ride.

Several and a Single Place


What’s the matter,
That in these several places of the city
You cry against the noble senate?
Coriolanus 


The explanation, says labor lawyer Thomas Geoghegan, possesses amazing properties: he can, the one-time congressional candidate says, “use it to explain everything … because it seems to work on any issue.” But before trotting out that explanation, let me select an issue that might appear difficult to explain: gun control—more specifically, just why, as Christopher Ingraham of the Washington Post wrote in July, “it’s never the right time to discuss gun control.” “In recent years,” as Ingraham says, “politicians and commentators from across the political spectrum have responded to mass shootings with an invocation of the phrase ‘now is not the time,’ or a close variant.” That inability even to discuss gun control is tremendously depressing, at least insofar as you have sympathy for the needless waste of life that gun deaths represent—until you realize that we Americans have been here before. And that, just maybe, demonstrates that Thomas Geoghegan has a point.

Over a century and a half ago, Americans were facing another issue that, in the words of one commentator, “must not be discussed at all.” It was so grave an issue, in fact, that very many Americans found “fault with those who denounce it”—a position that this commentator found odd: “You say that you think [it] is wrong,” he observed, “but you denounce all attempts to restrain it.” That is a strange position, because who thinks something is wrong, yet is “not willing to deal with [it] as a wrong”? What other subject could be a wrong, but should not be called “wrong in politics because that is bringing morality into politics,” and conversely should not be called “wrong in the pulpit because that is bringing politics into religion”? To sum up, this commentator said, “there is no single place, according to you, where this wrong thing can properly be called wrong!”

The place where this was said was New Haven, Connecticut; the time, March of 1860; the speaker, a failed senatorial candidate now running for president for a brand-new political party. His name was Abraham Lincoln.

He was talking about slavery.

*          *          *

To many historians these days, much about American history can be explained by the fact that, as historian Leonard Richards of the University of Massachusetts put it in his 2000 book, The Slave Power: The Free North and Southern Domination, 1780-1860, so “long as there was an equal number of slave and free states”—which was more or less official American policy until the Civil War—“the South needed just one Northern vote to be an effective majority in the Senate.” That meant that controlling “the Senate, therefore, was child’s play for southern leaders,” and so “time and again a bill threatening the South [i.e., slavery above all else] made its way through the House only to be blocked in the Senate.” It’s a stunningly obvious point, at least in retrospect—at least for this reader—but I’d wager that few, if any, Americans have really thought through the consequences of this fact.

Geoghegan, for example, has noted that—as he put it in 1998’s The Secret Lives of Citizens: Pursuing the Promise of American Life—even today the Senate makes it exceedingly difficult to pass legislation: at present, he wrote, only “two-fifths of the Senate, or forty-one senators, can block any bill.” That is, it takes at least sixty senatorial votes to end the delaying tactic known as the “filibuster.” The filibuster, however, is not the only anti-majoritarian feature of the Senate, which is also equipped with such quaint customs as the “secret hold” and the quorum call, each of which can be used to delay a bill’s hearing—and so buy time to squelch potential legislation. Yet these radically disproportionate senatorial powers merely mask the basic inequality of representation at the heart of the Senate as an institution.

As political scientists Frances Lee and Bruce Oppenheimer point out in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, the Senate is, because it makes small states the equal of large ones, “the most malapportioned legislature in the democratic world.” As Geoghegan has put the point, “the Senate depart[s] too much from one person, one vote,” because (as of the late 1990s) “90 percent of the population base as represented in the Senate could vote yes, and the bill would still lose.” Although Geoghegan wrote that nearly two decades ago, that is still largely true today: in 2013, Dylan Matthews of The Washington Post observed that while the “smallest 20 states amount to 11.27 percent of the U.S. population,” their senators “can successfully filibuster [i.e., block] legislation.” Thus, although the Senate is merely one antidemocratic feature of the U.S. Constitution, it’s an especially egregious one that, by itself, largely prevented a serious discussion of slavery in the years before the Civil War—and today prevents the serious discussion of gun control.

The headline of John Bresnahan’s 2013 article in Politico about the response to the Sandy Hook massacre, for example, was “Gun control hits brick wall in Senate.” Bresnahan quoted Nevadan Harry Reid, the Senate Majority Leader at the time, as saying that “the overwhelming number of Senate Republicans—and that is a gross understatement—are ignoring the voices of 90 percent of the American people.” The final vote was 54-46: a majority of the Senate favored the controls, but because the pro-control senators lacked a supermajority, the measure failed. In short, the vote was a near-perfect illustration of how the Senate can kill a measure that 90 percent of Americans favor.
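The arithmetic of that 2013 vote is worth making concrete: under the Senate’s cloture rule, sixty votes are needed to end a filibuster, so forty-one senators suffice to block anything. A small sketch (the names here are my own, illustrative ones):

```python
SEATS = 100
CLOTURE_THRESHOLD = 60  # votes needed to end a filibuster

def minority_can_block(yes_votes: int) -> bool:
    """True if a bill's opponents, though outnumbered, can still stop it."""
    return yes_votes < CLOTURE_THRESHOLD

# Smallest coalition that can block: everyone beyond the cloture margin.
min_blockers = SEATS - CLOTURE_THRESHOLD + 1

print(minority_can_block(54))  # True: the 54-46 majority still loses
print(min_blockers)            # 41
```

Fifty-four votes out of a hundred is a clear majority by any ordinary standard; under the cloture rule it is six votes short of mattering.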

And you know what? Whatever you think about gun control as an issue, if 90 percent of Americans want something, and what prevents them is not just a silly rule—but the same rule that protected slavery—well then, as Abraham Lincoln might tell us, that’s a problem.

It’s a problem because far from the Senate being—as George Washington supposedly said to Thomas Jefferson—the saucer that cools off politics, it’s actually a pressure cooker that exacerbates issues rather than working them out. Imagine, say, had the South not had the Senate to protect its “peculiar institution” in the years leading to the Civil War: immigration to the North would have slowly turned the tide in Congress, which might have led to a series of small pieces of legislation that, eventually, would have abolished slavery.

Perhaps that would not have been a good thing: Ta-Nehisi Coates, of The Atlantic, has written that every time he thinks of the 600,000-plus deaths that occurred as a result of the Civil War, he feels “positively fucking giddy.” That may sound horrible to some, of course, but there is something to the notion of “redemptive violence” when it comes to that war; Coates, for instance, cites the contemporary remarks of Private Thomas Strother, United States Colored Troops, in the Christian Recorder, the 19th-century paper of the African Methodist Episcopal Church:

To suppose that slavery, the accursed thing, could be abolished peacefully and laid aside innocently, after having plundered cradles, separated husbands and wives, parents and children; and after having starved to death, worked to death, whipped to death, run to death, burned to death, lied to death, kicked and cuffed to death, and grieved to death; and worst of all, after having made prostitutes of a majority of the best women of a whole nation of people … would be the greatest ignorance under the sun.

“Were I not the descendant of slaves, if I did not owe the invention of my modern self to a bloody war,” Coates continues, “perhaps I’d write differently.” Maybe in some cosmic sense Coates is wrong, and violence is always wrong—but I don’t think I’m in a position to judge, particularly since I, as in part the descendant of Irish men and women in America, am aware that the Irish themselves may have codified that sort of “blood sacrifice theory” in the General Post Office of Dublin during Easter Week of 1916.

Whatever you think of that, there is certainly something to the idea that, because slaves were the single biggest asset in the entire United States in 1860, there was little chance the South would have agreed to end slavery without a fight. As historian Steven Deyle has noted in his Carry Me Back: The Domestic Slave Trade in American Life, the value of American slaves in 1860 was “equal to about seven times the total value of all currency in circulation in the country, three times the value of the entire livestock population, twelve times the value of the entire U.S. cotton crop and forty-eight times the total expenditure of the federal government”—certainly far more value than it takes to start a war. But had slavery not enjoyed, in effect, government protection during those antebellum years, it is questionable whether slaves would ever have become such valuable commodities in the first place.

Far from “cooling” things off, in other words, it’s entirely likely that the U.S. Senate, and other anti-majoritarian features of the U.S. Constitution, actually act to inflame controversy. By ensuring that one side does not need to come to the bargaining table, all such oddities merely postpone—they do not prevent—the day of reckoning. They build up fuel, ensuring that when the day finally arrives, it is all the more terrible. Or, to put it in the words of an old American song: these American constitutional idiosyncrasies merely trample “out the vintage where the grapes of wrath are stored.”

That truth, it seems, marches on.

Extra! Extra! White Man Wins Election!

Whenever you find yourself on the side of the majority, it is time to pause and reflect.
—Mark Twain

One of the more entertaining articles I’ve read recently appeared in the New York Times Magazine last October; written by Ruth Padawer and entitled “When Women Become Men at Wellesley,” it’s about how the newest “challenge,” as the terminology goes, facing American women’s colleges these days is the rise of students “born female who identified as men, some of whom had begun taking testosterone to change their bodies.” The beginning of the piece tells the story of “Timothy” Boatwright, who had been born a woman but had come to identify as a man, and who had decided to run for the post of “multicultural affairs coordinator” at the school, with the responsibility of “promoting a ‘culture of diversity’ among students and staff and faculty members.” After three “women of color” dropped out of the race for various unrelated reasons, Boatwright was the only candidate left—which meant that Wellesley (a women’s college, remember) would have as its next “diversity” official a white man. Yet according to Padawer this result wasn’t necessarily as ridiculous as it might seem: “After all,” the Times reporter said, “at Wellesley, masculine-of-center students are cultural minorities.” In the race to produce more and “better” minorities, then, Wellesley has produced a win for the ages—a result that, one might think, would cause reasonable people to stop and consider just what it is about American society that causes Americans constantly to redescribe themselves as one kind of “minority” or another. Although the easy answer is “because Americans are crazy,” the real answer might be that Americans are rationally responding to the incentives created by their political system: a system originally designed, as many historians have begun to realize, to protect a certain minority at the expense of the majority.

That, after all, is a constitutional truism, often repeated like a mantra by college students and other species of cretin: the United States Constitution, goes the zombie-like repetition, was designed to protect against the “tyranny of the majority”—even though that exact phrase was first used by John Adams in 1788, a year after the Constitutional Convention. It is, however, true that Number 10 of the Federalist Papers mentions “the superior force of an interested and overbearing majority”—yet what those who discuss the supposed threat of the majority never seem to mention is that, while the United States Constitution is indeed constructed with a wide, nearly bewildering, variety of protections for the “minority,” the minority being protected at the moment of the Constitution’s writing was not some vague and theoretical interest: the authors of the Constitution were not professors of political philosophy sitting around a seminar room. Instead, the United States Constitution was, as political scientist Michael Parenti has put it, “a practical response to immediate material conditions”—in other words, the product of political horse-trading that resulted in a document protecting a very particular, and real, minority; one with names and families and, more significantly, a certain sort of property.

That property, as historians today are increasingly recognizing, was slavery. It isn’t for nothing that, as historian William Lee Miller has observed, not only was it the case that “for fifty of [the nation’s] first sixty four [years], the nation’s president was a slaveholder,” but also that the “powerful office of the Speaker of the House was held by a slaveholder for twenty-eight of the nation’s first thirty-five years,” and that the president pro tem of the Senate—one of the more obscure, yet still powerful, federal offices—“was virtually always a slaveholder.” Both Chief Justices of the Supreme Court during the first six decades of the nineteenth century, John Marshall and Roger Taney, were slaveholders, as were very many federal judges and other, lesser, federal officeholders. As historian Garry Wills, author of Lincoln at Gettysburg among other volumes, has written, “the management of the government was disproportionately controlled by the South.” The reason why all this was so was, as it happens, very ably explained at the time by none other than … Abraham Lincoln.

What Lincoln knew was that there was a kind of “thumb on the scale” whenever Northerners like the two Adamses, John and John Quincy, were weighed in national elections—a not-so-mysterious force that denied those Northern, anti-slavery men second terms as president. Lincoln explained what that force was in the speech he gave at Peoria, Illinois, that signaled his return to politics in 1854. There, Lincoln observed that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

What Lincoln is talking about here is the notorious “Three-Fifths Compromise”: Article I, Section 2, Paragraph 3 of the United States Constitution. According to that proviso, slave states were entitled to representation in Congress according to the ratio of “three fifths of all other persons”—those “other persons” being, of course, Southern slaves. And what the future president—the first, it might be added, to be elected without the assistance of that ratio (a fact that would have, as I shall show, its own consequences)—was driving at was the effect this mathematical ratio had on the political landscape of the country.

As Lincoln remarked in the same Peoria speech, the Three-Fifths Compromise meant that “five slaves are counted as being equal to three whites,” which meant that, as a practical matter, “it is an absolute truth, without an exception, that there is no voter in any slave State, but who has more legal power in the government, than any voter in any free State.” To put it more plainly, Lincoln said that the three-fifths clause “in the aggregate, gives the slave States, in the present Congress, twenty additional representatives.” Since the Constitution conferred the same advantage in the Electoral College as in the Congress, the reason for results like, say, the Adamses’ lack of presidential staying power isn’t hard to discern.
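Lincoln’s Peoria figures make for a quick back-of-the-envelope computation. A sketch using the numbers quoted above (the variable names are mine, and the six House seats per state are the figure Lincoln gives):

```python
# Lincoln's figures: equal delegations despite unequal white populations.
maine_whites = 581_813
south_carolina_whites = 274_567
house_seats = 6  # each state held six House seats

# Representation per white inhabitant, and the ratio between the two states.
weight_ratio = (house_seats / south_carolina_whites) / (house_seats / maine_whites)
print(round(weight_ratio, 2))  # 2.12: each South Carolina voter counted
                               # for more than double a Maine voter
```

The seat counts cancel out of the ratio, which is just 581,813 / 274,567, confirming Lincoln’s “more than the double” and his remainder of 32,679.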

“One of those who particularly resented the role of the three-fifths clause in warping electoral college votes,” notes Miller, “was John Adams, who would probably have been reelected president over Thomas Jefferson in 1800 if the three-fifths ratio had not augmented the votes of the Southern states.” John Quincy himself took part in two national elections, those of 1824 and 1828, that were skewed by what was termed at the time the “federal ratio”—which is to say that the reason both Adamses were one-term presidents likely had rather more to do with the form of the American government than with the content of their character, despite the representations of many historians after the fact.

Adams himself was quite aware of the effect of the “federal ratio.” The Hartford Convention of 1814-15, a gathering of New England Federalists, had recommended ending the advantage of the Southern states within the Congress, and in 1843 John Quincy’s son Charles Francis Adams persuaded the Massachusetts legislature to pass a measure that John Quincy would himself introduce to the U.S. Congress: “a resolution proposing that the Constitution be amended to eliminate the three-fifths ratio,” as Miller has noted. There were three more such attempts in 1844, three years before Lincoln’s arrival in Congress, all of which were soundly defeated, as Miller observes, by totals “skewed by the feature the proposed amendment would abolish.” The three-fifths ratio was not simply a bête noire of the Adamses personally; all of New England was aware that the three-fifths ratio protected the interests of the South in the national government—one reason why, prior to the Civil War, “states’ rights” was often thought of as a Northern issue rather than a Southern one.

That the South itself recognized the advantages the United States Constitution gave it—specifically through that document’s protections of “minority,” in other words slaveowner, interests—can be seen in the reasons the South gave for starting the Civil War. South Carolina’s late-1860 declaration of secession, for example (the first such declaration), said outright that the state’s act of secession was provoked by the election of Abraham Lincoln—in other words, by the election of a presidential candidate who did not need the electoral votes of the South.

Hence, South Carolina’s declaration said that a “geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery.” The election had been enabled, the document went on to say, “by elevating to citizenship, persons who, by the supreme law of the land, are incapable of becoming citizens, and their votes have been used to inaugurate a new policy, hostile to the South.” Presumably, this is a veiled reference to the population gained by the Northern states over the course of the nineteenth century—a trend that had steadily weakened the advantage the South had enjoyed at the expense of the North when the Constitution was enacted, and that had only accelerated during the 1850s.

As one Northern newspaper observed in 1860, in response to the early figures being released by the United States Census Bureau at that time, the “difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” To Southerners the data had a different meaning: as Adam Goodheart noted in a piece for the New York Times’ series on the Civil War, Disunion, “the editor of the New Orleans Picayune noted that states like Michigan, Wisconsin, Iowa and Illinois would each be gaining multiple seats in Congress” while Southern states like Virginia, South Carolina and Tennessee would be losing seats. To the Southern slaveowners who would drive the road to secession during the winter of 1860, the fact that they were on the losing end of a demographic war could not have been far from mind.

Historian Leonard L. Richards of the University of Massachusetts, for example, has noted that when Alexis de Tocqueville traveled the American South in the early 1830s, he discovered that Southern leaders were “noticeably ‘irritated and alarmed’ by their declining influence in the House [of Representatives].” By the 1850s, those population trends were only accelerating: concerning the gains the Northern states were realizing through foreign immigration—presumably the subject of South Carolina’s complaint about persons “incapable of becoming citizens”—Richards cites Senator Stephen Adams of Mississippi, who “blamed the South’s plight”—that is, its declining population relative to the North—“on foreign immigration.” As Richards says, it was obvious to anyone paying attention that if “this trend continued, the North would in fifteen years have a two to one majority in the House and probably a similar majority in the Senate.” It is hard to believe that the most intelligent of Southern leaders were not cognizant of these elementary facts.

Their intellectual leaders, above all John Calhoun, had after all designed a political theory to justify the Southern, i.e. “minority,” dominance of the federal government. In Calhoun’s A Disquisition on Government, the South Carolinian Senator argued that a government “under the control of the numerical majority” would tend toward “oppression and abuse of power”—it was to correct this tendency, he writes, that the constitution of the United States made its different branches “the organs of the distinct interests or portions of the community; and to clothe each with a negative on the others.” It is, in other words, a fair description of the constitutional doctrine known as the “separation of powers,” a doctrine that Calhoun barely dresses up as something other than what it is: a brief for the protection of the right to own slaves. Every time, in other words, anyone utters the phrase “protecting minority rights” they are, wittingly or not, invoking the ideas of John Calhoun.

In any case, such a history could explain just why it is that Americans are so eager to describe themselves as a “minority,” of whatever kind. After all, it was the purpose of the American government initially to protect a particular minority, and so in political terms it makes sense to describe oneself as such in order to enjoy the protections that, initially built into the system, have become so endemic to American government: for example, the practice of racial gerrymandering, which has the perhaps-beneficial effect of protecting a particular minority—at the probable expense of the interests of the majority. Such a theory might perhaps also explain something else: just how it is, as professor Walter Benn Michaels of the University of Illinois at Chicago has remarked, that after “half a century of anti-racism and feminism, the U.S. today is a less equal society than was the racist, sexist society of Jim Crow.” Or, perhaps, how the election of—to use that favorite tool of American academics, quote marks to signal irony—a “white man” at a women’s college can, somehow, be a “victory” for whatever the American “left” is now. The real irony, of course, is that, in seeking to protect African-Americans and other minorities, that supposed left is merely reinforcing a system originally designed to protect slavery.