The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.

 

“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e., to prove what didn’t happen. But even in 1998 there was reason to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty charged, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but which have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prevent what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after drawing such districts there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that then grew steadily; since the 106th Congress (1999–2001), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total numbers of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making those surrounding districts virtually impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by more than 50 percent of the vote, while Mitt Romney won only eight districts by such a large percentage. Figures like these could seem overwhelmingly in favor of the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively wasting votes that could have made other districts competitive.

That’s why—despite the fact that he lost the popular vote—in 2012 Romney’s party won 226 of 435 Congressional districts, while Obama’s party won 209. In this past election, as I’ve mentioned in past posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive that these percentages always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, his answer merely raises the question: why should geography have anything to do with voting? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority-minority districts” imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

As with the Electoral College—under which a vote in Wyoming is worth much more than one in California—Silver’s position implies that minority votes will remain less valuable than other votes, because a vote in a “majority-minority” district has a lower probability of electing a congressperson who is part of a majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there isn’t any issue with majority-minority districts because they reflect the underlying proportions of the population—but what matters to constituents is whether whoever gets elected can deliver policies that benefit them.

Right here, in other words, we get to the heart of the dispute between the late Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the numbers of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts after all are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes are, like majority-minority districts, similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats are, the less capable their party becomes of accomplishing its goals. Winning individual races by huge margins might be satisfying to some, but no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures vote,” Chief Justice of the Supreme Court Earl Warren wrote in 1964; in that case, Wesberry v. Sanders, the Supreme Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. His opponents ought to reply by saying they’re interested in what people could be—and building a real road to get there.


Size Matters

That men would die was a matter of necessity; which men would die, though, was a matter of circumstance, and Yossarian was willing to be the victim of anything but circumstance.
Catch-22.
I do not pretend to understand the moral universe; the arc is a long one, my eye reaches but little ways; I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. And from what I see I am sure it bends towards justice.
Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.”

 

The Casino at Monte Carlo

 

 

Once, wrote the baseball statistician Bill James, there was “a time when Americans” were such “an honest, trusting people” that they actually had “an unhealthy faith in the validity of statistical evidence”–but by the time James wrote in 1985, things had gone so far the other way that “the intellectually lazy [had] adopted the position that so long as something was stated as a statistic it was probably false.” Today, in no small part because of James’ work, that is likely no longer as true as it once was, but nevertheless the news has not spread to many portions of academia: as University of Virginia historian Sophia Rosenfeld remarked in 2012, in many departments it’s still fairly common to hear it asserted—for example—that all “universal notions are actually forms of ideology,” and that “there is no such thing as universal common sense.” Usually such assertions are followed by a claim for their political utility—but in reality widespread ignorance of statistical effects is what allowed Donald Trump to be elected, because although the media spent much of the presidential campaign focused on questions like the size of Donald Trump’s … hands, the size that actually mattered in determining the election was a statistical concept called sample size.

First articulated by the mathematician Jacob Bernoulli in his 1713 book, Ars Conjectandi, sample size is the idea that “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” Admittedly, it might not appear like much of an observation: as Bernoulli himself acknowledged, even “the most stupid person, all by himself and without any preliminary instruction,” knows that “the more such observations are taken into account, the less is the danger of straying from the goal.” But Bernoulli’s remark is the very basis of science: as an article in the journal Nature put the point in 2013, “a study with low statistical power”—that is, one with few observations—“has a reduced chance of detecting a true effect.” Sample sizes need to be large enough to be able to eliminate chance as a possible factor.

If that isn’t known it’s possible to go seriously astray: consider an example drawn from the work of Israeli psychologists Amos Tversky (MacArthur “genius” grant winner) and (Nobel Prize-winning) Daniel Kahneman—a study “of two toys infants will prefer.” Let’s say that in the course of research our investigator finds that, of “the first five infants studied, four have shown a preference for the same toy.” To most psychologists, the two say, this would be enough for the researcher to conclude that she’s on to something—but in fact, the two write, a “quick computation” shows that “the probability of a result as extreme as the one obtained” being due simply to chance “is as high as 3/8.” The scientist might be inclined to think, in other words, that she has learned something—but in fact her result has a 37.5 percent chance of being due to nothing at all.
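
Tversky and Kahneman’s “3/8” can be checked directly, under the null assumption that each infant simply picks one of the two toys at random. The snippet below is a minimal sketch of that calculation (the function name and the 50/50 assumption are mine, not anything from their paper):

```python
from math import comb

def prob_extreme_split(n=5, k=4, p=0.5):
    """Chance that at least k of n infants end up preferring the same toy,
    if each infant independently picks one of two toys with probability p.
    For p = 0.5 the toys are symmetric, so we double the one-toy tail
    (no double counting, since both toys cannot each attract k > n/2 infants)."""
    one_toy_tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    return 2 * one_toy_tail

print(prob_extreme_split())  # 0.375, i.e. 3/8 -- the chance the "finding" is mere noise
```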

Yet when we turn from science to politics, what we find is that an American presidential election is like a study that draws grand conclusions from five babies. Instead of being one big sample—as a direct popular national election would be—presidential elections are broken up into fifty state-level elections: the Electoral College system. What that means is that American presidential elections maximize the role of chance, not minimize it.

The laws of statistics, in other words, predict that chance will play a large role in presidential elections—and as it happens, Tim Meko, Denise Lu and Lazaro Gamio reported for The Washington Post three days after the election that “Trump won the presidency with razor-thin margins in swing states.” “This election was effectively decided,” the trio went on to say, “by 107,000 people”—in an election in which more than 120 million votes were cast, that means the election was decided by less than a tenth of one percent of the total votes. Trump won Pennsylvania by less than 70,000 votes of nearly 6 million, Wisconsin by less than 30,000 of just less than three million, and finally Michigan by less than 11,000 out of 4.5 million: the first two by just more than one percent of the total vote each—and Michigan by a whopping .2 percent! Just to give an idea of how small these numbers are by comparison with the total vote cast: according to the Michigan Department of Transportation, it’s entirely possible that a thousand people in the state’s five largest counties were involved in car crashes—which isn’t even to mention the people who just decided to stay home because they couldn’t find a babysitter.
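
The percentages above are nothing more than division, but it can be useful to see them worked out. A quick sketch, using the rounded figures cited in the paragraph rather than official certified totals:

```python
# Rounded figures as cited above, not official certified vote totals.
margins = {
    "Pennsylvania": (70_000, 6_000_000),
    "Wisconsin":    (30_000, 3_000_000),
    "Michigan":     (11_000, 4_500_000),
}

for state, (margin, total) in margins.items():
    print(f"{state}: margin is {100 * margin / total:.2f}% of votes cast")

decisive, nationwide = 107_000, 120_000_000   # the Post's "107,000 people" vs. total votes cast
print(f"Decisive voters nationwide: {100 * decisive / nationwide:.3f}%")  # ~0.089%, under a tenth of a percent
```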

Trump owes his election, in short, to a system that is vulnerable to chance because it is constructed to turn a large sample (the total number of American voters) into small samples (the fifty states). Science tells us that small sample sizes increase the risk that random chance plays a role; American presidential elections use a smaller sample size than they could; and, like several other presidential elections, the 2016 election did not go as predicted. Donald Trump could, in other words, be called “His Accidency” with even greater justice than John Tyler—the first vice-president to be promoted due to the death of his boss in office—was. Yet why isn’t that point being made more publicly?

According to John Cassidy of The New Yorker, it’s because Americans haven’t “been schooled in how to think in probabilistic terms.” But just why that’s true—Cassidy is essentially making the same point Bill James did in 1985, though more delicately—is, I think, highly damaging to many of Clinton’s biggest fans: the answer is that they’ve made it that way. It’s the disciplines where many of Clinton’s most vocal supporters make their home, in other words, that are most directly opposed to the type of probabilistic thinking that’s required to see the flaws in the Electoral College system.

As Stanford literary scholar Franco Moretti once observed, the “United States is the country of close reading”: the disciplines dealing with matters of politics, history, and the law within the American system have, in fact, more or less been explicitly constructed to prevent importing knowledge of the laws of chance into them. Law schools, for example, use what’s called the “case method,” in which a single case is used to stand in for an entire body of law: a point indicated by the first textbook to use this method, Christopher Langdell’s A Selection of Cases on the Law of Contracts. Other disciplines, such as history, are similar: as Emory University’s Mark Bauerlein has written, many such disciplines depend for their very livelihood upon “affirming that an incisive reading of a single text or event is sufficient to illustrate a theoretical or historical generality.” In other words, it’s the very basis of the humanities to reject the concept of sample size.

What’s particularly disturbing about this point is that, as Joe Pinsker documented in The Atlantic last year, the humanities attract a wealthier student pool than other disciplines—which is to say that the humanities tend to be populated by students and faculty with a direct interest in maintaining obscurity around the interaction between the laws of chance and the Electoral College. That doesn’t mean that there’s a connection between the architecture of presidential elections and the fact that—as Geoffrey Harpham, former president and director of the National Humanities Center, has observed—“the modern concept of the humanities” (that is, as a set of disciplines distinct from the sciences) “is truly native only to the United States, where the term acquired a meaning and a peculiar cultural force that it does not have elsewhere.” But it does perhaps explain just why many in the national media have been silent regarding that design in the month after the election.

Still, as many in the humanities like to say, it is possible to think that the current American university and political structure is “socially constructed,” or in other words could be constructed differently. The American division between the sciences and the humanities is not the only way to organize knowledge: as the editors of the massive volumes of The Literary and Cultural Reception of Darwin in Europe pointed out in 2014, “one has to bear in mind that the opposition of natural sciences … and humanities … does not apply to the nineteenth century.” If that opposition, which we today find so omnipresent, did not exist then, it might not be necessary now. Hence, if the choice is between the American people getting a real say in the affairs of government (and there’s very good reason to think they currently don’t), and a bunch of rich yahoos spending their early twenties getting drunk, reading The Great Gatsby, and talking about their terrible childhoods … well, I know which side I’m on. But perhaps more significantly, although I would not expect it to happen tomorrow, still, given the laws of sample size and the prospect of eternity, I know how I’d bet.

Or, as another sharp operator who’d read his Bernoulli once put the point:

“The arc of the moral universe is long, but it bends towards justice.”

 

The “Hero” We Deserve

“He’s the hero Gotham deserves, but not the one it needs …”
The Dark Knight. (2008).

 

The election of Donald Trump, Peter Beinart argued the other day in The Atlantic, was precisely “the kind of democratic catastrophe that the Constitution, and the Electoral College in particular, were in part designed to prevent.” It’s a fairly common sentiment, it seems, in some parts of the liberal press: Bob Cesca, of Salon, argued back in October that “the shrieking, wild-eyed, uncorked flailing that’s taking place among supporters of Donald Trump, both online and off” made an “abundantly self-evident” case for “the establishment of the Electoral College as a bulwark against destabilizing figures with the charisma to easily manipulate [sic] low-information voters.” Such arguments often assume that their opponents are dewy-eyed idealists, their eyes clouded by Frank Capra movies: Cesca, for example, calls the view in favor of direct popular voting an argument for “popular whimsy.” In reality, however, it’s the supposedly liberal argument in favor of the Electoral College that’s based on a misperception: what people like Beinart or Cesca don’t see is that the Electoral College is not a “bulwark” for preventing the election of candidates like Donald Trump—but in fact a machine for producing them. They don’t see it because they do not understand how the Electoral College is built on a flawed knowledge of probability—an argument that, perhaps horrifically, suggests that the idea that powered Trump’s campaign, the thought that the American leadership class is dangerously out of touch with reality, is more or less right.

To see just how ignorant we all are concerning that knowledge, ask yourself this question (as Distinguished Research Scientist of the National Board of Medical Examiners Howard Wainer asked several years ago in the pages of American Scientist): which counties of the United States have the highest rates of kidney cancer? As it happens, Wainer noted, they “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, say, in view of the fact that rural areas tend to be freer of the pollution that afflicts the largest cities. But, Wainer continued, consider also that the American counties with the lowest rates of kidney cancer … “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, Wainer remarks, due to “the poverty of the rural lifestyle.” After all, people in rural counties very often don’t receive the best medical care, tend to eat worse, and tend to drink too much and use too much tobacco. But wait—one of these stories has to be wrong; they can’t both be right. Yet as Wainer goes on to write, they both are true: rural American counties have both the highest and the lowest incidences of kidney cancer. But how?

To solve the seeming mystery, consider a hypothetical example taken from Nobel Prize-winner Daniel Kahneman’s magisterial book, Thinking, Fast and Slow. “Imagine,” Kahneman says, “a large urn filled with marbles.” Some of these marbles are white, and some are red. Now imagine “two very patient marble counters” taking turns drawing from the urn: “Jack draws 4 marbles on each trial, Jill draws 7.” Every time one of them draws an unusual sample—that is, a sample of marbles that is either all red or all white—they record it. The question Kahneman then implicitly asks is: which marble counter will draw more all-white (or all-red) samples?

The answer is Jack—“by a factor of 8,” Kahneman notes: Jack is likely to draw a sample of only one color more than twelve percent of the time, while Jill is likely to draw such a sample less than two percent of the time. But it isn’t really necessary to know high-level mathematics to understand that because Jack is drawing fewer marbles at a time, it is more likely that he will draw all of one color or the other than Jill is. By drawing fewer marbles, Jack is more exposed to extreme events—just as it is more likely that, as Wainer has observed, a “county with, say, 100 inhabitants that has no cancer deaths would be in the lowest category,” while conversely if that same county “has one cancer death it would be among the highest.” Because there are fewer people in rural American counties than in urban ones, a rural county is far more likely to post an extreme rate of kidney cancer, either high or low, than an urban one—for the very same reason that Jack is more likely to draw a set of all-white or all-red marbles. The sample size is smaller—and the smaller the sample size, the more likely it is that the sample will be an outlier.
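
Kahneman’s “factor of 8” is easy to verify. Assuming the urn is half red and half white, and treating it as large enough that each marble drawn is effectively an independent 50/50 pick (both assumptions are mine, made for the sketch), the chance of an all-one-color draw of n marbles is just 2 × (1/2)^n:

```python
def prob_all_one_color(draw_size, p_red=0.5):
    """Chance a draw comes out all red or all white, treating every marble
    as an independent 50/50 pick (a good approximation for a large urn)."""
    return p_red**draw_size + (1 - p_red)**draw_size

jack = prob_all_one_color(4)    # 0.125    -> "more than twelve percent"
jill = prob_all_one_color(7)    # 0.015625 -> "less than two percent"
print(jack, jill, jack / jill)  # the ratio is exactly 8 -- Kahneman's "factor of 8"
```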

So far, of course, I might be said to be merely repeating something everyone already knows—maybe you anticipated the point about Jack and Jill and the rural counties, or maybe you just don’t see how any of this has any bearing beyond the lesson that scientists ought to be careful when they are designing their experiments. As many Americans think these days, perhaps you think that science is one thing, and politics is something else—maybe because Americans have been taught for several generations now, by people as diverse as conservative philosopher Leo Strauss and liberal biologist Stephen Jay Gould, that the humanities are one thing and the sciences are another. (Which Geoffrey Harpham, formerly the director of the National Humanities Center, might not find surprising: Harpham has claimed that “the modern concept of the humanities” —that is, as something distinct from the sciences—“is truly native only to the United States.”) But consider another of Wainer’s examples: one drawn from, as it happens, the world of education.

“In the late 1990s,” Wainer writes, “the Bill and Melinda Gates Foundation began supporting small schools on a broad-ranging, intensive, national basis.” Other foundations supporting the movement for smaller schools included, Wainer reported, the Annenberg Foundation, the Carnegie Corporation, George Soros’s Open Society Institute, and the Pew Charitable Trusts, as well as the U.S. Department of Education’s Smaller Learning Communities Program. These programs brought pressure—to the tune of 1.7 billion dollars—on many American school systems to break up their larger schools (a pressure that, incidentally, succeeded in cities like Los Angeles, New York, Chicago, and Seattle, among others). The reason the Gates Foundation and its helpers cited for pressuring America’s educators was that, as Wainer writes, surveys showed that “among high-performing schools, there is an unrepresentatively large proportion of smaller schools.” That is, when researchers looked at American schools, they found that the highest-achieving schools included a disproportionate number of small ones.

By now, you see where this is going. What all of these educational specialists didn’t consider—but Wainer’s subsequent research found, at least in Pennsylvania—was that small schools were also disproportionately represented among the lowest-achieving schools. The Gates Foundation (led, mind you, by Bill Gates) had simply failed to consider that of course small schools might be overrepresented among the best schools, simply because schools with smaller numbers of students are more likely to be extreme cases. (Something that, by the way, also may have consequences for that perennial goal of professional educators: the smaller class size.) Small schools tend to be represented at the extremes not for any particular reason, but just because that’s how math works.
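
The small-schools effect is easy to reproduce with invented numbers. In the sketch below every student, at every school, is drawn from exactly the same score distribution—no school is “really” better than any other—yet the smallest schools still crowd both the top and the bottom of the rankings. The school sizes and the score distribution are made up for illustration; nothing here comes from Wainer’s Pennsylvania data:

```python
import random

random.seed(42)

# Invented sizes: 200 "small" schools of 60 students, 200 "large" schools of 1,200 students.
schools = [("small", 60)] * 200 + [("large", 1200)] * 200

def school_average(n_students):
    # Every student's score comes from the same distribution (mean 500, sd 100),
    # so any difference between school averages is pure sampling noise.
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

results = sorted((school_average(n), kind) for kind, n in schools)

bottom = [kind for _, kind in results[:25]]
top = [kind for _, kind in results[-25:]]
print("small schools among the 25 lowest averages: ", bottom.count("small"))
print("small schools among the 25 highest averages:", top.count("small"))
# Both counts come out at or near 25: the extremes belong almost entirely to the small schools.
```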

The inherent humor of a group of educators (and Bill Gates) not understanding how to do basic mathematics is, admittedly, self-evident—and incidentally good reason not to take the testimony of “experts” at face value. But more significantly, it also demonstrates the very real problem here: if highly-educated people (along with college dropout Gates) cannot see the flaws in their own reasoning while discussing precisely the question of education, how much more vulnerable is everyone else to flaws in their thinking? To people like Bob Cesca or Peter Beinart (or David Frum; cf. “Noble Lie”), of course, the answer to this problem is to install more professionals, more experts, to protect us from our own ignorance: to erect, as Cesca urges, a “firewall[…] against ignorant populism.” (A wording that, one imagines, reflects Cesca’s mighty struggle to avoid the word “peasants.”) The difficulty with such reasoning, however, is that it ignores the fact that the Electoral College is an instance of the same sort of ignorance as that which bedeviled the Gates Foundation—or that you may have encountered in yourself when you considered the kidney cancer example above.

Just as rural American counties are more likely to have either lots of cases—or very few cases—of kidney cancer, so too are sparsely populated states more likely to vote in an extreme fashion, inconsistent with the rest of the country. For one, it’s a lot cheaper to convince the voters of Wyoming (the half a million or so of whom possess not only a congressman, but also two senators) than the voters of, say, Staten Island (who, despite being only slightly less in number than the inhabitants of Wyoming, have to share a single congressman with part of Brooklyn). Yet the existence of the Electoral College, according to Peter Beinart, demonstrates just how “prescient” the authors of the Constitution were: while Beinart says he “could never have imagined President Donald Trump,” he’s glad that the college is cleverly constructed so as to … well, so far as I can tell Beinart appears to be insinuating that the Electoral College somehow prevented Trump’s election—so, yeeaaaah. Anyway, for those of us still living in reality, suffice it to say that the kidney cancer example illustrates how dividing one big election into fifty smaller ones inherently makes it more probable that some of those subsidiary elections will be outliers. Not for any particular reason, mind you, but simply because that’s how math works—as anyone not named Bill Gates seems intelligent enough to understand once it’s explained.

In any case, the Electoral College thusly does not make it less likely that an outlier candidate like Donald Trump is elected—but instead more likely that such a candidate will be elected. What Beinart and other cheerleaders for the Electoral College fail to understand (either due to ignorance or some other motive) is that the Electoral College is not a “bulwark” or “firewall” against the Donald Trumps of the world. In reality—a place that, Trump has often implied, those in power seem not to inhabit any more—the Electoral College did not prevent Donald Trump from becoming the president of the United States, but was instead (just as everyone witnessed on Election Day) exactly the means by which the “short-fingered vulgarian” became the nation’s leader. Contrary to Beinart or Cesca, the Electoral College is not a “firewall” or some cybersecurity app—it is, instead, a roulette wheel, and a biased one at that.

Just as a sucker can expect that, so long as she stays at the roulette wheel, she will eventually go bust, so too can the United States expect, so long as the Electoral College exists, to get presidents like Donald Trump: “accidental” presidencies, after all, have been an occasional feature of presidential elections since at least 1824, when John Quincy Adams was elected despite the fact that Andrew Jackson had won the popular vote. If not even the watchdogs of the American leadership class—much less that class itself—can see the mathematical point of the argument against the Electoral College, that in and of itself is pretty good reason to think that, while the specifics of Donald Trump’s criticisms of the Establishment during the campaign might have been ridiculous, he wasn’t wrong to criticize it. Donald Trump, then, may not be the president-elect America needs—but he might just be the president people like Peter Beinart and Bob Cesca deserve.

 

Lex Majoris

The first principle of republicanism is that the lex majoris partis is the fundamental law of every society of individuals of equal rights; to consider the will of the society enounced by the majority of a single vote, as sacred as if unanimous, is the first of all lessons in importance, yet the last which is thoroughly learnt. This law once disregarded, there is no other but that of force, which ends necessarily in military despotism.
—Thomas Jefferson. Letter to Baron von Humboldt. 13 June 1817.

Since Hillary Clinton lost the 2016 American presidential election, many of her supporters have been quick to cry “racism” on the part of voters for her opponent, Donald Trump. According to Vox’s Jenée Desmond-Harris, for instance, Trump won the election “not despite but because he expressed unfiltered disdain toward racial and religious minorities in the country.” Aside from being the easier interpretation, because it allows Clinton voters to ignore the role their own economic choices may have played in the broad support Trump received throughout the country, such accusations are counterproductive even on their own terms because—only seemingly paradoxically—they reinforce many of the supports racism still receives in the United States: above all, because they weaken the intellectual argument for a national direct election for the presidency. By shouting “racism,” in other words, Hillary Clinton’s supporters may end up helping to continue racism’s institutional support.

That institutional support begins with the method by which Americans elect their president: the Electoral College—a method that, as many have noted, is not used in any other industrialized democracy. Although many scholars and others have advanced arguments for the existence of the college through the centuries, most of these “explanations” are, in fact, intellectually incoherent: while the most common of the traditional “explanations” concerns the differences between the “large states” and the “small,” for instance, in the actual United States—as James Madison, known as the “Father of the Constitution,” noted at the time—there was not then, and has never been since, a situation in American history that involved a conflict between larger-population and smaller-population states. Meanwhile, the other “explanations” for the Electoral College do not even rise to this level of incoherence.

In reality there is only one explanation for the existence of the college, and that explanation has been most forcefully and clearly made by law professor Paul Finkelman, now serving as a Senior Fellow at the University of Pennsylvania after spending much of his career at obscure law schools like the University of Tulsa College of Law, the Cleveland-Marshall College of Law, and the Albany Law School. As Finkelman has been arguing for decades (his first papers on the subject were written in the 1980s), the Electoral College was originally invented by the delegates to the Constitutional Convention of 1787 in order to protect slavery. That such was the purpose of the College can be known, most obviously, because the delegates to the convention said so.

It’s important to remember that, by the time the means of electing a president were first debated, the convention had already decided, for the purposes of representation in the newly-created House of Representatives, to count black slaves by the means of the infamous three-fifths ratio. That ratio, in turn, had its effect when discussing the means of electing a president: delegates like James Madison argued, as Finkelman notes, that the existence of such a college—whose composition would be based on each state’s representation in the House of Representatives—would “guarantee that the nonvoting slaves could nevertheless influence the presidential election.” Or as Hugh Williamson, a delegate from North Carolina, observed during the convention, if American presidents were elected by direct national vote the South would be shut out of electing a national executive because “her slaves will have no suffrage”—that is, because in a direct vote all that would matter is the number of voters, the Southern states would lose the advantage the three-fifths ratio gave them in the House. Hence, the existence of the Electoral College is directly tied to the prior decision to grant Southern slave states an advantage in Congress, and so the Electoral College is another in a string of institutional decisions made by convention delegates to protect domestic slavery.

Yet, assuming that Finkelman’s case for the racism of the Electoral College is true, how can decrying the racism of the American voter somehow inflict harm on the case for abolishing the Electoral College? The answer goes back to the very justifications of, not only presidential elections, but elections in general—the gradual discovery, during the eighteenth century Enlightenment, of what is today known as the Law of Large Numbers.

Putting the law in capital letters, I admit, tends to mystify it, but anyone who buys insurance already understands the substance of the concept. As New Yorker writer Malcolm Gladwell once explained insurance, “the safest and most efficient way to provide insurance” is “to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, the more people participating in an insurance plan, the more predictable the plan’s average costs become—and so the more reliably its members can be protected. The Law of Large Numbers explains why that is.

That reason is the same as the reason that, as Peter Bernstein remarks in Against the Gods: The Remarkable Story of Risk, tossing a coin more and more times “will correspondingly increase the probability that the ratio of heads thrown to total throws” strays from one half by less than any stated amount. Or, the reason that—as physicist Leonard Mlodinow has pointed out—in order really to tell which baseball team is better than another a World Series would have to be at least 23 games long (if one team were much better than the other), and possibly as long as 269 games (between two closely-matched opponents). Only by playing so many games can random chance be confidently excluded: as Carl Bialik of FiveThirtyEight once pointed out, usually “in sports, the longer the contest, the greater the chance that the favorite prevails.” Or, as Israeli psychologists Daniel Kahneman and Amos Tversky put the point in 1971, “the law of large numbers guarantees that very large samples will indeed be representative”: it’s what scientists rely upon to know that, if they have performed enough experiments or pored over enough data, they know enough to exclude idiosyncratic results. The Law of Large Numbers asserts, in short, that the more times we repeat something, the closer we will approach its true value.
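
Bernoulli’s observation—and Bernstein’s—can be watched happening in a few lines of code: flip a fair coin more and more times and the observed share of heads drifts toward one half. A minimal simulation sketch:

```python
import random

random.seed(0)

flips = heads = 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    while flips < checkpoint:
        heads += random.random() < 0.5   # one fair coin flip
        flips += 1
    share = heads / flips
    print(f"{flips:>9,} flips: share of heads = {share:.4f} (distance from 0.5: {abs(share - 0.5):.4f})")
# As the number of flips grows, the share of heads settles toward 0.5 -- the Law of Large Numbers at work.
```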

It’s for just that reason that many have noted the connection between science and democratic government: “Science and democracy are powerful partners,” as the website for the Union of Concerned Scientists has put it. What makes these two objects such “powerful” partners is that the Law of Large Numbers is what underlies the act of holding elections: as James Surowiecki put the point in his book, The Wisdom of Crowds, the theory of democracy is that “the larger the group, the more reliable its judgment will be.” Just as scientists think that, by replicating an experiment, they can more readily trust in its results, so too does a democratic government implicitly think that, by including more people in the decision-making process, the government can the more readily arrive at the “correct” solution: as James Madison put it in The Federalist No. 10, if you “take in a greater variety of parties and interests,” then “you make it less probable that a majority of the whole will have a common motive for invading the rights of other citizens.” Without such a belief, after all, there would be no reason not to trust, say, a ruling caste to make decisions for society—or even a single, perhaps orange-toned, individual. Without some concept of the Law of Large Numbers—some belief that increasing the numbers of trials, or increasing the number of inputs, will make for better results—there is no reason for democratic government at all.

That’s why, when people criticize the Electoral College, they are implicitly invoking the Law of Large Numbers. The Electoral College divides the pool of American voters into fifty smaller pools, but a national popular vote would collect all Americans into a single lump—a point that some defenders of the College sometimes seek to make into a virtue, instead of the vice it is. In the wake of the 2000 election, for example, Senator Mitch McConnell wrote that the “Electoral College served to center the post-election battles in Florida,” preventing the “vote recounts and court battles in nearly every state of the Union” that, McConnell assures us, would have occurred in the college’s absence. But as Timothy Noah pointed out in The New Republic in 2012, what McConnell’s argument “fails to realize is that when you’re assembling one big count rather than a lot of little ones it’s a lot less clear what’s to be gained from rigging any of the little ones.” If what matters is the popular vote, what happens in any one location doesn’t matter so much; hence, stealing votes in downstate Illinois won’t allow you to steal the entire state—just as, with enough samples or experiments run, the fact that the lab assistant was drowsy at the time she recorded one set of results won’t matter so much. It’s also why deliberately losing a single game in July hardly matters as much as tanking a game of the World Series.

Put in such a way, it’s hard to see how anyone without a vested stake in the construction of the present system could defend the Electoral College—yet, as I suspect we are about to see, the very people now ascribing Donald Trump’s victory to the racism of the American voter will soon be doing just that. The reason will be precisely the same reason that such advocates want to blame racism, rather than the ongoing thievery of economic elites, for the rejection of Clinton: because racism is a “cultural” phenomenon, and most left-wing critics of the United States now obtain credentials in “cultural,” rather than scientific, disciplines.

If, in other words, Donald Trump’s victory was due to a complex series of renegotiations of the global contract between capital and labor, then explaining it would require experts in economics and other, similar, disciplines; if his victory was due to racism, however—racism being considered a cultural phenomenon—then that will call forth experts in “cultural” fields. Because those with “liberal” or “leftist” political leanings now tend to gather in “cultural” fields, they will (indeed, must) now attempt to shift the battleground towards their areas of expertise. That shift, I would wager, will in turn lead those who argue for “cultural” explanations of the rise of Trump to argue against the elimination of the Electoral College.

The reason is not difficult to understand: it isn’t too much to say, in fact, that one way to define the study of the humanities is to say it comprises the disciplines that largely ignore, or even oppose, the Law of Large Numbers both as a practical matter and as a philosophic one. As literary scholar Franco Moretti, now of Stanford, observed in his Atlas of the European Novel, 1800-1900, just as “silver fork novels”—a genre published in England between the 1820s and the 1840s—do not “show ‘London,’ but only a small, monochrome portion of it,” so too does the average student of literature not really study her ostensible subject matter. “I work on west European narrative between 1790 and 1930, and already feel like a charlatan outside of Britain and France,” Moretti confesses in an essay entitled “Distant Reading”—and even then, he only works “on its canonical fraction, which is not even 1 percent of published literature.” As Joshua Rothman put the point in a New Yorker profile of Moretti a few years ago, Moretti instead insists that “if you really want to understand literature, you can’t just read a few books or poems over and over,” but instead “you have to work with hundreds or even thousands of texts at a time”—that is, he insists on the significance of the Law of Large Numbers in his field, an insistence whose very novelty demonstrates how literary study is a field that has historically resisted precisely that recognition.

In order to proceed, in other words, disciplines like literary study or art history—or even history itself—must argue for the representativeness of a given body of work: usually termed, at least in literary study, “the Canon.” Such disciplines are already, simply by their very nature, committed to the idea that it is not necessary to read all of what Moretti says is the “thirty thousand nineteenth-century British novels out there” in order to arrive at conclusions about the nineteenth-century British novel: in the first place, “no one really knows” how many there really are (there could easily be twice as many), and in the second “no one has read them [all], [and] no one ever will.” In order to get off the ground, such disciplines must necessarily deny the Law of Large Numbers: as Moretti says, “you invest so much in individual texts only if you think that very few of them really matter”—a belief with an obvious political corollary. Rejection of the Law of Large Numbers is thusly, as Moretti also observes, “an unconscious and invisible premiss” for most who study such fields—which is to say that although students of the humanities often make claims for the political utility of their work, they sometimes forget that the enabling presuppositions of their fields are inherently those of the pre-Enlightenment ancien régime.

Perhaps that’s why—as Joe Pinsker observed in a fascinating, but short, article for The Atlantic several years ago—studies of college students find that those “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” while students “whose parents make more money flock to history, English, and the performing arts”: the baseline assumptions of those disciplines are, no matter the particular predilections of a given instructor, essentially aristocratic, not democratic. To put it most baldly, the disciplines of the humanities must reject the premise of the Law of Large Numbers, which says that as more examples are added, the closer we approach to the truth—a point that can be directly witnessed when, for instance, English professor Michael Bérubé of Pennsylvania State University observes that the “humanists at [his] end of the [academic] hallway roundly dismissed” Harvard biologist E.O. Wilson’s book, Consilience: The Unity of Knowledge for arguing that “all human knowledge can and eventually will be unified under the rubric of the natural sciences.” Rejecting the Law of Large Numbers is foundational to the very operation of the humanities: without making that rejection, they cannot exist.

In recent decades, of course, presumably Franco Moretti has not been the only professor of the humanities to realize that their disciplines stood on a collision course with the Law of Large Numbers—it may perhaps explain why disciplines like literature and others have, for years, been actively recruiting among members of minority groups. The institutional motivations of such hiring, in other words, ought to be readily apparent: by making such hires, departments of the humanities could insulate themselves from charges from the political left—while at the same time continuing the practices that, without such cover, might have appeared increasingly anachronistic in a democratic age. Minority hiring, that is, may not be so politically “progressive” as its defenders sometimes argue: it may, in fact, have prevented the intellectual reforms within the humanities urged by people like Franco Moretti for a generation or more. Of course, by joining such departments, members of minority groups also may have, consciously or not, tied their own fortunes to a philosophic rejection of concepts like the Law of Large Numbers—as African-American sportswriter Michael Wilbon, of ESPN fame, wrote this past May, black people supposedly have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” I suspect then that many who claim to be on the political left will soon come out to defend the Electoral College. If that happens, then in one last cruel historical irony the final defenders of American slavery may end up being precisely those slavery meant to oppress.

Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.

 

There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view of American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that, many have thought, has long been due in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state elects a number of “electors” equal in number to that state’s representation in Congress; i.e., the number of congresspeople each state is entitled to by population, plus two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives introduced a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president without a majority of the votes. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (in what was perhaps the most exciting game in Series history—the only Game 7 that has ever ended with a ninth-inning, walk-off home run) and thusly won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million votes. But Harrison not only won more states than Cleveland, but also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues that Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that what matters is not the total number of popular votes, but how those votes are spread around the country.

In this argument, then, games are to states just as runs are to votes. It’s an analogy with an easy appeal to it: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire analogy), and so that understanding appears to transfer easily to the matter of presidential elections. Yet, clever as the analogy is, most people do not actually understand the purpose of the World Series: although people think the task of the Series is to identify the best baseball team in the major leagues, that is not what it is designed to do. The purpose of the World Series is not to discover the best team in baseball, but to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams had to play each other. And the Giants saw no reason to do so.

As legendary Giants manager, John McGraw, said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance to win a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” Just how much luck is involved, however, is something the average fan has not considered—though the former Caltech physicist Leonard Mlodinow has. In his book The Drunkard’s Walk: How Randomness Rules Our Lives, Mlodinow writes that, just by doing the math, it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: suppose that, for every game, we roll a hundred-sided die to determine whether the team with the 55 percent edge wins. If we do that four times, there is still a good chance that the inferior team remains alive in the series—that is, that the superior team has not won every game—and there is a real possibility that the inferior team has instead swept the superior one. Seven games, in short, is simply not enough to demonstrate conclusively that one team is better than another.
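The die-rolling picture is easy to put to the test. The sketch below is my own illustration rather than anything from Mlodinow’s book: it simply plays a great many best-of-seven series at the two edges he mentions and counts how often the weaker team comes out on top.

```python
import random

def inferior_wins_series(p_superior: float, games: int = 7) -> bool:
    """Play one best-of-`games` series; return True if the weaker team wins it."""
    wins_needed = games // 2 + 1
    superior = inferior = 0
    while superior < wins_needed and inferior < wins_needed:
        # The "hundred-sided die": the stronger team takes this game with
        # probability p_superior, the weaker team otherwise.
        if random.random() < p_superior:
            superior += 1
        else:
            inferior += 1
    return inferior == wins_needed

def upset_rate(p_superior: float, trials: int = 100_000) -> float:
    """Estimate how often the weaker team takes a best-of-seven series."""
    return sum(inferior_wins_series(p_superior) for _ in range(trials)) / trials

print(f"55-45 edge: the weaker team wins about {upset_rate(0.55):.0%} of series")
print(f"2/3 edge:   the weaker team wins about {upset_rate(2 / 3):.0%} of series")
```

Run it and the two estimates settle near 39 percent and 17 percent, which is just Mlodinow’s “about 4 times out of 10” and “about once every 5 matchups.”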

In fact, to eliminate randomness as much as possible—that is, to make it as likely as possible that the better team wins—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even when one team has a two-thirds chance of winning any given game, it takes a best-of-23 series to push the weaker team’s chance of taking the series below 5 percent—and even then, the weaker team can still win. Mathematically, then, winning a seven-game series is meaningless: there have been far too few games to rule out a lesser team beating a better one.

Just how mathematically meaningless a seven-game series is can be seen in the case of a team holding only a 55-45 edge over its opponent: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emphasis added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which, as a matter of the history of the World Series, is a point that true baseball professionals have always acknowledged: the World Series is not a competition but an exhibition.
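Those thresholds are not mysterious; they fall straight out of the binomial distribution. The sketch below is again only an illustration (the function names and the conventional 5 percent cutoff are my framing of the significance standard Mlodinow invokes): it searches for the shortest odd-length series in which the weaker team wins a majority of the games no more than 5 percent of the time.

```python
from math import comb

def upset_probability(p_superior: float, games: int) -> float:
    """Exact chance that the weaker team wins a majority of `games` games."""
    q = 1 - p_superior                      # the weaker team's per-game chance
    need = games // 2 + 1
    return sum(comb(games, k) * q**k * (1 - q)**(games - k)
               for k in range(need, games + 1))

def shortest_significant_series(p_superior: float, alpha: float = 0.05) -> int:
    """Shortest odd series in which an upset occurs no more than `alpha` of the time."""
    games = 1
    while upset_probability(p_superior, games) > alpha:
        games += 2
    return games

print(shortest_significant_series(2 / 3))   # Mlodinow reports 23 for the 2/3 case
print(shortest_significant_series(0.55))    # and 269 for the 55-45 case
```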

What the New York Giants were saying in 1904, then—and Mlodinow more recently—is that establishing the real worth of something requires a great many trials: many, many repetitions. That is something all of us ought to know from experience: to learn anything, for instance, requires a great deal of practice. (Even if the famous “10,000-hour rule” that New Yorker writer Malcolm Gladwell concocted for his book Outliers: The Story of Success has since been complicated by the researchers who did the original work Gladwell drew upon.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, in the plainer language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite reason he intended. If the Electoral College is like the World Series, and the World Series is designed not to find the best team in baseball but merely to stage an exhibition, then the Electoral College is not a serious attempt to find the best president—because what the Law would appear to advise is that the way to obtain a better result is to gather more voters.
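The law is easy to watch in action. The sketch below is an illustration of my own rather than the encyclopedia’s: the running average of a fair six-sided die wanders early on, then settles ever closer to its expected value of 3.5 as the trials pile up.

```python
import random

def running_averages(trials: int, seed: int = 0) -> list[float]:
    """Average of fair six-sided die rolls after 1, 2, ..., `trials` rolls."""
    rng = random.Random(seed)
    total = 0
    averages = []
    for n in range(1, trials + 1):
        total += rng.randint(1, 6)
        averages.append(total / n)
    return averages

averages = running_averages(100_000)
# The expected value of a fair die is 3.5; the running average drifts toward
# it as the number of trials grows, which is all the law asserts.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:>7,} rolls the average is {averages[n - 1]:.3f}")
```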

Yet the currently fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or just things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where a vote is cast (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) than whether it was cast? To many, if not most, literary intellectuals the answer plainly favors the former over the latter—but, as any statistician will tell you, a run of luck can continue for quite a bit longer than the average person might expect. (That is one reason it takes a best-of-23 series—and, for closely matched teams, far longer—to tamp down the randomness between two baseball teams.) Even so, it remains difficult to believe—as many today, both within and without the academy, apparently do—that the umpire can go on calling every pitch a strike.

 

Extra! Extra! White Man Wins Election!

 

Whenever you find yourself on the side of the majority,
it is time to pause and reflect.
—Mark Twain

One of the more entertaining articles I’ve read recently appeared in the New York Times Magazine last October; written by Ruth Padawer and entitled “When Women Become Men At Wellesley,” it is about how the newest “challenge,” as the terminology goes, facing American women’s colleges these days is the rise of students “born female who identified as men, some of whom had begun taking testosterone to change their bodies.” The piece opens with the story of “Timothy” Boatwright, a student born female who had come to identify as a man, and who decided to run for the post of “multicultural affairs coordinator” at the school, a job with the responsibility of “promoting a ‘culture of diversity’ among students and staff and faculty members.” Three “women of color” then dropped out of the race for various unrelated reasons, leaving Boatwright the only candidate still standing—which meant that Wellesley, a women’s college, remember, would have as its next “diversity” official a white man. Yet according to Padawer this result wasn’t necessarily as ridiculous as it might seem: “After all,” the Times reporter said, “at Wellesley, masculine-of-center students are cultural minorities.” In the race to produce more and “better” minorities, then, Wellesley has produced a win for the ages—a result that, one might think, would cause reasonable people to stop and ask: just what is it about American society that causes Americans constantly to redescribe themselves as one kind of “minority” or another? Although the easy answer is “because Americans are crazy,” the real answer may be that Americans are responding rationally to the incentives created by their political system: a system originally designed, as many historians have begun to realize, to protect a certain minority at the expense of the majority.

That, after all, is a constitutional truism, often repeated like a mantra by college students and other species of cretin: the United States Constitution, goes the zombie-like repetition, was designed to protect against the “tyranny of the majority”—even though that exact phrase was first used by John Adams in 1788, a year after the Constitutional Convention. It is true that Number 10 of the Federalist Papers mentions “the superior force of an interested and overbearing majority”—yet what those who invoke the supposed threat of the majority never seem to mention is that, while the United States Constitution is indeed constructed with a nearly bewildering variety of protections for the “minority,” the minority being protected at the moment of the Constitution’s writing was not some vague and theoretical interest: the authors of the Constitution were not professors of political philosophy sitting around a seminar room. The United States Constitution was, as political scientist Michael Parenti has put it, “a practical response to immediate material conditions”—in other words, the product of political horse-trading that produced a document protecting a very particular, and very real, minority: one with names and families and, more significantly, a certain sort of property.

That property, as historians today increasingly recognize, was slavery. It isn’t for nothing that, as historian William Lee Miller has observed, “for fifty of [the nation’s] first sixty four [years], the nation’s president was a slaveholder,” that the “powerful office of the Speaker of the House was held by a slaveholder for twenty-eight of the nation’s first thirty-five years,” and that the president pro tem of the Senate—one of the more obscure, yet still powerful, federal offices—“was virtually always a slaveholder.” Both Chief Justices of the Supreme Court through the first five decades of the nineteenth century, John Marshall and Roger Taney, were slaveholders, as were a great many federal judges and other, lesser, federal officeholders. As historian Garry Wills, author of Lincoln At Gettysburg among other volumes, has written, “the management of the government was disproportionately controlled by the South.” Why all of this was so was, as it happens, very ably explained at the time by none other than … Abraham Lincoln.

What Lincoln knew was that there was a kind of “thumb on the scale” when Northerners like the two Adamses, John and John Quincy, were weighed in national elections—a not-so-mysterious force that denied those Northern, anti-slavery men second terms as president. Lincoln explained what that force was in the speech he gave at Peoria, Illinois, in 1854, the speech that signaled his return to politics. There, Lincoln observed that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

What Lincoln is talking about here is the notorious “Three-Fifths Compromise”: Article I, Section 2, Paragraph 3 of the United States Constitution. Under that proviso, slave states were entitled to representation in Congress according to the ratio of “three fifths of all other persons”—those “other persons” being, of course, Southern slaves. And what the future president—the first president, it might be added, to be elected without the assistance of that ratio (a fact that would have, as I shall show, its own consequences)—was driving at was the effect this ratio was having on the political landscape of the country.

As Lincoln remarked in the same Peoria speech, the Three-Fifths Compromise meant that “five slaves are counted as being equal to three whites,” so that, as a practical matter, “it is an absolute truth, without an exception, that there is no voter in any slave State, but who has more legal power in the government, than any voter in any free State.” To put it more plainly, Lincoln said that the three-fifths clause “in the aggregate, gives the slave States, in the present Congress, twenty additional representatives.” Since the Constitution conferred the same advantage in the Electoral College as it did in the Congress, the reason for results like the Adamses’ lack of presidential staying power is not hard to discern.
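Lincoln’s arithmetic, it is worth noting, checks out. Here is a minimal sketch using only the two population figures he cites, together with the fact he states that both states held the same six House seats and eight electoral votes:

```python
# A quick check of Lincoln's Peoria arithmetic, using the figures he cites.
maine_whites = 581_813
south_carolina_whites = 274_567

# With equal numbers of representatives and electors, the weight of a single
# white voter is inversely proportional to the white population.
relative_weight = maine_whites / south_carolina_whites
surplus = maine_whites - 2 * south_carolina_whites

print(f"A South Carolina voter counted {relative_weight:.2f} times a Maine voter")
print(f"Maine has twice South Carolina's white population, and {surplus:,} over")
```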

“One of those who particularly resented the role of the three-fifths clause in warping electoral college votes,” notes Miller, “was John Adams, who would probably have been reelected president over Thomas Jefferson in 1800 if the three-fifths ratio had not augmented the votes of the Southern states.” John Quincy Adams himself fought two national elections, in 1824 and 1828, that were skewed by what was termed at the time the “federal ratio”—which is to say that the reason both Adamses were one-term presidents likely had rather more to do with the form of the American government than with the content of their character, whatever the representations of later historians.

John Quincy Adams was himself quite aware of the effect of the “federal ratio.” The Hartford Convention of 1815, a gathering of New England Federalists, had recommended ending the Southern states’ advantage within the Congress, and in 1843 John Quincy’s son Charles Francis Adams persuaded the Massachusetts legislature to pass a measure that John Quincy would himself introduce in the U.S. Congress: “a resolution proposing that the Constitution be amended to eliminate the three-fifths ratio,” as Miller has noted. Three more such attempts followed in 1844, three years before Lincoln’s arrival, all of them soundly defeated, as Miller observes, by totals “skewed by the feature the proposed amendment would abolish.” The three-fifths ratio was not simply a bête noire of the Adamses personally; all of New England was aware that it protected the interests of the South in the national government—one reason why, prior to the Civil War, “states’ rights” was often thought of as a Northern issue rather than a Southern one.

That the South itself recognized the advantages the United States Constitution gave it—specifically through that document’s protections of “minority,” in other words slaveowner, interests—can be seen in the reasons the South gave for starting the Civil War. South Carolina’s declaration of secession in late 1860, for example (the first such declaration), said outright that the state’s act was provoked by the election of Abraham Lincoln—in other words, by the election of a presidential candidate who did not need the electoral votes of the South.

Hence, South Carolina’s declaration complained that a “geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery.” The election had been enabled, the document went on to say, “by elevating to citizenship, persons who, by the supreme law of the land, are incapable of becoming citizens, and their votes have been used to inaugurate a new policy, hostile to the South.” Presumably this is a veiled reference to the population the Northern states had gained over the course of the nineteenth century—a trend that had steadily eroded the advantage the South enjoyed when the Constitution was enacted, and that had only accelerated during the 1850s.

As one Northern newspaper observed in 1860, responding to the early returns of that year’s census, the “difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” To Southerners the data had a different meaning: as Adam Goodheart noted in a piece for Disunion, the New York Times’ series on the Civil War, “the editor of the New Orleans Picayune noted that states like Michigan, Wisconsin, Iowa and Illinois would each be gaining multiple seats in Congress” while Southern states like Virginia, South Carolina and Tennessee would be losing seats. To the Southern slaveowners who would drive the push toward secession during the winter of 1860, the fact that they were on the losing end of a demographic contest could not have been far from mind.

Historian Leonard L. Richards of the University of Massachusetts, for example, has noted that when Alexis de Tocqueville traveled the American South in the early 1830s, he found Southern leaders already “noticeably ‘irritated and alarmed’” by their declining influence in the House of Representatives. By the 1850s those population trends had only accelerated. Concerning the gains the Northern states were making through foreign immigration—presumably the subject of South Carolina’s complaint about persons “incapable of becoming citizens”—Richards cites Senator Stephen Adams of Mississippi, who “blamed the South’s plight”—that is, its declining population relative to the North—“on foreign immigration.” As Richards says, it was obvious to anyone paying attention that if “this trend continued, the North would in fifteen years have a two to one majority in the House and probably a similar majority in the Senate.” It is difficult to believe that the most intelligent Southern leaders were not cognizant of these elementary facts.

Their intellectual leaders, above all John Calhoun, had after all designed a political theory to justify Southern—which is to say “minority”—dominance of the federal government. In A Disquisition on Government, the South Carolina senator argued that a government “under the control of the numerical majority” would tend toward “oppression and abuse of power”; it was to correct this tendency, he writes, that the constitution of the United States made its different branches “the organs of the distinct interests or portions of the community; and to clothe each with a negative on the others.” It is, in other words, a fair description of the constitutional doctrine known as the “separation of powers,” a doctrine that Calhoun barely dresses up as anything other than what it is: a brief for the protection of the right to own slaves. Every time anyone utters the phrase “protecting minority rights,” in other words, they are, wittingly or not, invoking the ideas of John Calhoun.

In any case, such a history could explain why Americans are so eager to describe themselves as a “minority” of whatever kind. The American government was, after all, originally designed to protect a particular minority, and so in political terms it makes sense to describe oneself as one in order to enjoy the protections that, built into the system at the start, have become endemic to American government: for example, the practice of racial gerrymandering, which has the perhaps-beneficial effect of protecting a particular minority—at the probable expense of the interests of the majority. Such a theory might also explain something else: how it is, as professor Walter Benn Michaels of the University of Illinois at Chicago has remarked, that after “half a century of anti-racism and feminism, the U.S. today is a less equal society than was the racist, sexist society of Jim Crow.” Or how the election of—to use that favorite tool of American academics, quotation marks to signal irony—a “white man” at a women’s college can, somehow, be a “victory” for whatever the American “left” is now. The real irony, of course, is that, in seeking to protect African-Americans and other minorities, that supposed left is merely reinforcing a system originally designed to protect slavery.