The “Hero” We Deserve

“He’s the hero Gotham deserves, but not the one it needs …”
—The Dark Knight (2008)

 

The election of Donald Trump, Peter Beinart argued the other day in The Atlantic, was precisely “the kind of democratic catastrophe that the Constitution, and the Electoral College in particular, were in part designed to prevent.” It’s a fairly common sentiment, it seems, in some parts of the liberal press: Bob Cesca, of Salon, argued back in October that “the shrieking, wild-eyed, uncorked flailing that’s taking place among supporters of Donald Trump, both online and off” made an “abundantly self-evident” case for “the establishment of the Electoral College as a bulwark against destabilizing figures with the charisma to easily manipulate [sic] low-information voters.” Writers who make such arguments often seem to regard their opponents as dewy-eyed idealists, their eyes clouded by Frank Capra movies: Cesca, for example, calls the case for direct popular voting an argument for “popular whimsy.” In reality, however, it is the supposedly liberal argument in favor of the Electoral College that is based on a misperception: what people like Beinart or Cesca don’t see is that the Electoral College is not a “bulwark” against the election of candidates like Donald Trump—but in fact a machine for producing them. They don’t see it because they do not understand that the Electoral College is built on a flawed understanding of probability—a point that, perhaps horrifically, suggests that the idea that powered Trump’s campaign, the thought that the American leadership class is dangerously out of touch with reality, is more or less right.

To see just how widespread that ignorance is, ask yourself this question (as Howard Wainer, Distinguished Research Scientist of the National Board of Medical Examiners, asked several years ago in the pages of American Scientist): which counties of the United States have the highest rates of kidney cancer? As it happens, Wainer noted, they “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, say, in view of the fact that rural areas tend to be freer of the pollution that afflicts the largest cities. But, Wainer continued, consider also that the American counties with the lowest rates of kidney cancer … “tend to be very rural, Midwestern, Southern, or Western”—a finding that might make sense, Wainer remarks, due to “the poverty of the rural lifestyle.” After all, people in rural counties very often don’t receive the best medical care, tend to eat worse, and tend to drink too much and use too much tobacco. But wait—one of these stories has to be wrong; they can’t both be right. Yet as Wainer goes on to write, they both are true: rural American counties have both the highest and the lowest incidences of kidney cancer. But how?

To solve the seeming mystery, consider a hypothetical example taken from the Nobel Prize-winner Daniel Kahneman’s magisterial book, Thinking, Fast and Slow. “Imagine,” Kahneman says, “a large urn filled with marbles.” Some of these marbles are white, and some are red. Now imagine “two very patient marble counters” taking turns drawing from the urn: “Jack draws 4 marbles on each trial, Jill draws 7.” Every time one of them draws an unusual sample—that is, a sample of marbles that is either all-red or all-white—each records it. The question Kahneman then implicitly asks is: which marble counter will draw more all-white (or all-red) samples?

The answer is Jack—“by a factor of 8,” Kahneman notes: Jack is likely to draw a sample of only one color more than twelve percent of the time, while Jill is likely to draw such a sample less than two percent of the time. But it isn’t necessary to know any high-level mathematics to see why: because Jack draws fewer marbles at a time, he is more likely than Jill to draw all of one color or the other. By drawing fewer marbles, Jack is more exposed to extreme results—just as it is more likely that, as Wainer has observed, a “county with, say, 100 inhabitants that has no cancer deaths would be in the lowest category,” while conversely if that same county “has one cancer death it would be among the highest.” Because there are fewer people in rural American counties than urban ones, a rural county will have a more extreme rate of kidney cancer, either high or low, than an urban one—for the very same reason that Jack is more likely to draw a set of all-white or all-red marbles. The sample size is smaller—and the smaller the sample size, the more likely it is that the sample will be an outlier.
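Kahneman’s numbers can be checked directly. With an urn that is half red and half white (the split implied by his figures), the chance that a sample of n marbles comes out all one color is (1/2)^n for each color, or 2 × (1/2)^n in total. A minimal sketch in Python (the function name is mine):

```python
def p_all_one_color(n, p_red=0.5):
    """Probability that n independent draws from an urn with a p_red
    fraction of red marbles come out all-red or all-white."""
    return p_red ** n + (1 - p_red) ** n

jack = p_all_one_color(4)   # Jack's 4-marble samples: 0.125, over twelve percent
jill = p_all_one_color(7)   # Jill's 7-marble samples: 0.015625, under two percent
print(jack / jill)          # 8.0 -- Kahneman's "factor of 8"
```

The factor of 8 falls straight out of the arithmetic: shrinking the sample from seven marbles to four multiplies the chance of an all-one-color draw by 2³.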

So far, of course, I might be said to be merely repeating something everyone already knows—maybe you anticipated the point about Jack and Jill and the rural counties, or maybe you just don’t see how any of this has any bearing beyond the lesson that scientists ought to be careful when they design their experiments. Perhaps, like many Americans these days, you think that science is one thing and politics another—maybe because Americans have been taught for several generations now, by people as diverse as the conservative philosopher Leo Strauss and the liberal biologist Stephen Jay Gould, that the humanities are one thing and the sciences are another. (Which Geoffrey Harpham, formerly the director of the National Humanities Center, might not find surprising: Harpham has claimed that “the modern concept of the humanities”—that is, as something distinct from the sciences—“is truly native only to the United States.”) But consider another of Wainer’s examples: one drawn, as it happens, from the world of education.

“In the late 1990s,” Wainer writes, “the Bill and Melinda Gates Foundation began supporting small schools on a broad-ranging, intensive, national basis.” Other foundations supporting the movement for smaller schools included, Wainer reported, the Annenberg Foundation, the Carnegie Corporation, George Soros’s Open Society Institute, and the Pew Charitable Trusts, as well as the U.S. Department of Education’s Smaller Learning Communities Program. These programs brought pressure—to the tune of 1.7 billion dollars—on many American school systems to break up their larger schools (a pressure that, incidentally, succeeded in cities like Los Angeles, New York, Chicago, and Seattle, among others). The reason the Gates Foundation and its helpers cited for pressuring America’s educators was that, as Wainer writes, surveys showed that “among high-performing schools, there is an unrepresentatively large proportion of smaller schools.” That is, when researchers looked at American schools, they found that the highest-achieving schools included a disproportionate number of small ones.

By now, you see where this is going. What all of these educational specialists didn’t consider—but Wainer’s subsequent research found, at least in Pennsylvania—was that small schools were also disproportionately represented among the lowest-achieving schools. The Gates Foundation (led, mind you, by Bill Gates) had simply failed to consider that small schools are bound to be overrepresented among the best schools, because schools with smaller numbers of students are more likely to be extreme cases. (Something that, by the way, also may have consequences for that perennial goal of professional educators: the smaller class size.) Small schools tend to be represented at the extremes not for any particular reason, but just because that’s how the math works.
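The small-schools effect is easy to reproduce. In the sketch below, which uses invented numbers purely for illustration, every student’s score is drawn from the same distribution, so schools differ only in enrollment—and the small schools nonetheless crowd both ends of the ranking:

```python
import random

random.seed(42)  # fixed seed: purely a reproducible illustration

def school_average(enrollment, mean=500, sd=100):
    """Average score of a school whose students all score i.i.d.
    Normal(mean, sd) -- school size is the ONLY difference."""
    return sum(random.gauss(mean, sd) for _ in range(enrollment)) / enrollment

small_schools = [school_average(50) for _ in range(300)]
large_schools = [school_average(2000) for _ in range(300)]

# Small schools' averages are far more spread out, so they dominate
# both the top and the bottom of any ranking of school performance.
print(max(small_schools) - min(small_schools))  # wide spread
print(max(large_schools) - min(large_schools))  # narrow spread
```

Rank these 600 imaginary schools by average score and the 50-student schools fill out the extremes at both ends, even though no school is “really” better than any other.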

The inherent humor of a group of educators (and Bill Gates) failing at basic mathematics is, admittedly, self-evident—and incidentally good reason not to take the testimony of “experts” at face value. But more significantly, it also demonstrates the very real problem here: if highly educated people (along with college dropout Gates) cannot see the flaws in their own reasoning while discussing precisely the question of education, how much more vulnerable is everyone else to flaws in their thinking? To people like Bob Cesca or Peter Beinart (or David Frum; cf. “Noble Lie”), of course, the answer to this problem is to install more professionals, more experts, to protect us from our own ignorance: to erect, as Cesca urges, a “firewall[…] against ignorant populism.” (A wording that, one imagines, reflects Cesca’s mighty struggle to avoid the word “peasants.”) The difficulty with such reasoning, however, is that it ignores the fact that the Electoral College embodies the same sort of ignorance that bedeviled the Gates Foundation—the same ignorance you may have caught in yourself when you considered the kidney cancer example above.

Just as rural American counties are more likely to have either very many or very few cases of kidney cancer, so too are sparsely populated states more likely to vote in an extreme fashion, inconsistent with the rest of the country. For one thing, it’s a lot cheaper to convince the voters of Wyoming (the half a million or so of whom possess not only a congressman but also two senators) than the voters of, say, Staten Island (who, despite being only slightly fewer in number than the inhabitants of Wyoming, have to share a single congressman with part of Brooklyn). Yet the existence of the Electoral College, according to Peter Beinart, demonstrates just how “prescient” the authors of the Constitution were: while Beinart says he “could never have imagined President Donald Trump,” he’s glad that the college is cleverly constructed so as to … well, so far as I can tell Beinart appears to be insinuating that the Electoral College somehow prevented Trump’s election—so, yeeaaaah. Anyway, for those of us still living in reality, suffice it to say that the kidney cancer example illustrates how dividing one big election into fifty smaller ones inherently makes it more probable that some of those subsidiary elections will be outliers. Not for any particular reason, mind you, but simply because that’s how the math works—as anyone not named Bill Gates seems intelligent enough to understand once it’s explained.
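The underlying statistics are the same as in the urn example: the observed vote share among n voters has a standard deviation proportional to 1/√n, so smaller electorates stray farther from the national average. A minimal sketch in Python (the voter counts are illustrative round numbers, not census figures):

```python
import math

def sd_of_vote_share(n_voters, p=0.5):
    """Standard deviation of the observed vote share when each of
    n_voters independently backs a candidate with probability p."""
    return math.sqrt(p * (1 - p) / n_voters)

# Shrinking the electorate by a factor of 10,000 multiplies the
# typical swing by a factor of 100.
print(sd_of_vote_share(100))        # about 0.05: five-point swings are routine
print(sd_of_vote_share(1_000_000))  # about 0.0005: hugs the national average
```

Carving one national electorate into fifty state electorates is, statistically, trading Jill’s seven-marble samples for Jack’s four-marble ones: more samples, each smaller, hence more extremes.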

In any case, the Electoral College thus does not make it less likely that an outlier candidate like Donald Trump will be elected—it makes it more likely. What Beinart and other cheerleaders for the Electoral College fail to understand (whether from ignorance or some other motive) is that the Electoral College is not a “bulwark” or “firewall” against the Donald Trumps of the world. In reality—a place that, Trump has often implied, those in power no longer seem to inhabit—the Electoral College did not prevent Donald Trump from becoming president of the United States; it was instead (just as everyone witnessed on Election Day) exactly the means by which the “short-fingered vulgarian” became the nation’s leader. Contrary to Beinart or Cesca, the Electoral College is not a “firewall” or some cybersecurity app—it is, instead, a roulette wheel, and a biased one at that.

Just as a sucker can expect that, so long as she stays at the roulette wheel, she will eventually go bust, so too can the United States expect, so long as the Electoral College exists, to get presidents like Donald Trump: “accidental” presidencies, after all, have been an occasional feature of presidential elections since at least 1824, when John Quincy Adams was elected despite the fact that Andrew Jackson had won the popular vote. If not even the watchdogs of the American leadership class—much less that class itself—can see the mathematical point of the argument against the Electoral College, that in and of itself is pretty good reason to think that, while the specifics of Donald Trump’s criticisms of the Establishment during the campaign might have been ridiculous, he wasn’t wrong to criticize it. Donald Trump, then, may not be the president-elect America needs—but he might just be the president people like Peter Beinart and Bob Cesca deserve.

 


I Think I’m Gonna Be Sad

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

I know no safe depository of the ultimate powers of the society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take control from them, but to inform their discretion.
—Thomas Jefferson. “Letter to William Charles Jarvis.” 28 September, 1820

 

 

When the Beatles first came to America, in February of 1964—Michael Tomasky noted recently for The Daily Beast—they rode from their gig at Ed Sullivan’s show in New York City to their first American concert in Washington, D.C. by train, arriving two hours and fifteen minutes after leaving Manhattan. It’s a seemingly trivial detail—until it’s pointed out, as Tomasky realized, that anyone trying that trip today would be lucky to do it in three hours. American infrastructure, in short, is not what it was: as the American Society of Civil Engineers wrote in its 2009 Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.” But what to do about it? “What’s needed,” wrote John Cassidy of The New Yorker recently, “is some way to protect essential infrastructure investments from the vicissitudes of congressional politics and the cyclical ups and downs of the economy.” He suggests, instead, “an independent, nonpartisan board” that could “carry out cost-benefit analyses of future capital-spending proposals.” This board, presumably, would be composed of professionals above the partisan fray, and thus capable of seeing to the long-term needs of the country. It all sounds really jake, and just the thing that the United States ought to do—except for the disappointing fact that the United States already has just such a board, and the existence of that “board” is the very reason why Americans don’t invest in infrastructure.

First though—has national spending on infrastructure declined, and is “politics” the reason for that decline? Many think so: “Despite the pressing infrastructure investment needs of the United States,” businessman Scott Thomasson wrote for the Council on Foreign Relations recently, “federal infrastructure policy is paralyzed by partisan wrangling over massive infrastructure bills that fail to move through Congress.” Those who take that line do have evidence, at least for the first proposition.

Take for instance the Highway Trust Fund, an account that provides federal money for investments in roads and bridges. In 2014, the Fund was in danger of “drying up,” as Rebecca Kaplan reported for CBS News at the time, mostly because the federal gas tax of 18.4 cents per gallon hasn’t been increased since 1993. Gradually, then, both the federal government and the states have, in relative terms, decreased spending on highways and other projects of that sort—so much so that people like Lawrence Summers, former presidential economic advisor and president of Harvard University, say (as Summers did last year) that “the share of public investment [in infrastructure], adjusting for depreciation … is zero.” (That is, new public investment barely offsets the wearing out of what’s already built—net of depreciation, the country is adding nothing.) So, while the testimony of the American Society of Civil Engineers might, to say the least, be biased—asking an engineer whether there ought to be more spending on engineering is like asking an ice cream man whether you need a sundae—there’s a good deal of evidence that the United States could stand more investment in the structures that support American life.

Yet, even if that’s so, is the relative decline in spending really the result of politics—rather than, say, a recognition that the United States simply doesn’t need the same sort of spending on highways and railroads that it once did? Maybe—because “the Internet,” or something—there simply isn’t the need for so much physical building any more. Still, aside from such spectacular examples as the Minneapolis Interstate 35 bridge collapse in 2007 or the failure of the levees in New Orleans during Hurricane Katrina in 2005, there’s evidence that the United States would be spending more money on infrastructure under a different political architecture.

Consider, for example, how the U.S. Senate “shot down … a measure to spend $50 billion on highway, rail, transit and airport improvements” in November of 2011, as The Washington Post’s Rosalind S. Helderman reported at the time. Although the measure was supported by 51 votes in favor to 49 votes against, the measure failed to pass—because, as Helderman wrote, according to the rules of the Senate “the measure needed 60 votes to proceed to a full debate.” Passing bills in the Senate these days requires, it seems, more than majority support—which, near as I can make out, is just what is meant by “congressional gridlock.” What “gridlock” means is the inability of a majority to pass its programs—absent that inability, nearly certainly the United States would be spending more money on infrastructure. At this point, then, the question can be asked: why should the American government be built in a fashion that allows a minority to hold the majority for ransom?

The answer, it seems, might be deflating for John Cassidy’s idea: when the American Constitution was written, it inscribed into its very foundation what has been called (by The Economist, among many, many others) the “dream of bipartisanship”—the notion that, somewhere, there exists a group of very wise men (and perhaps women?) who can, if they were merely handed the power, make all the world right again, and make whole that which is broken. In America, the name of that body is the United States Senate.

As every schoolchild knows, the Senate was originally designed as a body of “notables,” or “wise men”: as the Senate’s own website puts it, the Senate was originally designed to be an “independent body of responsible citizens.” Or, as James Madison wrote to another “Founding Father,” Edmund Randolph, justifying the institution, the Senate’s role was “first to protect the people against their rulers [and] secondly to protect the people against transient impressions into which they themselves might be led.” That last justification may be the source of the famous anecdote regarding the Senate, which involves George Washington saying to Thomas Jefferson that “we pour our legislation into the senatorial saucer to cool it.” While the anecdote itself only appeared nearly a century later, in 1872, still it captures something of what the point of the Senate has always been held to be: a body that would rise above petty politicking and concern itself with the national interest—just the thing that John Cassidy recommends for our current predicament.

This “dream of bipartisanship,” as it happens, is not just one held by the founding generation. It’s a dream that, journalist and gadfly Thomas Frank has said, “is a very typical way of thinking for the professional class” of today. As Frank amplified his remarks, “Washington is a city of professionals with advanced degrees,” and the thought of those professionals is “‘[w]e know what the problems are and we know what the answers are, and politics just get in the way.’” To members of this class, Frank says, “politics is this ugly thing that you don’t really need.” For such people, in other words, John Cassidy’s proposal concerning an “independent, nonpartisan board” that could make decisions regarding infrastructure in the interests of the nation as a whole, rather than from the perspective of this or that group, might seem entirely “natural”—as the only way out of the impasse created by “political gridlock.” Yet in reality—as numerous historians have documented—it’s in fact precisely the “dream of bipartisanship” that created the gridlock in the first place.

An examination of history, in other words, demonstrates that—far from being the disinterested, neutral body that would look deep into the future to examine the nation’s infrastructure needs—the Senate has actually functioned to discourage infrastructure spending. After John Quincy Adams was elected president in the contested election of 1824, for example, the new leader proposed a sweeping program of investment: not only roads, canals, and bridges, but also a national university, subsidies for scientific research and learning, a national observatory, Western exploration, a naval academy, and a patent law to encourage invention. Yet, as Paul C. Nagel observes in his recent biography of the Massachusetts president, virtually none of Adams’ program was enacted: “All of Adams’ scientific and educational proposals were defeated, as were his efforts to enlarge the road and canal systems.” Which is true, so far as that goes. But Nagel’s somewhat bland remarks do not do justice to the matter of how Adams’ proposals were defeated.

The election of 1824 also elected the 19th Congress, in which Adams’ party had a majority in the House of Representatives—one reason why Adams became president at all, because the chaotic election of 1824, split among several major candidates, was decided (as per the Constitution) by the House of Representatives. But while Adams’ faction had a majority in the House, it did not in the Senate, where Andrew Jackson’s pro-Southern faction held sway. Throughout the 19th Congress, the Jacksonian party controlled the votes of 25 senators (in a Senate of 48 senators, two to a state), while Adams’ faction controlled, at the beginning of the Congress, 20. Given the structure of the U.S. Constitution, which requires the two houses of Congress to agree before a bill can become law, this meant that the Senate could—as it did—effectively veto any of the Adams party’s proposals: control of the Senate effectively meant control of the government itself. In short, a recipe for gridlock.

The point of the history lesson regarding the 19th Congress is that, far from being “above” politics as it was advertised to be in the pages of The Federalist Papers and other, more recent, accounts of the U.S. Constitution, the U.S. Senate proved, in the event, hardly more neutral than the House of Representatives—or even the average city council. Instead of considering the matter of investment in the future on its own terms, historians have argued, senators thought about Adams’ proposals in terms of how they would affect a matter seemingly remote from the building of bridges or canals. Hence, although senators like John Tyler of Virginia, for example—who would later be elected president himself—opposed the Adams-proposed “bills that mandated federal spending for improving roads and bridges and other infrastructure” on the grounds that such bills “were federal intrusions on the states” (as Roger Matuz put it in his The Presidents’ Fact Book), many today argue that their motives were not so high-minded. In fact, they were as venal as motives come.

Many of Adams’ opponents, that is—as William Lee Miller of the University of Virginia wrote in his Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress—thought that the “‘National’ program that [Adams] proposed would have enlarged federal powers in a way that might one day threaten slavery.” And, as Miller also remarks, the “‘strict construction’ of the Constitution and states’ rights that [Adams’] opponents insisted upon”— were, “in addition to whatever other foundations in sentiment and philosophy they had, barriers of protection against interference with slavery.” In short—as historian Harold M. Hyman remarked in his magisterial A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution—while the “constitutional notion that tight limits existed on what government could do was a runaway favorite” at the time, in reality these seemingly-resounding defenses of limited government were actually motivated by a less-than savory interest: “statesmen of the Old South,” Hyman wrote, found that these doctrines of constitutional limits were “a mighty fortress behind which to shelter slavery.” Senators, in other words, did not consider whether spending money on a national university would be a worthwhile investment for its own sake; instead, they worried about the effect that such an expenditure would have on slavery.

Now, it could still reasonably be objected at this point—and doubtless will be—that the 19th Congress is, in political terms, about as relevant to today’s politics as the Triassic: the debates among a few dozen, usually elderly, white men nearly two centuries ago have been rendered impotent by the passage of time. “This time, it’s different,” such arguments could, and probably will, say. Yet, at a different point in American history, it was well understood that the creation of “blue-ribbon” bodies like the Senate was in fact simply a means for elite control.

As Alice Sturgis, of Stanford University, wrote in the third edition of her The Standard Code of Parliamentary Procedure (now in its fourth edition, after decades in print, and still the paragon of the field), while some “parliamentary writers have mistakenly assumed that the higher the vote required to take an action, the greater the protection of the members,” in reality “the opposite is true.” “If a two-thirds vote is required to pass a proposal and sixty-five members vote for the proposal and thirty-five members vote against it,” Sturgis went on to write, “the thirty-five members make the decision”—which then makes for “minority, not majority, rule.” In other words, even if many circumstances in American life have changed since 1825, it still remains the case that the American government is (still) largely structured in a fashion that solidifies the ability of a minority—like, say, oligarchical slaveowners—to control the American government. And while slavery was abolished by the Civil War, it still remains the case that a minority can block things like infrastructure spending.
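Sturgis’s point can be made arithmetically explicit: under a two-thirds rule, 65 yes votes out of 100 are not enough, so the 35 opponents decide the question, while a simple-majority rule would pass it. A small sketch (the function and its name are mine, not Sturgis’s):

```python
from fractions import Fraction

def passes(yes, no, threshold=Fraction(2, 3)):
    """True if the yes votes meet the required share of all votes cast."""
    return Fraction(yes, yes + no) >= threshold

print(passes(65, 35))                            # False: the 35 "no" votes decide
print(passes(65, 35, threshold=Fraction(1, 2)))  # True under simple majority
```

Exact fractions avoid the floating-point edge case at exactly two-thirds (e.g., a 66-to-33 vote).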

Hence, since infrastructure spending is—nearly by definition—for the improvement of every American, it’s difficult to see how making infrastructure spending less democratic, as Cassidy wishes, would make it easier to spend money on infrastructure. We already have a system that’s not very democratic—arguably, that’s the reason why we aren’t spending money on infrastructure, not because (as pundits like Cassidy might have it), “Washington” has “gotten too political.” The problem with American spending on infrastructure, in sum, is not that it is political. In fact, it is precisely the opposite: it isn’t political enough. That people like John Cassidy—who, by the way, is a transplanted former subject of the Queen of England—think the contrary is itself, I’d wager, reason enough to give him, and people like him, what the boys from Liverpool called a ticket to ride.

Extra! Extra! White Man Wins Election!

 

Whenever you find yourself on the side of the majority, it is time to pause and reflect.
—Mark Twain

One of the more entertaining articles I’ve read recently appeared in the New York Times Magazine last October; written by Ruth Padawer and entitled “When Women Become Men At Wellesley,” it’s about how the newest “challenge,” as the terminology goes, facing American women’s colleges these days is the rise of students “born female who identified as men, some of whom had begun taking testosterone to change their bodies.” The beginning of the piece tells the story of “Timothy” Boatwright, a woman who’d decided she felt more like a man, and how Boatwright had decided to run for the post of “multicultural affairs coordinator” at the school, with the responsibility of “promoting a ‘culture of diversity’ among students and staff and faculty members.” After three “women of color” dropped out of the race for various unrelated reasons, Boatwright was the only candidate left—which meant that Wellesley, a women’s college, remember, would have as its next “diversity” official a white man. Yet according to Padawer this result wasn’t necessarily as ridiculous as it might seem: “After all,” the Times reporter said, “at Wellesley, masculine-of-center students are cultural minorities.” In the race to produce more and “better” minorities, then, Wellesley has produced a win for the ages—a result that, one might think, would cause reasonable people to stop and consider: just what is it about American society that is causing Americans constantly to redescribe themselves as one kind of “minority” or another? Although the easy answer is “because Americans are crazy,” the real answer might be that Americans are rationally responding to the incentives created by their political system: a system originally designed, as many historians have begun to realize, to protect a certain minority at the expense of the majority.

That, after all, is a constitutional truism, often repeated like a mantra by college students and other species of cretin: the United States Constitution, goes the zombie-like repetition, was designed to protect against the “tyranny of the majority”—even though that exact phrase was first used by John Adams in 1788, a year after the Constitutional Convention. It is, however, true that Number 10 of the Federalist Papers does mention “the superior force of an interested and overbearing majority”—yet what those who discuss the supposed threat of the majority never seem to mention is that, while the United States Constitution is indeed constructed with many, indeed a nearly bewildering, variety of protections for the “minority,” the minority being protected at the moment of the Constitution’s writing was not some vague and theoretical interest: the authors of the Constitution were not professors of political philosophy sitting around a seminar room. Instead, the United States Constitution was, as political scientist Michael Parenti has put it, “a practical response to immediate material conditions”—in other words, the product of political horse-trading that resulted in a document protecting a very particular, and real, minority; one with names and families and, more significantly, a certain sort of property.

That property, as historians today are increasingly recognizing, was slavery. It isn’t for nothing that, as historian William Lee Miller has observed, not only was it that “for fifty of [the nation’s] first sixty four [years], the nation’s president was a slaveholder,” but also that the “powerful office of the Speaker of the House was held by a slaveholder for twenty-eight of the nation’s first thirty-five years,” and that the president pro tem of the Senate—one of the more obscure, yet still powerful, federal offices—“was virtually always a slaveholder.” Both Chief Justices of the Supreme Court through the first five decades of the nineteenth century, John Marshall and Roger Taney, were slaveholders, as were very many federal judges and other, lesser, federal office holders. As historian Garry Wills, author of Lincoln At Gettysburg among other volumes, has written, “the management of the government was disproportionately controlled by the South.” The reason why all of this was so was, as it happens, very ably explained at the time by none other than … Abraham Lincoln.

What Lincoln knew was that there was a kind of “thumb on the scale” when Northerners like the two Adamses, John and John Quincy, were weighed in national elections—a not-so-mysterious force that denied those Northern, anti-slavery men second terms as president. Lincoln himself explained what that force was in the speech he gave at Peoria, Illinois, that signaled his return to politics in 1854. There, Lincoln observed that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

What Lincoln is talking about here is the notorious “Three-Fifths Compromise” of Article I, Section 2, Paragraph 3 of the United States Constitution. According to that proviso, slave states were entitled to representation in Congress according to the ratio of “three fifths of all other persons”—those “other persons” being, of course, Southern slaves. And what the future president—the first president, it might be added, to be elected without the assistance of that ratio (a fact that would have, as I shall show, its own consequences)—was driving at was the effect this mathematical ratio was having on the political landscape of the country.
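
The clause’s arithmetic can be sketched in a few lines of Python—a minimal illustration only, and the two state populations below are hypothetical, chosen simply to show the effect:

```python
# Sketch of the three-fifths rule (Art. I, Sec. 2, Par. 3): a state's
# population for apportioning House seats counted all free persons plus
# "three fifths of all other persons," i.e. the enslaved.
def apportionment_basis(free_persons: int, enslaved_persons: int) -> float:
    """Population counted toward House seats under the 1787 rule."""
    return free_persons + (3 / 5) * enslaved_persons

# Two hypothetical states with identical free populations:
free_state = apportionment_basis(300_000, 0)        # counts 300,000
slave_state = apportionment_basis(300_000, 200_000)  # counts 420,000
print(free_state, slave_state)
```

The slave state, with the same number of free inhabitants, counts an extra 120,000 persons toward its House seats—and, since electoral votes track House seats, toward its weight in presidential elections as well.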

As Lincoln remarked in the same Peoria speech, the Three-Fifths Compromise meant that “five slaves are counted as being equal to three whites,” so that, as a practical matter, “it is an absolute truth, without an exception, that there is no voter in any slave State, but who has more legal power in the government, than any voter in any free State.” To put it more plainly, Lincoln said that the three-fifths clause “in the aggregate, gives the slave States, in the present Congress, twenty additional representatives.” Since the Constitution gave the same advantage in the Electoral College as it gave in the Congress, the reason for results like, say, the Adamses’ lack of presidential staying power isn’t that hard to discern.
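
Lincoln’s own Peoria figures make the per-voter tilt concrete; the numbers below are the ones he cites for Maine and South Carolina (a small sketch, nothing more):

```python
# Lincoln's figures: in 1854 both states held six House seats (and thus
# eight electoral votes) despite very different free populations.
maine_whites = 581_813
sc_whites = 274_567
seats = 6

# Representation per white inhabitant in each state.
weight_ratio = (seats / sc_whites) / (seats / maine_whites)
print(f"A South Carolina voter carried {weight_ratio:.2f}x "
      f"the weight of a Maine voter")

# Lincoln's own check: Maine has "twice as many ... and 32,679 over."
assert maine_whites == 2 * sc_whites + 32_679
```

The ratio comes out to just over two—exactly Lincoln’s point that “each white man in South Carolina is more than the double of any man in Maine.”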

“One of those who particularly resented the role of the three-fifths clause in warping electoral college votes,” notes Miller, “was John Adams, who would probably have been reelected president over Thomas Jefferson in 1800 if the three-fifths ratio had not augmented the votes of the Southern states.” John Quincy himself took part in two national elections, those of 1824 and 1828, that were skewed by what was termed at the time the “federal ratio”—which is to say that the reason both Adamses were one-term presidents likely had rather more to do with the form of the American government than with the content of their character, despite the representations of many historians after the fact.

Adams himself was quite aware of the effect of the “federal ratio.” The Hartford Convention of 1815, led by New Englanders like Adams, had recommended ending the advantage of the Southern states within the Congress, and in 1843 John Quincy’s son Charles Francis Adams moved the Massachusetts legislature to pass a measure that John Quincy would himself introduce to the U.S. Congress: “a resolution proposing that the Constitution be amended to eliminate the three-fifths ratio,” as Miller has noted. There were three more such attempts in 1844, three years before Lincoln’s arrival in Congress, all of which were soundly defeated, as Miller observes, by totals “skewed by the feature the proposed amendment would abolish.” The three-fifths ratio was not simply a bête noire of the Adamses personally; all of New England was aware that the three-fifths ratio protected the interests of the South in the national government—which is one reason why, prior to the Civil War, “states’ rights” was often thought of as a Northern issue rather than a Southern one.

That the South itself recognized the advantages the United States Constitution gave it, specifically through that document’s protections of “minority”—in other words, slaveowner—interests, can be seen by reference to the reasons the South gave for starting the Civil War. South Carolina’s late 1860 declaration of secession, for example—the first such declaration—said outright that the state’s act of secession was provoked by the election of Abraham Lincoln: in other words, by the election of a presidential candidate who did not need the electoral votes of the South.

Hence, South Carolina’s declaration said that a “geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery.” The election had been enabled, the document went on to say, “by elevating to citizenship, persons who, by the supreme law of the land, are incapable of becoming citizens, and their votes have been used to inaugurate a new policy, hostile to the South.” Presumably, this is a veiled reference to the population gained by the Northern states over the course of the nineteenth century—a trend that had steadily weakened the advantage the South initially enjoyed at the expense of the North when the Constitution was enacted, and one that only accelerated during the 1850s.

As one Northern newspaper observed in 1860, in response to the early figures then being released from the federal census, the “difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” To Southerners the data had a different meaning: as Adam Goodheart noted in a piece for the New York Times’ series on the Civil War, Disunion, “the editor of the New Orleans Picayune noted that states like Michigan, Wisconsin, Iowa and Illinois would each be gaining multiple seats in Congress” while Southern states like Virginia, South Carolina and Tennessee would be losing seats. To the Southern slaveowners who would drive the road to secession during the winter of 1860, the fact that they were on the losing end of a demographic war could not have been far from their minds.

Historian Leonard L. Richards of the University of Massachusetts, for example, has noted that when Alexis de Tocqueville traveled the American South in the early 1830s, he discovered that Southern leaders were “noticeably ‘irritated and alarmed’ by their declining influence in the House [of Representatives].” By the 1850s, those population trends had only accelerated: concerning the gains in population the Northern states were realizing through foreign immigration—presumably the subject of South Carolina’s complaint about persons “incapable of becoming citizens”—Richards cites Senator Stephen Adams of Mississippi, who “blamed the South’s plight”—that is, its declining population relative to the North—“on foreign immigration.” As Richards says, it was obvious to anyone paying attention to the facts that if “this trend continued, the North would in fifteen years have a two to one majority in the House and probably a similar majority in the Senate.” It is difficult to imagine that the most intelligent of Southern leaders were not cognizant of these elementary facts.

Their intellectual leaders, above all John Calhoun, had after all designed a political theory to justify the Southern—i.e., “minority”—dominance of the federal government. In Calhoun’s A Disquisition on Government, the South Carolina senator argued that a government “under the control of the numerical majority” would tend toward “oppression and abuse of power”—and it was to correct this tendency, he writes, that the constitution of the United States made its different branches “the organs of the distinct interests or portions of the community; and to clothe each with a negative on the others.” It is, in other words, a fair description of the constitutional doctrine known as the “separation of powers,” a doctrine that Calhoun barely dresses up as something other than what it is: a brief for the protection of the right to own slaves. Every time anyone utters the phrase “protecting minority rights,” in other words, they are, wittingly or not, invoking the ideas of John Calhoun.

In any case, such a history could explain just why Americans are so eager to describe themselves as a “minority,” of whatever kind. After all, the initial purpose of the American government was to protect a particular minority, and so in political terms it makes sense to describe oneself as such in order to enjoy the protections that, first built into the system, have since become endemic to American government: for example, the practice of racial gerrymandering, which has the perhaps-beneficial effect of protecting a particular minority—at the probable expense of the interests of the majority. Such a theory might also explain something else: just how it is, as professor Walter Benn Michaels of the University of Illinois at Chicago has remarked, that after “half a century of anti-racism and feminism, the U.S. today is a less equal society than was the racist, sexist society of Jim Crow.” Or, perhaps, how the election of—to use that favorite tool of American academics, quotation marks to signal irony—a “white man” at a women’s college can, somehow, be a “victory” for whatever the American “left” now is. The real irony, of course, is that, in seeking to protect African-Americans and other minorities, that supposed left is merely reinforcing a system originally designed to protect slavery.