Several And A Single Place

 

What’s the matter,
That in these several places of the city
You cry against the noble senate?
Coriolanus 

 

The explanation, says labor lawyer Thomas Geoghegan, possesses amazing properties: he can, the one-time congressional candidate says, “use it to explain everything … because it seems to work on any issue.” But before trotting out what that explanation is, let me select an issue that might appear difficult to explain: gun control, and more specifically just why, as Christopher Ingraham of the Washington Post wrote in July, “it’s never the right time to discuss gun control.” “In recent years,” as Ingraham says, “politicians and commentators from across the political spectrum have responded to mass shootings with an invocation of the phrase ‘now is not the time,’ or a close variant.” That inability even to discuss gun control is a tremendously depressing fact, at least insofar as you regard gun deaths as a needless waste of lives—until you realize that we Americans have been here before. And that demonstrates, just maybe, that Thomas Geoghegan has a point.

Over a century and a half ago, Americans were facing another issue that, in the words of one commentator, “must not be discussed at all.” It was so grave an issue, in fact, that very many Americans found “fault with those who denounce it”—a position that this commentator found odd: “You say that you think [it] is wrong,” he observed, “but you denounce all attempts to restrain it.” That’s a pretty strange position, because who thinks something is wrong, and yet is “not willing to deal with [it] as a wrong?” What other subject could be called a wrong, but should not be called “wrong in politics because that is bringing morality into politics,” and conversely should not be called “wrong in the pulpit because that is bringing politics into religion”? To sum up, this commentator said, “there is no single place, according to you, where this wrong thing can properly be called wrong!”

The place where this was said was New Haven, Connecticut; the time, March of 1860; the speaker, a failed senatorial candidate now running for president for a brand-new political party. His name was Abraham Lincoln.

He was talking about slavery.

*          *          *

To many historians these days, much about American history can be explained by the fact that, as historian Leonard Richards of the University of Massachusetts put it in his 2000 book, The Slave Power: The Free North and Southern Domination, 1780-1860, so “long as there was an equal number of slave and free states”—which was more or less official American policy until the Civil War—“the South needed just one Northern vote to be an effective majority in the Senate.” That meant that controlling “the Senate, therefore, was child’s play for southern leaders,” and so “time and again a bill threatening the South [i.e., slavery above all else] made its way through the House only to be blocked in the Senate.” It’s a stunningly obvious point, at least in retrospect—at least for this reader—but I’d wager that few, if any, Americans have really thought through the consequences of this fact.

Geoghegan for example has noted that—as he put it in 1998’s The Secret Lives of Citizens: Pursuing the Promise of American Life—even today the Senate makes it exceedingly difficult to pass legislation: as he wrote, at present “two-fifths of the Senate, or forty-one senators, can block any bill.” That is, it takes at least sixty senatorial votes, a supermajority, to invoke “cloture” and so overcome the delaying tactic known as the filibuster. The filibuster, however, is not the only anti-majoritarian feature of the Senate, which is also equipped with such quaint customs as the “secret hold” and the quorum call and so forth, each of which can be used to delay a bill’s hearing—and so buy time to squelch potential legislation. Yet these radically disproportionate senatorial powers merely mask the basic inequality of representation at the heart of the Senate as an institution.

As political scientists Frances Lee and Bruce Oppenheimer point out in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, the Senate is, because it makes small states the equal of large ones, “the most malapportioned legislature in the democratic world.” As Geoghegan has put the point, “the Senate depart[s] too much from one person, one vote,” because (as of the late 1990s) “90 percent of the population base as represented in the Senate could vote yes, and the bill would still lose.” Although Geoghegan wrote that nearly two decades ago, that is still largely true today: in 2013, Dylan Matthews of The Washington Post observed that while the “smallest 20 states amount to 11.27 percent of the U.S. population,” their senators “can successfully filibuster [i.e., block] legislation.” Thus, although the Senate is merely one antidemocratic feature of the U.S. Constitution, it’s an especially egregious one that, by itself, largely prevented a serious discussion of slavery in the years before the Civil War—and today prevents the serious discussion of gun control.
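
The arithmetic behind figures like that 11.27 percent is simple enough to sketch in a few lines of code. What follows is an illustration only, not a reproduction of Matthews’s calculation: the ten “states” and their populations are invented, and counting a state’s whole population even when only one of its two senators is strictly needed slightly overstates the coalition’s share, but the greedy logic is the same one that gets applied to real census data.

```python
# A minimal sketch of the Senate's malapportionment arithmetic.
# The state names and populations below are made up for illustration;
# the logic, not the data, is the point.

def min_population_share(populations, senators_needed, seats_per_state=2):
    """Smallest share of the total population whose senators could supply
    `senators_needed` votes, taking the least populous states first."""
    total = sum(populations.values())
    coalition_pop, senators = 0, 0
    for state, pop in sorted(populations.items(), key=lambda kv: kv[1]):
        if senators >= senators_needed:
            break
        senators += seats_per_state
        coalition_pop += pop   # counts the whole state, even if only one
                               # of its two senators is strictly needed
    return coalition_pop / total

# Toy ten-state federation (hypothetical numbers).
toy_states = {
    "A": 1_000_000, "B": 1_200_000, "C": 1_500_000, "D": 2_000_000,
    "E": 3_000_000, "F": 5_000_000, "G": 8_000_000, "H": 12_000_000,
    "I": 20_000_000, "J": 35_000_000,
}

# Ten states means twenty senators: nine of them can block a three-fifths
# cloture vote (the analogue of 41 out of 100), and eleven are a majority.
print(f"blocking coalition: {min_population_share(toy_states, 9):.1%} of the population")
print(f"bare majority:      {min_population_share(toy_states, 11):.1%} of the population")
```

Run against actual state populations instead of these toy figures, the same greedy count is what yields numbers in the neighborhood of the ones Matthews reported.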

The headline of John Bresnahan’s 2013 article in Politico about the response to the Sandy Hook massacre, for example, was “Gun control hits brick wall in Senate.” Bresnahan quoted Nevadan Harry Reid, the Senate Majority Leader at the time, as saying that “the overwhelming number of Senate Republicans—and that is a gross understatement—are ignoring the voices of 90 percent of the American people.” The final vote was 54-46: in other words, a majority of the Senate was in favor of controls, but because the pro-control senators did not have a supermajority, the measure failed. In short, the vote was a near-perfect illustration of how the Senate can kill a measure that 90 percent of Americans favor.

And you know what? Whatever you think about gun control as an issue, if 90 percent of Americans want something, and what prevents them is not just a silly rule—but the same rule that protected slavery—well then, as Abraham Lincoln might tell us, that’s a problem.

It’s a problem because far from the Senate being—as George Washington supposedly said to Thomas Jefferson—the saucer that cools off politics, it’s actually a pressure cooker that exacerbates issues rather than working them out. Imagine, say, that the South had not had the Senate to protect its “peculiar institution” in the years leading to the Civil War: immigration to the North would gradually have turned the tide in Congress, which might have led to a series of small pieces of legislation that, eventually, would have abolished slavery.

Perhaps that would not have been a good thing: Ta-Nehisi Coates, of The Atlantic, has written that every time he thinks of the 600,000-plus deaths that occurred as a result of the Civil War, he feels “positively fucking giddy.” That may sound horrible to some, of course, but there is something to the notion of “redemptive violence” when it comes to that war; Coates, for instance, cites the contemporary remarks of Private Thomas Strother, United States Colored Troops, in the Christian Recorder, the nineteenth-century newspaper of the African Methodist Episcopal Church:

To suppose that slavery, the accursed thing, could be abolished peacefully and laid aside innocently, after having plundered cradles, separated husbands and wives, parents and children; and after having starved to death, worked to death, whipped to death, run to death, burned to death, lied to death, kicked and cuffed to death, and grieved to death; and worst of all, after having made prostitutes of a majority of the best women of a whole nation of people … would be the greatest ignorance under the sun.

“Were I not the descendant of slaves, if I did not owe the invention of my modern self to a bloody war,” Coates continues, “perhaps I’d write differently.” Maybe in some cosmic sense Coates is wrong, and violence is always wrong—but I don’t think I’m in a position to judge, particularly since I, as in part a descendant of Irish men and women in America, am aware that the Irish themselves may have codified that sort of “blood sacrifice theory” in the General Post Office of Dublin during Easter Week of 1916.

Whatever you think of that, there is certainly something to the idea that, because slaves were the single biggest asset in the entire United States in 1860, there was little chance the South would have agreed to end slavery without a fight. As historian Steven Deyle has noted in his Carry Me Back: The Domestic Slave Trade in American Life, the value of American slaves in 1860 was “equal to about seven times the total value of all currency in circulation in the country, three times the value of the entire livestock population, twelve times the value of the entire U.S. cotton crop and forty-eight times the total expenditure of the federal government”—certainly far more value than it takes to start a war over. But then, had slavery not had, in effect, government protection during those antebellum years, it’s questionable whether slaves would ever have become such valuable commodities in the first place.

Far from “cooling” things off, in other words, it’s entirely likely that the U.S. Senate, and other anti-majoritarian features of the U.S. Constitution, actually act to inflame controversy. By ensuring that one side does not need to come to the bargaining table, in fact, all such oddities merely postpone—they do not prevent—the day of reckoning. They build up fuel, ensuring that when the day finally arrives, it is all the more terrible. Or, to put it in the words of an old American song: these American constitutional idiosyncrasies merely trample “out the vintage where the grapes of wrath are stored.”

That truth, it seems, marches on.


Joe Maddon and the Fateful Lightning 

All things are an interchange for fire, and fire for all things,
just like goods for gold and gold for goods.
—Heraclitus


Last month, one of the big stories about presidential candidate and Wisconsin governor Scott Walker was his plan not only to cut the state’s education budget, but also to change state law in order to allow, according to The New Republic, “tenured faculty to be laid off at the discretion of the chancellors and Board of Regents.” Given that Wisconsin was the scene of the Ely case of 1894—which ended with the board of trustees of the University of Wisconsin issuing the ringing declaration: “Whatever may be the limitations which trammel inquiry elsewhere we believe the great state University of Wisconsin should ever encourage that continual and fearless sifting and winnowing by which alone truth can be found”—Walker’s attempt is a threat to the entire system of tenure. Yet it may be that American academia in general, if not Wisconsin academics in particular, is not entirely blameless—not because, as American academics might smugly like to think, they are so totally radical, dude, but on the contrary because they have not been radical enough: to the point that, as I will show, probably the most dangerous, subversive and radical thinker on the North American continent at present is not an academic, nor even a writer, at all. His name is Joe Maddon, and he is the manager of the Chicago Cubs.

First though, what is Scott Walker attempting to do, and why is it a big deal? Specifically, Walker wants to change Section 39 of the relevant Wisconsin statute so that Wisconsin’s Board of Regents could, “with appropriate notice, terminate any faculty or academic staff appointment when such an action is deemed necessary … instead of when a financial emergency exists as under current law.” In other words, Walker’s proposal would allow Wisconsin’s Board of Regents to fire virtually anyone at will, which is why the American Association of University Professors “has already declared that the proposed law would represent the loss of a viable tenure system,” as reported by TNR.

The rationale given for the change is the usual one of allowing for more “flexibility” on the part of campus leaders: by doing so, supposedly, Wisconsin’s university system can better react to the fast-paced changes of the global economy … feel free to insert your own corporate-speak clichés here. The seriousness with which Walker takes the university’s mission as a searcher for truth might be discerned from the fact that he appointed the son of his campaign chairman to the Board of Regents—nepotism apparently being, in Walker’s view, a sure sign of intellectual probity.

The tenure system was established, of course, exactly to prevent political-appointee yahoos from having anything to say about the production of truth—a principle that, one might think, ought to be sacrosanct, especially in the United States, where every American today lives off the fruits of intellectual production usually conducted in a university lab. (For starters, it was the University of Chicago that gave us what conservatives seem to like to think of as the holy shield of the atomic bomb.) But it’s difficult to blame “conservatives” for doing what’s in, as the scorpion said to the frog, their nature: what’s more significant is that academics ever allowed this to happen in the first place—and while it is surely the case that all victims everywhere wish to hold themselves entirely blameless for whatever happens to them, it’s also true that no one is surprised when a car driving the wrong way gets hit.

A clue toward how American academia has been driving the wrong way can be found in a New Yorker story from last October, in which Maria Konnikova described a talk moral psychologist Jonathan Haidt gave to the Society for Personality and Social Psychology. The thesis of the talk? That psychology, as a field, had “a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity.” In other words, the whole field was inhabited by people who were at least liberal, many of them outright radical, and very few conservatives.

To Haidt, this was a problem because it “introduced bias into research questions [and] methodology,” particularly concerning “politicized notions, like race, gender, stereotyping, and power and inequality.” Yet a follow-up study surveying 800 social psychologists found something interesting: actually, these psychologists were only markedly left-of-center compared to the general population when it came to something called “the social-issues scale.” Whereas in economic matters or foreign affairs, these professors tilted left at about a sixty to seventy percent clip, when it came to what sometimes are called “culture war” issues the tilt was in the ninety percent range. It’s the gap between those measures, I think, that Scott Walker is able to exploit.

In other words, while it ought to be borne in mind that this is merely one study of a narrow range of professors, the study doesn’t disprove Professor Walter Benn Michaels’ generalized assertion that American academia has largely become the “human resources department of the right”: that is, the figures seem to say that, sure, economic inequality sorta bothers some of these smart guys and gals—but really to wind them up you’d best start talking about racism or abortion, buster. And what that might mean is that the rise of so-called “tenured radicals” since the 1960s hasn’t really been the fearsome beast the conservative press likes to make it out to be: in fact, it might be that—like some predator/prey model from ecology—the more left the professoriate turns, the more conservative the nation becomes.

That’s why it’s Joe Maddon of the Chicago Cubs, rather than any American academic, who is the most radical man in America right now. Why? Because Joe Maddon is doing something interesting in these days of American indifference to reality: he is paying attention to what the world is telling him, and doing something about it in a manner that many, if not most, academics could profit by examining.

What Joe Maddon is doing is batting the pitcher eighth.

That might, obviously, sound like small beer when the most transgressive of American academics are plumbing the atomic secrets of the universe, or questioning the existence of the biological sexes, or investigating any of the other surely fascinating topics currently on offer in the American academy. In fact, however, there is at present no more important philosophical topic of debate anywhere in America, from the literary salons of New York City to the programming pits of Northern California, than the one that has been ongoing throughout this mildest of summers on the North Side of the city of Chicago.

Batting the pitcher eighth is a strategy that has been tried before in the history of American baseball: in 861 games since 1914. But twenty percent of those games, reports Grantland, “have come in 2015,” this season, and of those games, 112 and counting have been played by the Chicago Cubs—because in every single game the Cubs have played this year, the pitcher has batted in the eighth spot. That’s something no major league baseball team has ever done—and the reason Joe Maddon has for tossing aside baseball orthodoxy like so many spit cups of tobacco juice is the reason why, eggheads and corporate lackeys aside, Joe Maddon is at present the most screamingly dangerous man in America.

Joe Maddon is dangerous because he saw something in a peculiarity of the rules of baseball, something most fans are so inured to that they have become unconscious of its meaning. That peculiarity is this: baseball has history. The phrase might sound vague and sentimental, but that’s not the point at all: what it refers to is that, with every new inning, a baseball lineup does not begin again at the beginning, but instead picks up with the next player after the last batter of the previous inning. This matters because, traditionally, pitchers bat in the ninth spot in a given lineup: they are usually the weakest batters on any team by a wide margin, so by batting them last, a manager usually ensures that they do not come to the plate until the second, or even third, inning at the earliest. Batting the pitcher ninth enables a manager to hide his weaknesses and emphasize his strengths.

That has been orthodox doctrine since the beginnings of the sport: the tradition is so strong that when Babe Ruth, who first played in the major leagues as a pitcher, came to Boston he initially batted in the ninth spot. But what Maddon saw was that while the orthodox theory does minimize the number of plate appearances the pitcher gets, that does not in itself necessarily maximize the overall efficiency of the offense—because, as Russell Carleton put it for FoxSports, “in baseball, a lot of scoring depends on stringing a couple of hits together consecutively before the out clock runs out.” In other words, while batting the pitcher ninth does hide that weakness as much as possible, the strategy also involves giving up an opportunity: in the words of Ben Lindbergh of Grantland, by “hitting a position player in the 9-hole as a sort of second leadoff man,” a manager could “increase the chances of his best hitter(s) batting with as many runners on base as possible.” Because baseball lineups do not start over with every new inning, batting the weakest hitter last means that a lineup’s best hitters—usually those in the one through three spots—do not come to the plate with as many runners on base as they otherwise might.

Now, the value of this move of putting the pitcher eighth is debated by baseball statisticians: “Study after study,” says Ben Lindbergh of Grantland, “has shown that the tactic offers at best an infinitesimal edge: two or three runs per season in the right lineup, or none in the wrong one.” In other words, Maddon may very well be chasing a will-o’-the-wisp, a perhaps-illusory advantage: as Lindbergh says, “it almost certainly isn’t going to make or break the season.” Yet, in an age in which runs are much scarcer than they were in the juiced-up steroid era of the 1990s, and simultaneously the best teams in the National League (the American League, whose designated-hitter rule spares pitchers from batting, is immune to the problem) are separated in the standings by only a few games, a couple of runs over the course of a season may be exactly what allows one team to make the playoffs and, conversely, prevents another from doing the same: “when there’s so little daylight separating the top teams in the standings,” as Lindbergh also remarked, “it’s more likely that a few runs—which, once in a while, will add an extra win—could actually account for the difference between making and missing the playoffs.” Joe Maddon, in other words, is attempting to squeeze every last run he can from his players with every means at his disposal—even if it means taking on a doctrine that has been part of baseball nearly since its beginnings.
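
It is not hard to see, at least in rough outline, where estimates like “two or three runs per season” come from. The sketch below is a toy Monte Carlo model, not Lindbergh’s or Carleton’s analysis: the on-base probabilities are invented, and every successful plate appearance is treated as a single that moves each runner up exactly one base. What the toy does capture is the mechanism described above, a lineup that carries over from inning to inning, and it lets you measure the pitcher-eighth versus pitcher-ninth gap under whatever assumptions you care to plug in.

```python
import random

def runs_per_season(lineup, games=162, innings=9, trials=1000, seed=0):
    """Toy estimate of runs per season for a nine-man lineup of on-base
    probabilities. Every successful plate appearance is treated as a single
    that advances each runner exactly one base; everything else is an out.
    The batting order carries over from one inning to the next, as in the
    real game."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _ in range(games):
            spot = 0                      # lineup position for this game
            for _ in range(innings):
                outs, bases = 0, [False, False, False]
                while outs < 3:
                    if rng.random() < lineup[spot % 9]:
                        total += bases[2]     # runner on third scores
                        bases = [True] + bases[:2]
                    else:
                        outs += 1
                    spot += 1
    return total / trials

# Invented on-base probabilities: eight position players and a weak-hitting pitcher.
hitters = [0.36, 0.34, 0.35, 0.33, 0.32, 0.31, 0.30, 0.30]
pitcher = 0.15

pitcher_ninth  = hitters + [pitcher]
pitcher_eighth = hitters[:7] + [pitcher] + hitters[7:]

r9 = runs_per_season(pitcher_ninth)
r8 = runs_per_season(pitcher_eighth)
print(f"pitcher ninth:  {r9:.1f} runs/season")
print(f"pitcher eighth: {r8:.1f} runs/season")
print(f"difference:     {r8 - r9:+.1f} runs/season")
```

With numbers this crude the measured gap will be small and partly simulation noise; a more serious estimate would plug in real on-base figures and a less naive model of base-running. The point of the sketch is only that the lineup’s “history” is what makes the question non-trivial in the first place.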

Yet why should that matter at all, much less make Joe Maddon perhaps the greatest threat to the tranquility of the Republic since John Brown? The answer is that Joe Maddon is relentlessly focused on the central meaningful event of his business: the act of scoring. Joe Maddon’s job is to make sure that his team scores as many runs as possible, and he is willing to do what it takes in order to make that happen. The reason that he is so dangerous—and why the academics of America may just deserve the thrashing the Scott Walkers of the nation appear so willing to give them—is that American democracy is not so single-mindedly devoted to getting the maximum value out of its central meaningful event: the act of voting.

Like the baseball insiders who scoff at Joe Maddon for scuttling after a spare run or two over the course of 162 games—like the major league assistant general manager quoted by Lindbergh who dismissed the concept by saying “the benefit of batting the pitcher eighth is tiny if it exists at all”—American political insiders believe that a system that profligately disregards the value of votes doesn’t really matter over the course of a political season—or century. And it is indisputable that the American political system is profligate with the value of American votes. The number of votes it takes to win a single elector in the Electoral College, for example, can differ by hundreds of thousands from one state to another; while through “the device of geographic—rather than population-based—representation in the Senate, [the system] substantially dilutes the voice and voting power of the majority of Americans who live in urban and metropolitan areas in favor of those living in rural areas,” as one Princeton political scientist has put the point. Or to put it more directly, as Dylan Matthews put it for the Washington Post two years ago, if “senators representing 17.82 percent of the population agree, they can get a majority”—while on the other hand “11.27 percent of the U.S. population,” as represented by the smallest 20 states, “can successfully filibuster legislation.” Perhaps most significantly, as Frances Lee and Bruce Oppenheimer have shown in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, “less populous states consistently receive more federal funding than states with more people.” As presently constructed, in other words, the American political system is designed to waste votes, not to seek all of their potential value.
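
The Electoral College half of that claim is easy to make concrete. The figures below are invented rather than census data: an imaginary large state and an imaginary small one. The formula, though, is the actual one: a state’s electors equal its House seats plus its two Senate seats, which gives the smallest states a floor of three electors no matter how few people live there.

```python
# Hypothetical illustration of how unevenly the Electoral College weighs people.
# The two "states" are invented; the elector formula (House seats + 2) is real.

def people_per_elector(population, house_seats):
    return population / (house_seats + 2)

big_state   = people_per_elector(population=30_000_000, house_seats=40)
small_state = people_per_elector(population=600_000,    house_seats=1)

print(f"big state:   one elector per {big_state:,.0f} people")   # ~714,000
print(f"small state: one elector per {small_state:,.0f} people") # 200,000
```

With these invented figures the gap works out to roughly half a million people per elector, which is the scale of disparity the passage has in mind; the size of the real gap depends on which pair of states one compares.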

American academia, however, does not discuss such matters. Indeed, the disciplines usually thought of as the most politically “radical”—usually those in the humanities—are more or less expressly designed to rule out the style of thought (naturalistic, realistic) taken up here: one reason, perhaps, for the split, reported by Maria Konnikova, between psychology professors’ opinions on economic matters and their opinions on “cultural” ones. Yet just because an opinion is not registered in academia does not mean it does not exist: imbalances are inevitably corrected, as will undoubtedly happen in this matter of the relative value of an American vote. The problem, of course, is that such “price corrections,” when it comes to issues like this, are not particularly known for being calm or smooth. There is perhaps one upside, however: when that correction happens—and there is no doubt that the day of what the song calls “the fateful lightning” will arrive, be it tomorrow or in the coming generations—Joe Maddon may receive his due as not just a battler on the front lines of sport, but a warrior for justice. That, at least, might not be entirely surprising to his fellow Chicagoans—who remember that it was not the flamboyant tactics of busting up liquor stills that ultimately got Capone, but instead the slow and patient work of tax accountants and auditors.

You know, the people who counted.

Extra! Extra! White Man Wins Election!

 

Whenever you find yourself on the side of the majority,
it is time to pause and reflect.
—Mark Twain

One of the more entertaining articles I’ve read recently appeared in the New York Times Magazine last October; written by Ruth Padawer and entitled “When Women Become Men At Wellesley,” it’s about how the newest “challenge,” as the terminology goes, facing American women’s colleges these days is the rise of students “born female who identified as men, some of whom had begun taking testosterone to change their bodies.” The beginning of the piece tells the story of “Timothy” Boatwright, a woman who’d decided she felt more like a man, and how Boatwright had decided to run for the post of “multicultural affairs coordinator” at the school, with the responsibility of “promoting a ‘culture of diversity’ among students and staff and faculty members.” After three “women of color” dropped out of the race for various unrelated reasons, Boatwright was the only candidate left—which meant that Wellesley, a women’s college, remember, would have as its next “diversity” official a white man. Yet according to Padawer this result wasn’t necessarily as ridiculous as it might seem: “After all,” the Times reporter said, “at Wellesley, masculine-of-center students are cultural minorities.” In the race to produce more and “better” minorities, then, Wellesley has produced a win for the ages—a result that, one might think, would cause reasonable people to stop and consider: just what is it about American society that is causing Americans constantly to redescribe themselves as one kind of “minority” or another? Although the easy answer is “because Americans are crazy,” the real answer might be that Americans are rationally responding to the incentives created by their political system: a system originally designed, as many historians have begun to realize, to protect a certain minority at the expense of the majority.

That, after all, is a constitutional truism, often repeated like a mantra by college students and other species of cretin: the United States Constitution, goes the zombie-like repetition, was designed to protect against the “tyranny of the majority”—even though that exact phrase was first used by John Adams in 1788, a year after the Constitutional Convention. It is however true that Number 10 of the Federalist Papers does mention “the superior force of an interested and overbearing majority”—yet what those who discuss the supposed threat of the majority never seem to mention is that, while the United States Constitution is constructed with a great, indeed nearly bewildering, variety of protections for the “minority,” the minority being protected at the moment of the Constitution’s writing was not some vague and theoretical interest: the authors of the Constitution were not professors of political philosophy sitting around a seminar room. Instead, the United States Constitution was, as political scientist Michael Parenti has put it, “a practical response to immediate material conditions”—in other words, the product of political horse-trading that resulted in a document protecting a very particular, and real, minority; one with names and families and, more significantly, a certain sort of property.

That property, as historians today are increasingly recognizing, was slavery. It isn’t for nothing that, as historian William Lee Miller has observed, “for fifty of [the nation’s] first sixty four [years], the nation’s president was a slaveholder”; that the “powerful office of the Speaker of the House was held by a slaveholder for twenty-eight of the nation’s first thirty-five years”; and that the president pro tem of the Senate—one of the more obscure, yet still powerful, federal offices—“was virtually always a slaveholder.” Both Chief Justices of the Supreme Court through the first five decades of the nineteenth century, John Marshall and Roger Taney, were slaveholders, as were very many federal judges and other, lesser, federal officeholders. As historian Garry Wills, author of Lincoln At Gettysburg among other volumes, has written, “the management of the government was disproportionately controlled by the South.” The reason why all of this was so was, as it happens, very ably explained at the time by none other than … Abraham Lincoln.

What Lincoln knew was that there was a kind of “thumb on the scale” when Northerners like the two Adamses, John and John Quincy, were weighed in national elections—a not-so-mysterious force that denied those Northern, anti-slavery men second terms as president. Lincoln himself explained what that force was in the speech he gave at Peoria, Illinois, that signaled his return to politics in 1854. There, Lincoln observed that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

What Lincoln is talking about here is the notorious “Three-Fifths Compromise”: Article I, Section 2, Paragraph 3 of the United States Constitution. According to that proviso, slave states were entitled to representation in Congress according to the ratio of “three fifths of all other persons”—“all other persons” being, of course, Southern slaves. And what the future president—the first president, it might be added, to be elected without the assistance of that ratio (a fact that would have, as I shall show, its own consequences)—was driving at was the effect this mathematical ratio was having on the political landscape of the country.

As Lincoln remarked in the same Peoria speech, the Three-Fifths Compromise meant that “five slaves are counted as being equal to three whites,” which meant that, as a practical matter, “it is an absolute truth, without an exception, that there is no voter in any slave State, but who has more legal power in the government, than any voter in any free State.” To put it more plainly, Lincoln said that the three-fifths clause “in the aggregate, gives the slave States, in the present Congress, twenty additional representatives.” Since the Constitution gave the same advantage in the Electoral College as it gave in the Congress, the reason for results like, say, the Adamses’ lack of presidential staying power isn’t that hard to discern.
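
Lincoln’s Peoria arithmetic, incidentally, checks out. The few lines below use only the figures he quotes in the passage above; nothing else is assumed except the six representatives and eight electors he says each state held.

```python
# Lincoln's Peoria comparison, using only the figures quoted above.
maine_whites          = 581_813
south_carolina_whites = 274_567
electors_each         = 8      # and six representatives each

# "Maine has twice as many as South Carolina, and 32,679 over."
print(maine_whites - 2 * south_carolina_whites)              # 32679

# How much more electoral weight one white South Carolinian carried
# than one white Mainer:
weight = (electors_each / south_carolina_whites) / (electors_each / maine_whites)
print(f"{weight:.2f}x")   # about 2.12 -- "more than the double of any man in Maine"
```

The second number is Lincoln’s “more than the double”: in the allocation of federal power, each white South Carolinian counted for a bit more than two Mainers.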

“One of those who particularly resented the role of the three-fifths clause in warping electoral college votes,” notes Miller, “was John Adams, who would probably have been reelected president over Thomas Jefferson in 1800 if the three-fifths ratio had not augmented the votes of the Southern states.” John Quincy himself took part in two national elections, 1824 and 1828, that were skewed by what was termed at the time the “federal ratio”—which is to say that the reason both Adamses were one-term presidents likely had rather more to do with the form of the American government than with the content of their character, despite the representations of many historians after the fact.

Adams himself was quite aware of the effect of the “federal ratio.” The Hartford Convention of 1815, led by New England Federalists, had recommended ending the advantage of the Southern states within the Congress, and in 1843 John Quincy’s son Charles Francis Adams caused the Massachusetts legislature to pass a measure that John Quincy would himself introduce to the U.S. Congress, “a resolution proposing that the Constitution be amended to eliminate the three-fifths ratio,” as Miller has noted. There were three more such attempts in 1844, three years before Lincoln’s arrival in Congress, all of which were soundly defeated, as Miller observes, by totals “skewed by the feature the proposed amendment would abolish.” The three-fifths ratio was not simply a bête noire of the Adamses personally; all of New England was aware that the three-fifths ratio protected the interests of the South in the national government—it’s one reason why, prior to the Civil War, “states’ rights” was often thought of as a Northern issue rather than a Southern one.

That the South itself recognized the advantages the United States Constitution gave it, specifically through that document’s protections of “minority”—in other words, slaveowner—interests, can be seen by reference to the reasons the South gave for starting the Civil War. South Carolina’s late 1860 declaration of secession, for example (the first such declaration), said outright that the state’s act of secession was provoked by the election of Abraham Lincoln—in other words, by the election of a presidential candidate who did not need the electoral votes of the South.

Hence, South Carolina’s declaration said that a “geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery.” The election had been enabled, the document went on to say, “by elevating to citizenship, persons who, by the supreme law of the land, are incapable of becoming citizens, and their votes have been used to inaugurate a new policy, hostile to the South.” Presumably, this is a veiled reference to the population gained by the Northern states over the course of the nineteenth century—a trend that had steadily eroded the advantage the South had enjoyed at the expense of the North when the Constitution was enacted, and that had only accelerated during the 1850s.

As one Northern newspaper observed in 1860, in response to the early figures being released by the United States Census Bureau at that time, the “difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” To Southerners the data had a different meaning: as Adam Goodheart noted in a piece for Disunion, the New York Times’ series on the Civil War, “the editor of the New Orleans Picayune noted that states like Michigan, Wisconsin, Iowa and Illinois would each be gaining multiple seats in Congress” while Southern states like Virginia, South Carolina and Tennessee would be losing seats. To the Southern slaveowners who would drive the rush to secession during the winter of 1860, the fact that they were on the losing end of a demographic war could not have been far from mind.

Historian Leonard L. Richards of the University of Massachusetts, for example, has noted that when Alexis de Tocqueville traveled the American South in the early 1830s, he discovered that Southern leaders were “noticeably ‘irritated and alarmed’ by their declining influence in the House [of Representatives].” By the 1850s, those population trends were only accelerating: concerning the gains in population the Northern states were realizing through foreign immigration—presumably the subject of South Carolina’s complaint about persons “incapable of becoming citizens”—Richards cites Senator Stephen Adams of Mississippi, who “blamed the South’s plight”—that is, its declining population relative to the North—“on foreign immigration.” As Richards says, it was obvious to anyone paying attention to the facts that if “this trend continued, the North would in fifteen years have a two to one majority in the House and probably a similar majority in the Senate.” It is hard to believe that the most intelligent of Southern leaders were not cognizant of these primordial facts.

Their intellectual leaders, above all John Calhoun, had after all designed a political theory to justify the Southern, i.e., “minority,” dominance of the federal government. In Calhoun’s A Disquisition on Government, the South Carolina senator argued that a government “under the control of the numerical majority” would tend toward “oppression and abuse of power”—it was to correct this tendency, he writes, that the constitution of the United States made its different branches “the organs of the distinct interests or portions of the community; and to clothe each with a negative on the others.” It is, in other words, a fair description of the constitutional doctrine known as the “separation of powers,” a doctrine that Calhoun barely dresses up as something other than what it is: a brief for the protection of the right to own slaves. Every time, in other words, anyone utters the phrase “protecting minority rights,” they are, wittingly or not, invoking the ideas of John Calhoun.

In any case, such a history could explain just why Americans are so eager to describe themselves as a “minority” of whatever kind. After all, the purpose of the American government was initially to protect a particular minority, and so in political terms it makes sense to describe oneself as such in order to enjoy the protections that, initially built into the system, have become so endemic to American government: for example, the practice of racial gerrymandering, which has the perhaps-beneficial effect of protecting a particular minority—at the probable expense of the interests of the majority. Such a theory might also explain something else: just how it is, as professor Walter Benn Michaels of the University of Illinois at Chicago has remarked, that after “half a century of anti-racism and feminism, the U.S. today is a less equal society than was the racist, sexist society of Jim Crow.” Or, perhaps, how the election of—to use that favorite tool of American academics, quote marks to signal irony—a “white man” at a women’s college can, somehow, be a “victory” for whatever the American “left” is now. The real irony, of course, is that, in seeking to protect African-Americans and other minorities, that supposed left is merely reinforcing a system originally designed to protect slavery.