A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—John Milton. Paradise Lost, Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, Eagleton is clearly the better read in theology. Yet although it may be that Dawkins the real person is ignorant of the subtleties of the study of God, the rules of logic suggest that it’s entirely possible for someone to be just as educated as Eagleton in theology—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed, but Eagleton wrote a review of someone else’s biography of him. His name was Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult, then, to think of Aquinas—the “Angelic Doctor,” as Catholics sometimes call him—as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by examining old books and telling people what they found in them, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton for example says about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what’s strange about Eagleton’s citation of Aquinas’ thought is that it is usually thought of as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief. Sometimes called “Occam’s Razor,” the axiom was invoked by Isaac Newton in the Principia, where the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture at Oxford University in 1933, Newton’s successor Albert Einstein affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs, more or less, Aquinas’ thought that there is only a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists, however, seem to have held a position closer to the view supposedly expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God matters to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is different: instead of accepting the single-world hypothesis and merely disputing whether God belongs in that world’s description, Eagleton takes issue with the “principle of parsimony” itself—with the claim that there can be merely a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befits a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask is that the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such a thought has not been limited to the eighteenth century, nor to the French side of the English Channel.

Indeed, there have long been like-minded people across the Channel—there’s reason, in fact, to think that the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as the French one.

Support for the idea of a single legislative house, called unicameralism, is, however, not limited to European sources. Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the thirsty Snake, like Buridan’s ass, cannot decide which way to proceed towards water—and hence dies of dehydration. Franklin’s concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the Parliament of the United Kingdom passed the House of Lords Act, which largely ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker, claiming—much as Norris had claimed—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are, for instance, currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral ones and two more bicameral ones, according to a 2015 report by Betty Drexage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as the advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge that depicts the sciences as opposed to something called “the humanities” is one that is merely contingent, not a necessary part of reality. Therefore, Terry Eagleton, and other scholars in those disciplines, may advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.


To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
—Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

 

“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware, but the Great Hunger of 1845–52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Connacht was, Oliver Cromwell had said two centuries before, one of the two destinations open to Irish Catholics who did not wish to be killed by his New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seed to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, then for the racism theory to hold, the American public cannot actually be getting less racist—it must merely be getting better at hiding it. That then raises the question: if the level of racism remains as high as in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to think that, somehow, racism has become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation, an all-white slate that had become the “official” delegation only by suppressing the votes of the state’s 400,000 black citizens—a charge that had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but of policy choices made by Democrats: choices that not only suggest that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in repeating, because as the notably un-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president to make economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including an inability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the basis for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable, however, is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality to which even Republican candidates other than Trump—who, aside from Bernie Sanders, was for the most part the only candidate to address these issues—began to respond during the primary season of the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz, for instance, sought to blame, absurdly, Obamacare—it’s nevertheless inarguable that the country has been radically remade economically over recent decades.

That remaking has troubling potential consequences, if they have not already become real. One of them has been ably described by Nobel Prize–winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as should surely be self-evident—but apparently isn’t, to many people—a lot more will happen economically if 500 people split twenty million dollars than if one person has all of it.
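The arithmetic behind Stiglitz’s figure is worth making explicit (the division below is mine, not a quotation from Stiglitz): Romney’s reported income, split evenly among 500 hypothetical jobs, yields exactly the salary Stiglitz names.

\[
\frac{\$21{,}700{,}000}{500} = \$43{,}400 \text{ per job}
\]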

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’s) argument in favor of a broadly based economic policy in grave doubt—which might explain just why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they demonstrate, as well, against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus, in other words—implicitly allowed by the Democratic Party’s leadership—enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism”—the right of majorities to rule. It’s a point that’s been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend,” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” Macedo asks, in short: “why should we regard majority rule as morally special?” Why should it matter, that is, if one candidate gets more votes than another? Some academics, in other words, have begun to wonder publicly why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe, for instance, in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to give the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well known as the most formidable intellectual defender of slavery in American history. His most cunning arguments, after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun they were much the same argument: to be anti-majoritarian was, more or less, to be pro-slavery. (A point that historians like Paul Finkelman of the University of Tulsa have argued is true: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré except for the fact that it was made at the time of the Constitutional Convention itself by none other than James Madison.) And that is to say that Stephen Macedo and Fannie Lou Hamer have chosen a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that such arguments have very real effects.

What’s really significant about Macedo’s “insights” into majoritarianism, in short, is that they come from the holder of a named chair at one of the most prestigious universities in the world: his work shows just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia and other pet campus causes can effectively function as what Lenin is said to have called “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to undo the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to think that their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of sense—even if it appears utterly irrational to the elite. Although Trump’s supporters might not express themselves in terms that those with elite educations find palatable—and the fact that this matters at all suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while organs of elite opinion like The New York Times and other outlets might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.

 

 

Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw what conclusions you will.

Extra! Extra! White Man Wins Election!

 

Whenever you find yourself on the side of the majority, it is time to pause and reflect.
—Mark Twain

One of the more entertaining articles I’ve read recently appeared in the New York Times Magazine last October; written by Ruth Padawer and entitled “When Women Become Men at Wellesley,” it’s about how the newest “challenge,” as the terminology goes, facing American women’s colleges these days is the rise of students “born female who identified as men, some of whom had begun taking testosterone to change their bodies.” The beginning of the piece tells the story of “Timothy” Boatwright, a woman who’d decided she felt more like a man, and how Boatwright had decided to run for the post of “multicultural affairs coordinator” at the school, with the responsibility of “promoting a ‘culture of diversity’ among students and staff and faculty members.” After three “women of color” dropped out of the race for various unrelated reasons, Boatwright was the only candidate left—which meant that Wellesley, a women’s college, remember, would have a white man as its next “diversity” official. Yet according to Padawer this result wasn’t necessarily as ridiculous as it might seem: “After all,” the Times reporter said, “at Wellesley, masculine-of-center students are cultural minorities.” In the race to produce more and “better” minorities, then, Wellesley has produced a win for the ages—a result that, one might think, would cause reasonable people to stop and consider: just what is it about American society that is causing Americans constantly to redescribe themselves as one kind of “minority” or another? Although the easy answer is “because Americans are crazy,” the real answer might be that Americans are rationally responding to the incentives created by their political system: a system originally designed, as many historians have begun to realize, to protect a certain minority at the expense of the majority.

That, after all, is a constitutional truism, often repeated like a mantra by college students and other species of cretin: the United States Constitution, goes the zombie-like repetition, was designed to protect against the “tyranny of the majority”—even though that exact phrase was first used by John Adams in 1788, a year after the Constitutional Convention. It is, however, true that Number 10 of the Federalist Papers does mention “the superior force of an interested and overbearing majority”—yet what those who discuss the supposed threat of the majority never seem to mention is that, while the United States Constitution is constructed with a large, indeed nearly bewildering, variety of protections for the “minority,” the minority being protected at the moment of the Constitution’s writing was not some vague and theoretical interest: the authors of the Constitution were not professors of political philosophy sitting around a seminar room. Instead, the United States Constitution was, as political scientist Michael Parenti has put it, “a practical response to immediate material conditions”—in other words, the product of political horse-trading that resulted in a document protecting a very particular, and real, minority: one with names and families and, more significantly, a certain sort of property.

That property, as historians today are increasingly recognizing, was slavery. It isn’t for nothing that, as historian William Lee Miller has observed, not only was it the case that “for fifty of [the nation’s] first sixty four [years], the nation’s president was a slaveholder,” but also that the “powerful office of the Speaker of the House was held by a slaveholder for twenty-eight of the nation’s first thirty-five years,” and that the president pro tem of the Senate—one of the more obscure, yet still powerful, federal offices—“was virtually always a slaveholder.” Both Chief Justices of the Supreme Court through the first five decades of the nineteenth century, John Marshall and Roger Taney, were slaveholders, as were a great many federal judges and other, lesser, federal officeholders. As historian Garry Wills, author of Lincoln at Gettysburg among other volumes, has written, “the management of the government was disproportionately controlled by the South.” The reason all of this was so was explained very ably at the time, as it happens, by none other than … Abraham Lincoln.

What Lincoln knew was that there was a kind of “thumb on the scale” when Northerners like the two Adamses, John and John Quincy, were weighed in national elections—a not-so-mysterious force that denied those Northern, anti-slavery men second terms as president. Lincoln himself explained what that force was in the speech he gave at Peoria, Illinois, that signaled his return to politics in 1854. There, Lincoln observed that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

What Lincoln is talking about here is the notorious “Three-Fifths Compromise”: Article I, Section 2, Paragraph 3 of the United States Constitution. Under that proviso, slave states were entitled to representation in Congress calculated by adding to their free population “three fifths of all other persons”—those “other persons” being, of course, Southern slaves. And what the future president—the first president, it might be added, to be elected without the assistance of that ratio (a fact that would have, as I shall show, its own consequences)—was driving at was the effect this mathematical ratio was having on the political landscape of the country.

As Lincoln remarked in the same Peoria speech, the Three-Fifths Compromise meant that “five slaves are counted as being equal to three whites,” so that, as a practical matter, “it is an absolute truth, without an exception, that there is no voter in any slave State, but who has more legal power in the government, than any voter in any free State.” To put it more plainly, Lincoln said that the three-fifths clause “in the aggregate, gives the slave States, in the present Congress, twenty additional representatives.” Since the Constitution gave the same advantage in the Electoral College as it gave in the Congress, the reason for results like, say, the Adamses’ lack of presidential staying power isn’t that hard to discern.
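Lincoln’s Peoria figures can be checked directly (the arithmetic below is mine, not Lincoln’s or Miller’s): Maine’s white population was indeed twice South Carolina’s “and 32,679 over,” and since the two states held identical representation—six representatives and eight electors apiece—each white South Carolinian carried roughly 2.1 times the federal weight of a Mainer, Lincoln’s “more than the double.”

\[
2 \times 274{,}567 = 549{,}134, \qquad 581{,}813 - 549{,}134 = 32{,}679, \qquad \frac{581{,}813}{274{,}567} \approx 2.12
\]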

“One of those who particularly resented the role of the three-fifths clause in warping electoral college votes,” notes Miller, “was John Adams, who would probably have been reelected president over Thomas Jefferson in 1800 if the three-fifths ratio had not augmented the votes of the Southern states.” John Quincy himself took part in two national elections, 1824 and 1828, that were skewed by what was termed at the time the “federal ratio”—which is to say that the reason both Adamses were one-term presidents likely had rather more to do with the form of the American government than with the content of their character, despite the representations of many historians after the fact.

John Quincy Adams himself was quite aware of the effect of the “federal ratio.” The Hartford Convention of 1814–15, led by New England Federalists, had recommended ending the advantage of the Southern states within the Congress, and in 1843 John Quincy’s son Charles Francis Adams caused the Massachusetts legislature to pass a measure that John Quincy would himself introduce to the U.S. Congress: “a resolution proposing that the Constitution be amended to eliminate the three-fifths ratio,” as Miller has noted. There were three more such attempts in 1844, three years before Lincoln’s arrival, all of which were soundly defeated, as Miller observes, by totals “skewed by the feature the proposed amendment would abolish.” The three-fifths ratio was not simply a bête noire of the Adamses personally; all of New England was aware that the three-fifths ratio protected the interests of the South in the national government—it’s one reason why, prior to the Civil War, “states’ rights” was often thought of as a Northern issue rather than a Southern one.

That the South itself recognized the advantages the United States Constitution gave it, specifically through that document’s protections of “minority”—in other words, slaveowner—interests, can be seen by reference to the reasons the South gave for starting the Civil War. South Carolina’s late-1860 declaration of secession, for example (the first such declaration), said outright that the state’s act of secession was provoked by the election of Abraham Lincoln—in other words, by the election of a presidential candidate who did not need the electoral votes of the South.

Hence, South Carolina’s declaration said that a “geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery.” The election had been enabled, the document went on to say, “by elevating to citizenship, persons who, by the supreme law of the land, are incapable of becoming citizens, and their votes have been used to inaugurate a new policy, hostile to the South.” Presumably, this is a veiled reference to the population gained by the Northern states over the course of the nineteenth century—a trend that not only was steadily weakening the advantage the South had initially enjoyed at the expense of the North when the Constitution was enacted, but that had accelerated during the 1850s.

As one Northern newspaper observed in 1860, in response to the early figures then emerging from that year’s United States census, the “difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” To Southerners the data had a different meaning: as Adam Goodheart observed in a piece for Disunion, the New York Times’ series on the Civil War, “the editor of the New Orleans Picayune noted that states like Michigan, Wisconsin, Iowa and Illinois would each be gaining multiple seats in Congress” while Southern states like Virginia, South Carolina and Tennessee would be losing seats. To the Southern slaveowners who would drive the road to secession during the winter of 1860, the fact that they were on the losing end of a demographic war could not have been far from mind.

Historian Leonard L. Richards of the University of Massachusetts, for example, has noted that when Alexis de Tocqueville traveled the American South in the early 1830s, he discovered that Southern leaders were already “noticeably ‘irritated and alarmed’ by their declining influence in the House [of Representatives].” By the 1850s, those population trends were only accelerating: concerning the gains in population the Northern states were realizing through foreign immigration—presumably the subject of South Carolina’s complaint about persons “incapable of becoming citizens”—Richards cites Senator Stephen Adams of Mississippi, who “blamed the South’s plight”—that is, its declining population relative to the North—“on foreign immigration.” As Richards says, it was obvious to anyone paying attention to the facts that if “this trend continued, the North would in fifteen years have a two to one majority in the House and probably a similar majority in the Senate.” It seems unlikely that the most intelligent of Southern leaders were not cognizant of these elemental facts.

Their intellectual leaders, above all John Calhoun, had after all designed a political theory to justify Southern—that is, “minority”—dominance of the federal government. In Calhoun’s A Disquisition on Government, the South Carolinian Senator argued that a government “under the control of the numerical majority” would tend toward “oppression and abuse of power”—and it was to correct this tendency, he wrote, that the constitution of the United States took care to make its several branches “the organs of the distinct interests or portions of the community; and to clothe each with a negative on the others.” It is, in other words, a fair description of the constitutional doctrine known as the “separation of powers,” a doctrine that Calhoun barely dresses up as something other than what it is: a brief for the protection of the right to own slaves. Every time anyone utters the phrase “protecting minority rights,” then, they are, wittingly or not, invoking the ideas of John Calhoun.

In any case, such a history could explain just why Americans are so eager to describe themselves as a “minority,” of whatever kind. After all, the American government’s original purpose was to protect a particular minority, so in political terms it makes sense to describe oneself as such in order to enjoy protections that, built into the system at the start, have become endemic to American government: for example, the practice of racial gerrymandering, which has the perhaps-beneficial effect of protecting a particular minority—at the probable expense of the interests of the majority. Such a theory might perhaps also explain something else: just how it is, as professor Walter Benn Michaels of the University of Illinois at Chicago has remarked, that after “half a century of anti-racism and feminism, the U.S. today is a less equal society than was the racist, sexist society of Jim Crow.” Or, perhaps, how the election of—to use that favorite tool of American academics, quote marks to signal irony—a “white man” at a women’s college can, somehow, be a “victory” for whatever the American “left” is now. The real irony, of course, is that, in seeking to protect African-Americans and other minorities, that supposed left is merely reinforcing a system originally designed to protect slavery.

Bend Sinister

The rebs say that I am a traitor to my country. Why tis this[?] [B]ecause I am for a majority ruling, and for keeping the power in the people[?]
—Jesse Dobbins
Yadkin County, North Carolina
Federal pension application
Adjutant General’s Office
United States Department of War
3 July 1883.

Golf and (the theory of) capitalism were born in the same small country (Scotland) at the same historical moment, but while golf is entwined with the corporate world these days there’s actually a profound difference between the two: for capitalism everything is relative, but the value of a golf shot is absolute. Every shot is strictly as valuable as every other. The difference can be found in the concept of arbitrage—which conventional dictionaries define as taking advantage of a price difference between two markets. It’s at the heart of the financial kind of capitalism we live with these days—it’s why everything is relative under the regime of capitalism—but it’s completely antithetical to golf: you can’t trade golf shots. Still, the concept of arbitrage does explain one thing about golf: how a golf club in South Carolina, in the Lowcountry—the angry furnace of the Confederacy—could come to be composed of Northern financial types and be named “Secession,” in a manner suggesting its members believed, if only half-jokingly, that the firebrands of 1860 might not have been all wrong.

That, however, gets ahead of the story, which begins with another golf tournament started on the tenth tee. Historically, as some readers may remember, I haven’t done well starting on the tenth hole. To recap: twice I’ve started loops for professional golfers in tournaments on the tenth tee, and each time my pro has blown the first shot of the day out of bounds. So when I saw where we were starting at Oldfield Country Club just outside of Hilton Head in South Carolina, site of an eGolf tournament, my stomach dropped as if I were driving over one of the arched bridges across the housing development’s canals.

Both of those tenth holes were also, coincidentally or not, dogleg rights: holes that begin at the tee, or upper left so to speak, and move towards the green in a more-or-less curved arc that ends, figuratively, on the lower right. In heraldry, a stripe running in that direction is called a “bend sinister”: as Vladimir Nabokov put it in explaining the title of his novel by that name, “a bar drawn from the upper left to the lower right on a coat of arms.” My player was, naturally, assigned to start at the tenth tee. My history with such starts went unmentioned.

Superstitious nonsense aside, however, there are likely reasons why my pros have had a hard time with dogleg rights. Very often on a dogleg right the trees close off the right side quickly: there’s no room to start the ball out to the right in order to draw it back onto the fairway—which is to say, golfers who draw the ball are at a disadvantage. Since a draw is the typical ball flight of your better player—though the very longest hitters often play a “power fade”—it’s perhaps not accidental that marginal players (the only type I, as an unproven commodity, might hope to obtain) tend to be drawers of the ball.

Had I known what I found out later, I might have been more anxious: my golfer had “scrapped … Operation Left to Right”—a project designed to enable him to hit a fade on command—all the way back in 2011, as detailed in a series of Golf Channel articles about him and his struggles in golf’s minor leagues. (“The Minors,” golfchannel.com) His favorite shot shape was a draw, a right-to-left ball flight, which is just about the worst kind of shot you can have on a dogleg-right hole. The tenth at Oldfield had, of course, just that kind of shape.

Already the sky was threatening, and the air had a chill to it: the kind of chill that can make the muscles in your hands less supple, which makes it just that much harder to “release” the clubhead—which can cause a slice, a left-to-right movement of the ball. Later on my player would indeed lose several tee shots to the right, all of them push-fades, including a tough-to-take water ball on the twelfth (our third) hole, a drivable par four. Eventually the rain became so bad that the next day’s final round was canceled, which left me at loose ends.

Up past Beaufort there’s a golf club called Secession—a reference to South Carolina’s pride of place with regard to the events leading up to the Civil War: it was the first state to secede, in late December of 1860, and it actually helped persuade the other Southern states to secede with it by sending encouraging emissaries their way. Yet while that name might appear deeply Southern, the membership is probably anything but: Secession, the golf club, is an extremely private course that has become what Augusta began as: a club where the financial guys of New York and Chicago go to gamble large sums on golf. Or, to put it another way, a club for the spiritual descendants of the guys who financed Abraham Lincoln’s war.

You might think, of course, that such a place would be somewhat affected by the events of the past five years or so: in fact not, for on the day I stopped in every tee box seemed filled with foursomes, quite a few of them with loopers carrying doubles. Perhaps I should have known better, since as Chris Lehmann at The Baffler has noted, the “top 1 percent of income earners have taken in fully 93 percent of the economic gains since the Great Recession.” In any case, my errand was unsuccessful: I found out, essentially, that I would need some kind of clout. So, rather than finding my way back directly, I spent a pleasant afternoon in Beaufort. While there, I learned the story of one Robert Smalls, namesake of a number of the town’s landmarks.

“I thought the Planter,” said Robert Smalls when he reached the deck of the USS Onward outside Charleston Harbor in the late spring of 1862, “might be of some use to Uncle Abe.” Smalls, the pilot, had, along with his crew, stolen the Confederate ship Planter right out from under the Confederate guns by mimicking the Planter’s captain—Smalls knew the usual signals for leaving the harbor, and by the half-light of dawn he looked enough like that officer to secure permission from the sentries at Sumter. (He also knew enough to avoid the minefields, since he’d helped to lay them.) Upon reaching the Union blockade ships on the open Atlantic, Smalls surrendered his vessel to the United States officer in command.

After the war—and a number of rather exciting exploits—Smalls came back to Beaufort, where he bought the house of his former master, a man named McKee, with the bounty money he got for stealing the Planter, and got elected to both the South Carolina House of Representatives and the South Carolina Senate, founding the Republican Party in South Carolina along the way. In office he wrote legislation that provided for South Carolina to have the first statewide public school system in the history of the United States, and then he was elected to the United States House of Representatives, where he became the last Republican congressman from his district until 2010.

Historical tourism in Beaufort thus means confronting the fact that the whole of the Lowcountry, as it’s called down here, was the center of secessionism. That’s partly why, in much of South Carolina, the war ended much earlier than in most of the South: the Union invaded by sea in late 1861—eighty years before Normandy, in a fleet whose size would not be rivaled until after Pearl Harbor. That’s also why, as the British owner of a bar in the town I’m staying in, Bluffton, notes, the first thing the Yankees did when they arrived in Bluffton was burn it down. The point was to make a statement similar to the larger one Sherman would later make during his celebrated visit to Atlanta.

The reason for such vindictiveness was that the slaveowners of the Lowcountry stood at what their longtime Senator, John Calhoun, had long before called the “furthest outpost” of slavery’s empire. They not only wanted to continue slavery, they wanted to expand its reach—that’s the moral, in fact, of the curious tale of the yacht Wanderer, funded by a South Carolinian. It’s one of those incidents that happened just before the war, whose meaning would only become clear after the passage of time—and Sherman.

The Wanderer was built in 1857 on Long Island, New York, as a pleasure yacht. Her first owner, Col. John Johnson, sailed her down the Atlantic coast to New Orleans, then sailed her back to New York, where a William Corrie, of Charleston, South Carolina, bought her. Corrie made some odd alterations to the ship—adding, for instance, a 15,000-gallon water tank. The work attracted the attention of federal officers aboard the steam revenue cutter USS Harriet Lane, who seized the ship as a suspected slaver when she attempted to leave New York harbor on 9 June 1858. But there was no evidence of her owner’s intentions beyond the alterations themselves, and so the Wanderer was released. She arrived in Charleston on 25 June, completed her fitting out as a slave ship and, after a stop in Port of Spain, Trinidad, sailed for the Congo on 27 July. The Wanderer returned to the United States on 28 November, at Jekyll Island in Georgia, still in the Lowcountry.

The ship bore a human cargo.

Why, though, would William Corrie—and his partners, including the prominent Savannah businessman Charles Lamar, a member of a family that “included the second president of the Republic of Texas, a U.S. Supreme Court justice, and U.S. Secretary of the Treasury Howell Cobb”—have taken so desperate a measure as attempting to smuggle slaves into the United States? The importation of slaves had been banned since 1808, the earliest date the Constitution allowed, which is to say that bringing human beings into the country to be sold into slavery was a federal crime. The punishment was death by hanging.

Ultimately, Corrie and his partners evaded conviction—there were three trials, all held in Savannah, and all ended with a Savannah jury refusing to convict its local grandees. Coming events would, to be sure, soon make the whole episode beside the point. Still, Corrie and Lamar could not have known that, and on the whole the crime seems a desperately long chance to have taken. But the syndicate, led by Lamar, had two motives: one economic, the other ideological.

The first motive was grasped by Thomas Jefferson, of all people, as early as 1792. Jefferson memorialized the thought, according to Smithsonian magazine, “in a barely legible, scribbled note in the middle of a page, enclosed in brackets.” The earth-shaking, terrible thought was this: “he was making a 4 percent profit every year on the birth of black children.” In other words, like the land his slaves worked, Jefferson’s human capital appreciated every year. The value of slaves would, with time, become almost incredible: “In 1860,” the historian David Brion Davis has noted, “the value of Southern slaves was about three times the amount invested in manufacturing or railroads nationwide.” And that value was only increased by the ban on the slave trade.

The voyage of the Wanderer was, then, first an act of economic arbitrage, one that sought to exploit the difference between the price of slaves in Africa and the price of slaves in the United States. But it was also an act of provocation—much like John Brown’s raid on Harper’s Ferry less than a year after the Wanderer landed in Georgia. Like that more celebrated case, the sailing of the Wanderer was meant to demonstrate that slave smuggling could be done—it was meant to inspire further defiance of the federal ban on importing slaves.

Lamar was, after all, a Southern “firebrand,” of a type common in the Lowcountry and represented in print by the Charleston Mercury. The firebrands advocated resuming the African slave trade: in essence, the members of this group believed that government shouldn’t interfere with the “natural” workings of the market. Southerners like Lamar and Corrie were thus ancestors of those who believe today that, in the words of the Italian sociologist Marco d’Eramo, “things would surely improve if only we left them to the free play of market forces.”
The voyage of the Wanderer was, in that sense, meant to demonstrate the thesis that, as Thomas Frank has put it in describing the ideological descendants of these men, “it is the nature of government enterprises to fail.” The mission of the slave ship, that is, could be viewed as on a par with what Frank calls conservative cautions “against bringing top-notch talent into government service,” or with piling up “an Everest of debt in order to force the government into crisis.” That the yacht’s voyage was itself wholly contrived must have been lost on the Wanderer’s sponsors.

It isn’t difficult, then, to see the kinship between a certain kind of South Carolinian thought and the thinking of the wealthy today. What’s interesting about the whole episode, at least from today’s standpoint, is how it was ultimately defeated: by what, from one perspective at least, appears to be another case of arbitrage. In this case the arbitrageur was named Abraham Lincoln, and he laid out what he intended to arbitrage long before the voyage of the Wanderer: in a speech at Peoria in the autumn of 1854, the speech that marked his return to politics after his career had foundered in the late 1840s over his opposition to the Mexican War. In that speech, Lincoln laid the groundwork for the defeat of slavery by describing how slavery had artificially interfered with a market—the one whose currency is votes.

The crucial passage of the Peoria speech comes when Lincoln compares two states: South Carolina, likely not coincidentally, and Maine. Both states, Lincoln observes, are equally represented in Congress: “South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine.” “Thus in the control of the government,” Lincoln concludes, “the two States are equals precisely.” But, Lincoln goes on to note, look at the numbers of their free people: “Maine has 581,813—while South Carolina has 274,567.” Somehow, then, the Southern voter “is more than double of any one of us in this crowd” in terms of control of the federal government: “it is an absolute truth, without an exception,” Lincoln said, “that there is no voter in any slave State, but who has more legal power in the government than any voter in any free State.” There was, in sum, a discrepancy in value—or what economists might call an “inefficiency.”
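For anyone who wants the arithmetic spelled out, here is a minimal sketch in Python (my own illustration, not Lincoln’s, using only the population figures quoted above): when two delegations are the same size, the weight of a single vote is simply the inverse of the number of free people who share it.

```python
# Lincoln's Peoria arithmetic, using the figures he cites:
# equal delegations, unequal free populations.
maine_free = 581_813
south_carolina_free = 274_567

# With identical representation in Congress and in the Electoral College,
# the weight of one vote is inversely proportional to the number of free
# people who share that representation.
weight_ratio = maine_free / south_carolina_free
print(f"A South Carolina vote weighs about {weight_ratio:.2f} Maine votes")
# Prints roughly 2.12 -- Lincoln's "more than double."
```

The ratio works out to a little over two, which is all that Lincoln’s “more than double” claims.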

The reason for that discrepancy was, as Lincoln also observed, “in the Constitution”—by which he referred to what’s become known as the “Three-Fifths Compromise,” or Article One, Section 2, Paragraph 3: “Representatives and direct Taxes shall be apportioned among the several States … according to their respective Numbers, which shall be determined by adding to the whole Number of free Persons … [and] three fifths of all other Persons.” By this means, Southern states received representation in the federal government in excess of the number of their free inhabitants: in addition to the increase in wealth obtained by the reproduction of their slaves, then, slaveowners also benefitted politically.
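The clause’s mechanism can be put the same way. The sketch below uses hypothetical round numbers, not census figures, only to show how counting three fifths of an enslaved population inflates a state’s apportionment basis beyond its free population.

```python
# Illustrative only: how the Three-Fifths Clause inflated a slave state's
# apportionment basis relative to its free population. The inputs are
# hypothetical round numbers, not drawn from any census.

def apportionment_population(free_persons: int, enslaved_persons: int) -> float:
    """Counting basis under Article I, Section 2: the whole number of free
    persons plus three fifths of 'all other Persons'."""
    return free_persons + (3 / 5) * enslaved_persons

free, enslaved = 300_000, 400_000
basis = apportionment_population(free, enslaved)
print(f"{basis:,.0f} persons counted toward House seats for {free:,} free inhabitants")
# Prints 540,000 -- an 80 percent premium over the free population alone.
```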

In an article for the New York Times’ series Disunion (“The Census of Doom”), which blogged the Civil War as it happened, a century and a half on, Adam Goodheart observes that between the 1850 United States Census and the 1860 edition the population of the North exploded by 41 percent, while that of the South grew by only 27 percent. (By comparison, Goodheart points out, between 2000 and 2010 the United States population grew by just 9.7 percent.) To take one state as an example, in less than 25 years one Northern state—Wisconsin—had grown by nearly 6400 (sic) percent. Wisconsin would, of course, go heavily for Lincoln in the presidential election—Lincoln would be the first president ever elected without the support of a single Southern state. (He wasn’t even on the ballot in most of them.) One Northern newspaper editor, Goodheart notes, smugly observed that “The difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be.” Lincoln’s election confirmed that the political power the Southern states had held since the nation’s founding, with the help of a constitutional concession, had been broken by a wave of new Northern voters.

If read in that light, then, the Thirteenth and Fourteenth Amendments to the Constitution, which ended both slavery and the Three-Fifths Clause, could be understood as a kind of price correction: the two amendments effectively ended the premium that the Constitution had until then placed on Southern votes. Lincoln becomes a version of Brad Pitt’s character in the film of Michael Lewis’ most famous book—Billy Beane in Moneyball. Just as Billy Beane saw—or was persuaded to see—that batting average was overvalued and on-base percentage undervalued, creating an arbitrage opportunity in players who walked a lot, Lincoln saw that Southern votes were overvalued and Northern ones undervalued, and that sooner or later the two had to converge toward what economists would call “fundamental value.”

That concept is something golf teaches well. In golf there are no differences in value to exploit: each shot has exactly the same fundamental value. On our first tee that day, which was the tenth hole at Oldfield Country Club, my golfer did not, as I had fully expected, blow his first shot out of bounds. He came pretty close, though: a slicing, left-to-right block straight into the trees. I set out after the group had teed off, since the old guy marshaling the hole clearly wasn’t going to be much help. But I found the ball easily enough, and my player pitched out and made a great par save. The punch-out from the trees counted just the same as an approach shot would have, or a second putt.

Understanding the notion of fundamental value that golf teaches—among other human pursuits—makes it possible to see that the “price correction” undertaken by Lincoln wasn’t simply a one-time act: the value of an American vote still varies across the nation today. According to the organization FairVote, as of 2003 a vote in Wyoming was more than three times as valuable as, say, my vote as a resident of the state of Illinois. Even today—as the Senate’s own website notes—“senators from the twenty-six smallest states, who (according to the 2000 census) represent 17.8% of the nation’s population, constitute a majority of the Senate.” It’s a fact the men of the Secession Golf Club might just as soon people ignored—because it just may be why 93 percent of the gains since the Great Recession have gone to the wealthy.
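The FairVote comparison can be roughed out the same way. The figures below are rounded 2000-census populations and Electoral College allocations that I supply for illustration; FairVote’s own methodology and numbers may differ.

```python
# Back-of-the-envelope vote weight: Electoral College votes per resident.
# Populations are rounded 2000-census figures supplied for illustration;
# FairVote's precise methodology may differ.
states = {
    "Wyoming":  {"population": 494_000,    "electors": 3},
    "Illinois": {"population": 12_419_000, "electors": 21},
}

# Votes per resident for each state.
weight = {name: s["electors"] / s["population"] for name, s in states.items()}

ratio = weight["Wyoming"] / weight["Illinois"]
print(f"A Wyoming vote carries roughly {ratio:.1f} times the weight of an Illinois vote")
# Prints roughly 3.6 under these rounded figures.
```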

To take a small example of how the two points might be connected, a recent New Yorker piece has pointed out that “in the fifth year of his Presidency, Obama has failed to place even a single judge on the D.C. Circuit, considered the second most important court in the nation” because the Senate has refused to confirm any of his nominees. This despite the fact that there are now four vacancies out of eleven seats. Why? Because the Senate’s rules allow a minority of Senators—or even just one, in the case of what’s known as the “hold”—to interfere with the will of the majority: an advantage Republican senators have not hesitated to seize.

Nearly twenty years after the publication of Bend Sinister, Nabokov chose to write an introduction in which he endeavored to explain the novel’s name. “This choice of title,” he wrote, “was an attempt to suggest an outline broken by refraction, a distortion in the mirror of being, a wrong turn taken by life, a sinistral and sinister world.” If there are wrong turns, of course, there must be right ones; if there are “distortions,” there are clarities: that is, there is an order to which events will, sooner or later, return. It’s a suggestion that is not fashionable these days: Nabokov isn’t much read now for his own beliefs so much as for the confirmation his novels can lend to one or another thesis. But if he is right—if golf’s belief in “fundamental value” is right—then some correction to this ongoing distortion in the value of a vote must eventually come.

The location of the new Fort Sumter, however, remains unknown.