The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.


“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e., what didn’t happen. But even in 1998, there was reason to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty charged then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but which have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prevent what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after such districts were drawn there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that then grew steadily; since the 106th Congress (1999-2001), there have consistently been between 39 and 44 African-American members. Unfortunately, while that may have been good for individual representatives, it may not have been all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total number of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making those surrounding districts virtually impregnable Republican strongholds. In 2012, for instance, Barack Obama won 44 Congressional districts by margins of more than 50 percentage points, while Mitt Romney won only eight districts by so large a margin. Figures like these could seem overwhelmingly favorable to the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively wasting votes that could have tipped others.
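The mechanism The Atlantic describes, in which packing costs a party seats even at an equal share of the vote, can be made concrete with a toy model (all numbers below are invented for illustration, not taken from any real map):

```python
# Toy example: four districts of 100 voters each, and the same electorate
# (200 Democratic voters, 200 Republican voters) drawn two different ways.
# Each tuple is (Democratic voters, Republican voters) in one district.

def seats_won(districts):
    """Count the districts in which Democrats outnumber Republicans."""
    return sum(1 for dem, rep in districts if dem > rep)

plan_even = [(55, 45), (55, 45), (45, 55), (45, 55)]    # Democrats spread out
plan_packed = [(80, 20), (40, 60), (40, 60), (40, 60)]  # Democrats "packed"

# Same electorate under both plans: 200 Democratic votes out of 400.
assert sum(d for d, r in plan_even) == sum(d for d, r in plan_packed) == 200

print(seats_won(plan_even))    # spread out: Democrats win 2 of 4 seats
print(seats_won(plan_packed))  # packed: Democrats win only 1 of 4 seats
```

The more lopsided the packed district, the more of its votes are spent beyond the bare majority needed to win it, and the fewer remain to make neighboring districts competitive.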

That’s why—despite the fact that he lost the popular vote—in 2012 Romney’s party won 226 of 435 Congressional districts, while Obama’s party won 209. In this past election, as I’ve mentioned in previous posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive that these gaps always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished, such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, his answer merely raises the deeper question: why should geography have anything to do with voting? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority-minority districts” imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

Like discussions about, for example, the Electoral College—in which a vote in Wyoming is much more valuable than one in California—Silver’s position implies that minority votes will remain less valuable than other votes, because a vote in a “majority-minority” district has less chance of electing a congressperson who belongs to the majority party in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there is no issue with majority-minority districts because they reflect the underlying proportions of the population—but what matters is whether whoever is elected can enact policies that benefit their constituents.
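The Wyoming-versus-California disparity is easy to check with rough arithmetic; the populations and electoral-vote counts below are my own approximate 2016 figures, not numbers given in the text:

```python
# Rough check of the Electoral College claim: electoral votes per resident
# in each state, using approximate 2016 census populations (my figures).

populations = {"Wyoming": 585_000, "California": 39_250_000}   # approximate
electoral_votes = {"Wyoming": 3, "California": 55}

weight = {s: electoral_votes[s] / populations[s] for s in populations}
ratio = weight["Wyoming"] / weight["California"]

# A Wyoming vote carries roughly 3.5-4x the electoral weight of a
# California vote under these figures.
print(round(ratio, 1))
```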

Right here, in other words, we get to the heart of the dispute between the deceased Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the numbers of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts after all are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes are, like majority-minority districts, similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats are, the less capable their party becomes of accomplishing its goals. Winning individual races by huge margins might be satisfying to some, but no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures, vote,” the Supreme Court declared in 1964; in that case, Wesberry v. Sanders, the Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. His opponents ought to reply by saying they’re interested in what people could be—and in building a real road to get there.


To Hell Or Connacht

And I looked, and behold a pale horse: and his name that sat on him was Death,
and Hell followed with him.
—Revelation 6:8

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.


“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes a sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware of it, but the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Which was, Oliver Cromwell had said two centuries before, one choice for Irish Catholics who did not wish to be murdered by Cromwell’s New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seed to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of the rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay safely in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly,” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, and yet—as the first hypothesis requires—levels of racism remain precisely the same as in the past, then the American public is not getting less racist but merely getting better at hiding it. That raises a further question: if the level of racism remains as high as ever, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to believe that racism has somehow become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival, Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation on the grounds that the official, all-white slate of delegates had only become “official” by suppressing the votes of the state’s 400,000 black citizens—a charge that had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but instead of policy choices made by Democrats: choices that not only suggest that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” that Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in making, because as the decidedly non-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president to make economic choices contrary to the interests of what might appear to be the Democratic Party’s constituency.

The next Democratic president, Bill Clinton, after all put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with attendant long-term costs—including a diminished ability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the template for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality that even Republican candidates other than Trump—who was, along with Bernie Sanders, virtually the only candidate to address these issues—began to respond to during the primary season over the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz, for instance, sought to blame, absurdly, Obamacare—it’s nevertheless inarguable that the country has been radically remade economically over recent decades.

That reformation has troubling potential consequences, if they have not already become real. One of them has been adequately described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as should be self-evident (though apparently it isn’t, to many people), a lot more will happen economically if 500 people split twenty million dollars than if one person keeps all of it.
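Stiglitz’s arithmetic can be sketched in a few lines; the division of $21.7 million among 500 workers comes from the text, while the spending fractions are illustrative assumptions of mine, not Stiglitz’s figures:

```python
# Back-of-the-envelope version of Stiglitz's aggregate-demand point.
# The income and worker count come from the passage above; the spending
# fractions (marginal propensities to consume) are assumed for illustration.

income = 21_700_000        # Romney's 2010 income, per the text
n_workers = 500
per_worker = income / n_workers
print(per_worker)          # 43400.0, matching Stiglitz's "$43,400 apiece"

share_spent_rich = 0.10    # assumed: one rich household spends 10%
share_spent_middle = 0.95  # assumed: a middle-class household spends 95%

spending_concentrated = income * share_spent_rich
spending_divided = n_workers * per_worker * share_spent_middle

# Under these assumptions, dividing the income produces roughly ten times
# the consumer spending (about $20.6M vs. $2.17M).
print(spending_concentrated, spending_divided)
```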

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’s) argument in favor of a broadly-based economic policy in grave doubt—which might explain why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they also demonstrate against neoliberal economic policies, and hence perhaps why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus in other words, implicitly allowed by the Democratic Party’s leadership, enacted what Fannie Lou Hamer demanded in 1964: a demand that was also a rejection of a political principle known as “majoritarianism”—the right of majorities to rule. It’s a point that’s been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” In other words, Macedo asks: “why should we regard majority rule as morally special?” Why should it matter, in other words, if one candidate should get more votes than another? Some academics, in short, have begun to wonder publicly about why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain older arguments—like this one, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on to say: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe for instance in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, then, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the above paragraph were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to give the weight of authority to Macedo’s argument against majoritarianism. At least, it might if not for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill; now the site of Clemson University), he is also well-known as the most formidable intellectual defender of slavery in American history. His most cunning arguments after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are against majoritarianism and in favor of slavery; indeed, to Calhoun they are much the same: anti-majoritarianism is more or less the same as being pro-slavery. (A point that historians like Paul Finkelman of the University of Tulsa have argued is true: the anti-majoritarian features of the U.S. Constitution, these historians say, were originally designed to protect slavery—a point that might sound outré except for the fact that it was made at the time of the Constitutional Convention itself by none other than James Madison.) And that is to say that Stephen Macedo and Fannie Lou Hamer are choosing a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that those arguments are having very real effects.

What’s really significant, in short, about Macedo’s “insights” about majoritarianism is that, as the holder of a named chair at one of the most prestigious universities in the world, his work shows just how a concern, real or feigned, for minority rights can be used as a means of undermining the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia and other pet campus causes can effectively function as what Lenin called “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to dismantle the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it’s hard to see how their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed-corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of potted sense—even if it appears utterly irrational to the elite. Although they might not express themselves in terms that those with elite educations find palatable—in a fashion that, significantly, suggests a return to those Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while organs of elite opinion like The New York Times might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.



**Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw conclusions where you will.

Art Will Not Save You—And Neither Will Stanley


But I was lucky, and that, I believe, made all the difference.
—Stanley Fish. “My Life Report” 31 October 2011, New York Times. 


Pfc. Bowe Bergdahl, United States Army, is the subject of the new season of Serial, the public radio show that tells “One story. Week by week.” as the advertising tagline has it. Serial is covering Bergdahl because of what he chose to do on the night of 30 June 2009: as the show reports, that night he walked off his “small outpost in eastern Afghanistan and into hostile territory,” where he was captured by Taliban guerrillas and held prisoner for nearly five years. Bergdahl’s actions have led some to call him a deserter and a traitor; as a result of leaving his unit, Bergdahl faces a life sentence from a military court. But the line Bergdahl crossed when he stepped beyond the concertina wire and into the desert of Paktika Province was far greater than the line between a loyal soldier and a criminal. When Bowe Bergdahl wandered into the wilderness, he also crossed the line between the sciences and the humanities—and demonstrated why the political hope some people place in the humanities is not only illogical, but arguably holds up actual political progress.

Bergdahl can be said to have crossed that line because what happens to him when he is tried by a military court will likely turn on the intent behind his act: in legal terms, this is known as mens rea, Latin for “guilty mind.” Intent is one of the necessary components prosecutors must prove to convict Bergdahl of desertion: according to Article 85 of the Uniform Code of Military Justice, to be convicted of desertion Bergdahl must be shown to have had the “intent to remain away” from his unit “permanently.” It’s this matter of intent that marks the difference between the humanities and the sciences.

The old devil, Stanley Fish, once demonstrated that border in an essay in the New York Times designed to explain what it is that literary critics, and other people who engage in interpretation, do, and how it differs from other lines of work:

Suppose you’re looking at a rock formation and see in it what seems to be the word ‘help.’ You look more closely and decide that, no, what you’re seeing is an effect of erosion, random marks that just happen to resemble an English word. The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

To put it another way, matters of interpretation concern agents who possess intent: any other kind of discussion is of no concern to the humanities. Conversely, the sciences can be said to concern all those things not produced by an agent, or more specifically an agent who intended to convey something to some other agent.

It’s a line that seems clear enough, even in what might be marginal cases: when a beaver builds a dam, surely he intends to build that dam, but it also seems inarguable that the beaver intends nothing more to be conveyed to other beavers than, “here is my dam.” More questionable cases might be when, say, a bird or some other animal performs a “mating dance”: surely the bird intends his beloved to respond, but still it would seem ludicrous to put a scholar of, say, Jane Austen’s novels to the task of recovering the bird’s message. That would certainly be overkill.

Yes yes, you will impatiently say, but what has that to do with Bergdahl? The answer, I think, might be this: if Bergdahl’s lawyer had a scientific, instead of a humanistic, sort of mind, he might ask how many soldiers were stationed in Afghanistan during Bergdahl’s time there, and how many overall. The reason a scientist would ask that question about, say, a flock of birds he was studying is because, to a scientist, the overall numbers matter. The reason why they matter demonstrates just what the difference between science and the humanities is, but also why the faith some place in the political utility of the humanities is ridiculous.

The reason why the overall numbers of the flock would matter to a scientist is because sample size matters: a behavior that one bird in a flock of twelve birds exhibited is probably not as significant as a behavior that one bird in a flock of millions exhibited. As Nassim Taleb put it in his book, Fooled By Randomness, how impressive it is if a monkey has managed to type a verbatim copy of the Iliad “Depends On The Number of Monkeys.” “If there are five monkeys in the game,” Taleb elaborates, “I would be rather impressed with the Iliad writer”—but if, on the other hand, “there are a billion to the power one billion monkeys I would be less impressed.” Or to put it in another context, the “greater the number of businessmen, the greater the likelihood of one of them performing in a stellar manner just by luck.” What matters to a scientist, in other words, isn’t just what a given bird does—it’s how big the flock was in the first place.
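Taleb’s point can be made concrete with a bit of arithmetic: if each monkey independently has some small chance p of typing the Iliad by luck, the chance that at least one of n monkeys does so is 1 − (1 − p)^n, which climbs toward certainty as the troop grows. A minimal sketch (the per-monkey probability here is purely illustrative, not a figure from Taleb):

```python
# Chance that at least one of n independent "monkeys" succeeds by luck,
# given that each has probability p of succeeding on its own.
def at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# An illustrative per-monkey chance of one in a million:
p = 1e-6
for n in (5, 1_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} monkeys -> {at_least_one_success(p, n):.4f}")
```

With five monkeys the feat would be astonishing; with ten million, near-certain. That is the scientist’s reason for asking about the size of the flock before marveling at any one bird.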

To a lawyer, of course, none of that would be significant: the court that tries Bergdahl will not view that question as relevant to determining whether he is guilty of the crime of desertion. That is because law is a discipline concerned with interpretation, and so such a question will have been ruled out of court, as we say, before the court has even met. To consider how many birds were in the flock when one of them behaved strangely is to have a priori ceased to consider that bird as an agent: to ask how many other birds there are is to imply that what matters is simply the role of chance rather than any intent on the part of the bird. Any lawyer who brought up the fact that Bergdahl was the only one out of so many thousands of soldiers to have done what he did, without taking up the matter of Bergdahl’s intent, would not be acting as a lawyer.

By the way, in case you’re wondering: roughly 65,000 soldiers were in Afghanistan by early October of 2009, as part of the “surge” ordered by President Barack Obama shortly after taking office. The number, according to a contemporary story in The Washington Post, was “more than double the number there when Bush left office,” which is to say that when Bergdahl left his tiny outpost at the end of June that year, the military was in the midst of a massive buildup of troops. The sample size, in Taleb’s terms, was growing rapidly at that time—with what effect on Bergdahl’s situation, if any, I await enlightenment.

Whether that matters in terms of Bergdahl’s story—in Serial or anywhere else—remains to be seen; as a legal matter it would be very surprising if any military lawyer brought it up. What that, in turn, suggests is that the caution with which Stanley Fish has greeted many in the profession of literary study regarding the application of such work to actual political change is thoroughly justified: “when you get to the end” of the road many within the humanities have been traveling at least since the 1960s or ’70s, Fish has remarked, “nothing will have changed except the answers you might give to some traditional questions in philosophy and literary theory.” It’s a warning whose force may even now be peaking, as the nation realizes that the great political story of our time has not been the minor-league struggles within academia but the story of how a small number of monkeys have managed to seize a huge proportion of the planet’s total wealth. As Bernie Sanders, the political candidate, tweeted recently in a claim rated “True” by Politifact, “the Walton family of Walmart own more wealth than the bottom 40 percent of America.”

In that story, the intent of the monkeys hardly matters.

High Anxiety

Now for our mountain sport …
—Cymbeline. Act III, Scene 3.

High Hampton

Entrances to Wade Hampton Golf Club and High Hampton Inn and Country Club, North Carolina

Walt Whitman once said, as anyone who saw Bull Durham knows, that baseball would function to draw America together after the Civil War: the game, the poet said, would “repair our losses and be a blessing to us.” Many Americans have not lost this belief in the redemptive power of sports: as recently as 2011 John Boehner, then-Speaker of the House of Representatives, played a much-ballyhooed round of golf with President Barack Obama. Along with many other outlets, Golf Digest presented the event as presaging a new era of American unity: the “pair can’t possibly spend four hours keeping score, conceding putts, complimenting drives, filling divots, retrieving pond balls, foraging for Pro V1s and springing for Kit Kats off the snack cart,” argued the magazine, “without finding greater common ground.” Golf would thus be the antidote to what the late Columbia University history professor Richard Hofstadter, in 1964, called the “paranoid style”: the “heated exaggeration, suspiciousness, and conspiratorial fantasy” that Hofstadter found to be a common theme in American politics then, and whose significance has seemingly only grown since. The surface approval of the “golf summit” seemed warranted because golf is, after all, a game that cannot really be played without trust in your opponents—it is only on the assumption that everyone is honest that the game works at all. Yet as everyone knows by now, the summit failed: Boehner was, more or less, forced out of office this summer by those members of his party who, Boehner said, got “bent out of shape” over his golf with the president. While golf might, in other words, furnish a kind of theoretical model for harmonious bipartisanship, in practice it has proved largely useless for preventing political polarization—a result that anyone who has traveled Highway 107 in western North Carolina might have realized.
Up there, among the Great Smoky Mountains, there sits a counterexample to the dream of political consensus: the Wade Hampton Golf Club.

Admittedly, that a single golf club could furnish evidence enough to smack down the flights of fancy of a Columbia University professor like Hofstadter—and a Columbia University alumnus like Barack Obama—might appear a bit much: there’s a seeming disconnect between the weightiness of the subject matter and the evidential value of an individual golf club. What could the existence of the Wade Hampton Golf Club add to (or subtract from) Hofstadter’s assertions about the dominance of this “paranoid style,” examples of which range from the anti-Communist speeches of Senator Joseph McCarthy in the 1950s to the anti-Catholic, “nativist” movements of the 1830s and 1840s to the Populist denunciations of Wall Street during the 1890s? Yet the existence of the Wade Hampton Golf Club does undercut one of the pieces of evidence Hofstadter adduces for his argument—and in doing so unravels not only the rest of Hofstadter’s spell like a kitten does a ball of string, but also the fantasy of “bipartisanship.”

One of the examples of “paranoia” Hofstadter cited, in other words, was the belief held by “certain spokesmen of abolitionism who regarded the United States as being in the grip of a slaveholders’ conspiracy”—a view that, Hofstadter implied, was not much different from the contemporary belief that fluoridation was a Soviet plot. But a growing number of historians now believe that Hofstadter was wrong about those abolitionists: according to historian Leonard Richards of the University of Massachusetts, for instance, there’s a great deal of evidence for “the notion that a slaveholding oligarchy ran the country—and ran it for their own advantage” in the years prior to the Civil War. The point is more than an academic one: if it’s all just a matter of belief, then the idea of bipartisanship makes a certain kind of sense; all that matters is whether those we elect can “get along.” But if not, then that would suggest that what matters is building the correct institutions, rather than electing the right people.

Again, that seems a rather larger question than the existence of a golf club in North Carolina could answer. The existence of the Wade Hampton Golf Club, however, tends to reinforce Richards’ view if for nothing else than its name alone: the biography of the man the golf club was named for, Wade Hampton III, lends credence to Richards’ notion about the real existence of a slave-owning, oligarchical conspiracy, because Hampton was not only a Confederate general during the Civil War but also the possessor (according to the website of the Civil War Trust, which attempts to preserve Civil War battlefields) of “one of the largest collections of slaves in the South.” Hampton’s career, in other words, demonstrates just how entwined slaveowners were with the “cause” of the South—and if secession was largely the result of a slave-owning conspiracy during the winter of 1860, it becomes a great deal easier to think that said conspiracy did not spring fully grown only then.

Descended from an obscenely wealthy family whose properties stretched from near Charleston in South Carolina’s Lowcountry to Millwood Plantation near the state capital of Columbia and all the way to the family’s summer resort of “High Hampton” in the Smokies—upon the site of which the golf club is now built—Wade Hampton was intimately involved with the Southern cause: not only was he one of the richest men in the South, but at the beginning of the war he organized and financed a military unit (“Hampton’s Legion”) that would, among other exploits, help win the first big battle of the war, near the stream of Bull Run. By the end of the war Hampton had become, along with Nathan Bedford Forrest, one of only two men without prior military experience to achieve the rank of lieutenant general. In that sense, Hampton was exceptional—only eighteen other Confederate officers achieved that rank—but in another he was representative: as recent historical work shows, much of the Confederate army had direct links to slavery.

As historian Joseph T. Glatthaar has put the point in his General Lee’s Army: From Victory to Collapse, “more than one in every four volunteers” for the Confederate army in the first year of the war “lived with parents who were slaveholders”—as compared with the general population of the South, in which merely one in every twenty white persons owned slaves. If non-family members are included, or if economic connections like those to whom soldiers rented land or sold crops prior to the war are allowed, then “the vast majority of the volunteers of 1861 had a direct connection to slavery.” And if the slaveowners could create an army that could hold off the power of the United States for four years, it seems plausible they might have joined together prior to outright hostilities—which is to say that Hofstadter’s insinuations about the relative sanity of “certain” abolitionists (among them, Abraham Lincoln) do not have the value they may once have had.

After all, historians have determined that the abolitionists were certainly right when they suspected the motives of the slaveowners. “By itself,” wrote Roger Ransom of the University of California not long ago, “the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war … [in] the fall of 1860.” “On the eve of the war,” as another historian noted in the New York Times, “cotton comprised almost 60 percent of America’s exports,” and the slaves themselves, as yet another historian—quoted by Ta-Nehisi Coates in The Atlantic—has observed, were “the largest single financial asset in the entire U.S. economy, worth more than all manufacturing and railroads combined.” Collectively, American slaves were worth 3.5 billion dollars—at a time when the entire budget for the federal government was less than eighty million dollars. Quite literally, in other words, American slaveowners could buy the entire U.S. government roughly forty-three times over.
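The multiple in that last sentence follows directly from the two figures given above: $3.5 billion in slave valuations against a federal budget of just under $80 million. A quick check of the arithmetic:

```python
# The essay's figures for the eve of the Civil War.
slave_value = 3.5e9      # total value of American slaves: $3.5 billion
federal_budget = 80e6    # annual federal budget: just under $80 million

multiple = slave_value / federal_budget
print(multiple)  # 43.75, i.e. "roughly forty-three times over"
```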

Slaveowners thus had, in the words of a prosecutor, both means and motive to revolt against the American government; what’s really odd about the matter, however, is that Americans have ever questioned it. The slaveowners themselves fully admitted the point at the time: in South Carolina’s “Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina from the Federal Union,” for instance, the state openly lamented the election of a president “whose opinions and purposes are hostile to slavery.” And not just South Carolina: “Seven Southern states had seceded in 1861,” as the dean of American Civil War historians, James McPherson, has observed, “because they feared the incoming Lincoln administration’s designs on slavery.” When those states first met at Montgomery, Alabama, in February of 1861, it took them only four days to promulgate what the New York Times called “a provisional constitution that explicitly recognized racial slavery”; in a March 1861 speech Alexander Stephens, who would become the vice president of the Confederate States of America, argued that slavery was the “cornerstone” of the new government. Slavery was, as virtually anyone who has seriously studied the matter has concluded, the cause motivating the Southern armies.

If so—if, that is, the slaveowners created an army so powerful that it could hold off the might of the United States for four years simply in order to protect their financial interest in slave-owning—it seems plausible they might have joined together prior to the beginning of outright hostilities. Further, if there was a “conspiracy” to begin the Civil War, then the claim that there was one in the years and decades before the war becomes just that much more believable. And if that possibility is tenable, then so is the claim by Richards and other historians—themselves merely following a notion that Abraham Lincoln himself endorsed in the 1850s—that the American constitution formed “a structural impediment to the full expression of Northern voting power” (as one reviewer has put it)—and that thus the answer to political problems is not “bipartisanship,” or in other words the election of friendlier politicians, but rather structural reform.

Such, at least, might be the lesson anyone might draw from the career of Wade Hampton III, Confederate general—in light of which it’s suggestive that the Wade Hampton Golf Club is not some relic of the nineteenth century. Planning for the club began, according to the club’s website, in 1982; the golf course was not completed until 1987, when it was named “Best New Private Course” by Golf Digest. More suggestive still, however, is the fact that under the original bylaws, “in order to be a member of the club, you [had] to own property or a house bordering the club”—rules that resulted, as one golfer has noted, in a club of “120 charter and founding members, all from below the Mason-Dixon Line: seven from Augusta, Georgia and the remainder from Florida, Alabama, and North Carolina.” “Such folks,” as Bradley Klein once wrote in Golfweek, “would have learned in elementary school that Wade Hampton III, 1818-1902, who owned the land on which the club now sits, was a prominent Confederate general.” That is, in order to become a member of Wade Hampton Golf Club you probably knew a great deal about the history of Wade Hampton III—and you were pretty ok with that.

The existence of the Wade Hampton Golf Club does not, to be sure, demonstrate a continuity between the slaveowners of the Old South and the present membership of the club that bears Hampton’s name. But if it is true, as many Civil War historians now say, that prior to 1860 there was a conspiracy to maintain an oligarchic form of government, then what are we to make of a present in which—as former Secretary of Labor Robert Reich recently observed—“the richest one-hundredth of one percent of Americans now hold over 11 percent of the nation’s total wealth,” a proportion greater than at any time since before 1929 and the start of the Great Depression? Surely, one can only surmise, the answer is easier to find than a mountain hideaway far above the Appalachian clouds, and requires no poetic vision to see.