High Anxiety

Now for our mountain sport …

Cymbeline 
Act III, Scene 3

High Hampton

Wade Hampton Golf Club Sign

Entrances to Wade Hampton Golf Club and High Hampton Inn and Country Club, North Carolina

Walt Whitman once said, as anyone who saw Bull Durham knows, that baseball would help draw America together after the Civil War: the game, the poet said, would "repair our losses and be a blessing to us." Many Americans have not lost this belief in the redemptive power of sports: as recently as 2011 John Boehner, then-Speaker of the House of Representatives, played a much-ballyhooed round of golf with President Barack Obama—along with many other outlets, Golf Digest presented the event as presaging a new era of American unity: the "pair can't possibly spend four hours keeping score, conceding putts, complimenting drives, filling divots, retrieving pond balls, foraging for Pro V1s and springing for Kit Kats off the snack cart," argued the magazine, "without finding greater common ground." Golf would thus be the antidote to what the late Columbia University history professor Richard Hofstadter, in 1964, called the "paranoid style": the "heated exaggeration, suspiciousness, and conspiratorial fantasy" that Hofstadter found to be a common theme in American politics then and whose significance has seemingly only grown since. Yet while the surface approval of the "golf summit" seemed warranted—golf is, after all, a game that cannot really be played without trust in your opponents; it is only on the assumption that everyone is honest that the game works at all—the summit, as everyone knows by now, failed: Boehner was, more or less, forced out of office this summer by those members of his party who, Boehner said, got "bent out of shape" over his golf with the president. While golf might, in other words, furnish a theoretical model for harmonious bipartisanship, in practice it has proved largely useless for preventing political polarization—a result that anyone who has traveled Highway 107 in western North Carolina might have predicted. Up there, among the Great Smoky Mountains, sits a counterexample to the dream of political consensus: the Wade Hampton Golf Club.

Admittedly, the idea that a single golf club could provide evidence strong enough to smack down the flights of fancy of a Columbia University professor like Hofstadter—and a Columbia University alumnus like Barack Obama—might appear a bit much: there's a seeming disconnect between the weightiness of the subject matter and the evidential value of an individual golf club. What could the existence of the Wade Hampton Golf Club add to (or subtract from) Hofstadter's assertions about the dominance of this "paranoid style," examples of which range from the anti-Communist speeches of Senator Joseph McCarthy in the 1950s to the anti-Catholic, "nativist" movements of the 1830s and 1840s to the Populist denunciations of Wall Street during the 1890s? Yet the existence of the Wade Hampton Golf Club does tell strongly against one of the pieces of evidence Hofstadter adduces for his argument—and in doing so unravels not only the rest of Hofstadter's spell, the way a kitten unravels a ball of string, but also the fantasy of "bipartisanship."

One of the examples of "paranoia" Hofstadter cited, after all, was the belief held by "certain spokesmen of abolitionism who regarded the United States as being in the grip of a slaveholders' conspiracy"—a view that, Hofstadter implied, was not much different from the contemporary belief that fluoridation was a Soviet plot. But a growing number of historians now believe that Hofstadter was wrong about those abolitionists: according to historian Leonard Richards of the University of Massachusetts, for instance, there's a great deal of evidence for "the notion that a slaveholding oligarchy ran the country—and ran it for their own advantage" in the years prior to the Civil War. The point is more than an academic one: if it's all just a matter of belief, then the idea of bipartisanship makes a certain kind of sense; all that matters is whether those we elect can "get along." But if not, then what matters is building the correct institutions, rather than electing the right people.

Again, that seems like rather more of a question than the existence of a golf club in North Carolina is capable of answering. The existence of the Wade Hampton Golf Club, however, tends to reinforce Richards' view on the strength of its name alone: the very biography of the man the golf club was named for, Wade Hampton III, lends credence to Richards' notion about the real existence of a slave-owning, oligarchical conspiracy, because Hampton was, after all, not only a Confederate general during the Civil War but also the possessor (according to the website for the Civil War Trust, which works to preserve Civil War battlefields) of "one of the largest collections of slaves in the South." Hampton's career, in other words, demonstrates just how entwined slaveowners were with the "cause" of the South—and if secession was largely the result of a slave-owning conspiracy during the winter of 1860, it becomes a great deal easier to believe that said conspiracy did not spring up, fully grown, only then.

Descended from an obscenely wealthy family whose properties stretched from near Charleston in South Carolina's Lowcountry to Millwood Plantation near the state capital of Columbia and all the way to the family's summer resort of "High Hampton" in the Smokies—upon the site of which the golf club is now built—Wade Hampton was intimately involved with the Southern cause: not only was he one of the richest men in the South, but at the beginning of the war he organized and financed a military unit ("Hampton's Legion") that would, among other exploits, help win the first big battle of the war, near the stream of Bull Run. By the end of the war Hampton had become, along with Nathan Bedford Forrest, one of only two men without prior military experience to achieve the rank of lieutenant general. In that sense, Hampton was exceptional—only eighteen other Confederate officers achieved that rank—but in another he was representative: as recent historical work shows, much of the Confederate army had direct links to slavery.

As historian Joseph T. Glatthaar has put the point in his General Lee's Army: From Victory to Collapse, "more than one in every four volunteers" for the Confederate army in the first year of the war "lived with parents who were slaveholders"—as compared with the general population of the South, in which merely one in every twenty white persons owned slaves. If non-family members are included, or if economic connections are counted (the slaveholders to whom soldiers rented land or sold crops before the war, for example), then "the vast majority of the volunteers of 1861 had a direct connection to slavery." And if the slaveowners could create an army capable of holding off the power of the United States for four years, it seems plausible they might have joined together prior to outright hostilities—which is to say that Hofstadter's insinuations about the relative sanity of "certain" abolitionists (among them, Abraham Lincoln) don't have the value they may once have had.

After all, historians have determined that the abolitionists were certainly right to suspect the motives of the slaveowners. "By itself," wrote Roger Ransom of the University of California not long ago, "the South's economic investment in slavery could easily explain the willingness of Southerners to risk war … [in] the fall of 1860." "On the eve of the war," as another historian noted in the New York Times, "cotton comprised almost 60 percent of America's exports," and the slaves themselves, as yet another historian—quoted by Ta-Nehisi Coates in The Atlantic—has observed, were "the largest single financial asset in the entire U.S. economy, worth more than all manufacturing and railroads combined." Collectively, American slaves were worth 3.5 billion dollars—at a time when the entire budget for the federal government was less than eighty million dollars. In other words, American slaveowners could have bought the entire U.S. government roughly forty-three times over.
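A back-of-the-envelope check of that multiple, using nothing but the two figures just quoted (the 3.5-billion-dollar valuation and the sub-eighty-million-dollar federal budget):

\[
\frac{\$3{,}500{,}000{,}000}{\$80{,}000{,}000} \approx 43.75,
\]

which is where "roughly forty-three times" comes from.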

Slaveowners thus had, in the words of a prosecutor, both means and motive to revolt against the American government; what's really odd about the matter is that Americans have ever questioned it. The slaveowners themselves fully admitted the point at the time: in South Carolina's "Declaration of the Immediate Causes which Induce and Justify the Secession of South Carolina from the Federal Union," for instance, the state openly lamented the election of a president "whose opinions and purposes are hostile to slavery." And not just South Carolina: "Seven Southern states had seceded in 1861," as the dean of American Civil War historians, James McPherson, has observed, "because they feared the incoming Lincoln administration's designs on slavery." When those states first met together at Montgomery, Alabama, in February of 1861, it took them only four days to promulgate what the New York Times called "a provisional constitution that explicitly recognized racial slavery"; in a March 1861 speech Alexander Stephens, who would become the vice president of the Confederate States of America, argued that slavery was the "cornerstone" of the new government. Slavery was, as virtually anyone who has seriously studied the matter has concluded, the cause motivating the Southern armies.

If so—if, that is, the slaveowners created an army powerful enough to hold off the United States for four years simply in order to protect their financial interest in slave-owning—then, again, it seems plausible they might have joined together before the beginning of outright hostilities. Further, if there was a "conspiracy" to begin the Civil War, then the claim that there was one in the years and decades before the war becomes that much more believable. And if that possibility is tenable, then so is the claim by Richards and other historians—themselves merely following a notion that Abraham Lincoln himself endorsed in the 1850s—that the American constitution formed "a structural impediment to the full expression of Northern voting power" (as one reviewer has put it)—and thus that the answer to political problems is not "bipartisanship," or in other words the election of friendlier politicians, but rather structural reform.

Such, at least, might be the lesson anyone could draw from the career of Wade Hampton III, Confederate general—in light of which it's suggestive that the Wade Hampton Golf Club is not some relic of the nineteenth century. Planning for the club began, according to the club's website, in 1982; the golf course was not completed until 1987, when it was named "Best New Private Course" by Golf Digest. More suggestive still is the fact that under the original bylaws, "in order to be a member of the club, you [had] to own property or a house bordering the club"—rules that resulted, as one golfer has noted, in a club of "120 charter and founding members, all from below the Mason-Dixon Line: seven from Augusta, Georgia and the remainder from Florida, Alabama, and North Carolina." "Such folks," as Bradley Klein once wrote in Golfweek, "would have learned in elementary school that Wade Hampton III, 1818-1902, who owned the land on which the club now sits, was a prominent Confederate general." That is, to become a member of the Wade Hampton Golf Club you probably knew a great deal about the history of Wade Hampton III—and were pretty much okay with that.

The existence of the Wade Hampton Golf Club does not, to be sure, demonstrate a continuity between the slaveowners of the Old South and the present membership of the club that bears Hampton's name. It does, however, suggest a question: if it is true, as many Civil War historians now say, that prior to 1860 there was a conspiracy to maintain an oligarchic form of government, then what are we to make of a present in which—as former Secretary of Labor Robert Reich recently observed—"the richest one-hundredth of one percent of Americans now hold over 11 percent of the nation's total wealth," a proportion greater than at any time since before 1929 and the start of the Great Depression? Surely, one can only surmise, the answer is easier to find than a mountain hideaway far above the Appalachian clouds, and requires no poetic vision to see.


Miracles Alone

They say miracles are past; and we have our
philosophical persons, to make modern and familiar, things supernatural and causeless.
All's Well That Ends Well
Act II, Scene 3

"If academic writing is to become expansive again," wrote Joshua Rothman in The New Yorker a year ago, in one of the more Marxist sentences to appear in a mainstream publication lately, "academia will probably have to expand first." What Rothman was referring to was the minor controversy set off by a piece by Nicholas Kristof in the New York Times entitled "Professors, We Need You!"—a rant attacking the "unintelligibility" of contemporary academic writing, blah blah blah. Rothman's take on the business—as a former graduate student himself—is that the increasing obscurity of the superstructure of academic writing is the result of an ever-smaller base: "the audience for academic work has been shrinking," he says, and so building "a successful academic career" requires "serially impress[ing] very small groups of people," like journal editors and hiring committees. So, to Rothman, turning academic writing around would mean expanding the university system: that is, making it not so terribly difficult to get an academic job. To put it another way: in order to make academics visible to the people, it would probably help to allow the people to become academics.

To very many current academics, however, that's precisely what is off the table, because their work involves questioning the assumption that powers Rothman's whole proposal: that writing for large numbers of people requires writing that does not demand an enormous amount of training to be read. A lot of academics in today's humanities departments would "historicize" that assumption by saying that it only came into being with the Protestant Reformation at the beginning of the modern era, which held that the Bible could be read, and understood, by anyone—not just a carefully chosen set of acolytes capable of translating the holy mysteries to the laity, as in Roman Catholic practice. Academics of this sort might then make reference, as Benedict Anderson did in his Imagined Communities, to "print capitalism": the way the growth of newspapers and other printed materials demonstrated that writing untethered from a clerical caste could generate huge profits. And so on.

The defenses of obscure and difficult writing offered by such academics as Judith Butler, however, do not always take that turn: very often, difficult writing is defended on the grounds that such esoteric efforts "can help point the way to a more socially just world," because "language plays an important role in shaping and altering our common or 'natural' understanding of social and political realities." That, one supposes, might be true—and it's certainly true that what's known as the "cultural left" has, as the philosopher Richard Rorty once remarked, made all of us more sensitive to the peculiar ways in which language can influence how people perceive other people. But it's also true that this line of thinking fails to think through the full meaning of standing against intelligibility.

Most obviously, though this point is often obscured, it means standing against the doctrine known as "naturalism," a notion defined by the Stanford Encyclopedia of Philosophy as "asserting that reality has no place for 'supernatural' or other 'spooky' kinds of entity." At least since Mark Twain adapted naturalism to literature by saying that "the personages of a tale shall confine themselves to possibilities and let miracles alone," a baseline belief in naturalism has been what created the kind of widely literate public Kristof's piece requires. Mysteries, that is, can only be understood by someone initiated into them: hence, to proceed without initiates requires outlawing mystery.

As should be obvious but apparently isn’t, it’s only absent a belief in mystery that anyone could, in Richard Rorty’s words, “think of American citizenship as an opportunity for action”—rather than, as Rorty laments so much of this so-called “cultural left” has become, possessed by the “spirit of detached spectatorship.” Difficult writing, in other words, might be able to do something for small groups, but it cannot, by definition, help larger ones—which is to say that it is probably no accident that Judith Butler should have left just what she meant by “socially just” undefined, because by the logic of her argument it almost certainly does not include the vast majority of America’s, or the world’s, people.

"In the early decades of" the twentieth century, Richard Rorty once wrote, "when an intellectual stepped back from his or her country's history and looked at it through skeptical eyes, the chances were that he or she was about to propose a new political initiative." That tradition is, it seems, nearly lost: today's "academic Left," Rorty wrote then, "has no projects to propose to America, no vision of a country to be achieved by building a consensus on the need for specific reforms." For Rorty, however, that seems blamable on the intellectuals themselves—a kind of "blaming the victim," or trahison des clercs, that is itself a betrayal of the insights of naturalism: by those lights, it is no more possible that large numbers of smart people should have inexplicably given up on their political efforts entirely than that a flaming shrubbery could talk.

It’s that possibility that the British literary critic Terry Eagleton appears to have considered when, in his The Illusions of Postmodernism, he suggests that the gesture of denying that “there is any significant distinction between discourse and reality”—a denial specifically aimed at naturalism’s attempt to rule out the mysterious—may owe more to “the deadlocked political situation of a highly specific corner of the globe” than it does to the failures of the intellectuals. What I presume Eagleton is talking about is what Eric Alterman, writing in The Atlantic, called “the conundrum of a system that, as currently constructed, gives the minority party no strategic stake in sensible governance.” Very many of the features of today’s American government, that is, are designed not to produce good government, but rather to enable a minority to obstruct the doings of the majority—the famous “checks and balances.”

While American civic discourse often celebrates those supposed features, as I've written before, the work of historians like Manisha Sinha and Leonard Richards shows that they are due not to the foresight of the Founding Fathers but to the need to protect the richest minority of the then-newborn republic: the slaveowners. It isn't any accident that, as Alterman says, it "has become easier and easier for a determined minority to throw sand in the gears of the legislative process": the very structure of the Senate, for example, allows "the forty Republican senators … [who] represent barely a third of the US population" to block any legislation, even before considering the more obscure senatorial tools, like the filibuster and the hold. These devices, as the work of historians shows, were originally developed in order to protect slavery; as Lawrence Goldstone put the point in the New Republic recently, during the Constitutional Convention of 1787 "slaveholders won a series of concessions," among them "the makeup of the Senate" and the method of electing a president. These hangovers linger on, defending interests perhaps less obviously evil than the owners of slaves, but interests by and large not identical with those of the average citizen: today, those features are all check and no balance.

Such an explanation, I think, is more likely than Rorty's stance of casting blame on people like Judith Butler, as odious as her beliefs really are. It might better explain how, for instance, as the writer Seymour Krim described in his essay "The American Novel Made Me," intellectuals began "in the mid 50s [1950s] to regard the novel as a used-up medium," so that the "same apocalyptic sense of possibility that we once felt in the U.S. novel now went into its examination": what Krim calls "the game" of "literary criticism." In that game, what matters isn't the description of reality itself, but rather the methods of description by which "reality" is recorded: in line with Rorty's idea of the intellectual turn against reality, not so much the photograph as the inner workings of the camera. Yet while that pursuit might appear to some ridiculous and even objectively harmful, blaming people, even smart people, for having become involved in such efforts because you have blocked their real path to advancement is like blaming butter for melting in the sun.

What all of this may show, in other words, is that for academic writing to become expansive again, as Joshua Rothman wishes, far more than academia will have to expand, though that is almost certainly part of it. What it will also require is a new band of writers and politicians, recommitted to the tenets of naturalism and determined, as Krim said about "the American realistic novel of the mid to late 1930s," to be "'truthful' in recreating American life." To Kristof or Rothman, that's a task unlikely even to be undertaken in our lifetimes, much less accomplished. Yet it ought to be acknowledged that Kristof's and Rothman's own efforts imply that a hunger exists that may not know its name—that a wanderer is abroad, holding aloft a lantern flickering not because of a rising darkness, but because of an onrushing dawn.


For Miracles Are Ceased

Turn him to any cause of policy,
The Gordian knot of it he will unloose …
Henry V


For connoisseurs of Schadenfreude, one of the most entertaining diversions of the past half-century or so has been the turf war fought out in the universities between the sciences and the humanities now that, as novelist R. Scott Bakker has written, "at long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities." A lot of what's happened in the humanities since the 1960s—the "canon wars," the popularization of Continental philosophy, the establishment of various sorts of "studies"—could be described as a disciplinary battle with the sciences, and not the "political" war it is often advertised as; under that description, the vaunted outreach of the humanities to previously underserved populations stops looking quite so noble and starts looking more like the efforts, a century ago, of robber-baron industrialists to employ minority scabs against striking workers. The comparison is not meant flippantly; it suggests that the history of the academy since the 1960s looks less like the glorious march toward inclusion its proponents sometimes portray it as—and rather more like the opening moves of an ideological war designed to lay the foundation for the impoverishment of all America.

According to University of Illinois at Chicago professor of literature Walter Benn Michaels, after all, today's humanistic academy has largely become the "human resources department of neoliberalism." Michaels' work suggests, in fact, that the "real" purpose of the professoriate's promotion of the interests of women and minorities has not been the sheer justice of the cause, but rather the preservation of its own antiquated and possibly ridiculous methods of "scholarship." That bargain, however—if there was one—may be said to have had unintended consequences: among them, the reality that some CEOs now enjoy pay thousands of times that of the average worker.

Correlation is not causation, of course, but it does seem inarguable that, as former Secretary of Labor Robert Reich wrote recently in Salon, Americans have forgotten the central historical lesson of the twentieth century: that a nation’s health (and not just its economic health) depends on consumer demand. As Reich wrote, contrary to those who argue in favor of some form of “trickle down” economics, “America’s real job creators are consumers, whose rising wages generate jobs and growth.” When workers get raises, they have “enough purchasing power to buy what expanding businesses [have] to offer.” In short (pardon, Secretary Reich), “broadly shared prosperity isn’t just compatible with a healthy economy that benefits everyone—it’s essential to it.” But Americans have, it seems, forgotten that lesson: as many, many observers have demonstrated, American wages have largely been stagnant since the early 1970s.

Still, that doesn't mean the academy is entirely to blame: for the most part, it's only because of the work of academics that the fact of falling wages is known with any certainty—though it's also fair to say that the evidence can be gathered by a passing acquaintance with reality. Yet it's also true that, as New York University professor of physics Alan Sokal averred some two decades ago, much of the work of the humanities since the 1960s has been devoted to undermining, in the name of one liberatory vision or another, the "stodgy" belief "that there exists an external world, [and] that there exist objective truths about it." Such work has arguably had a version of the political effect often bombastically claimed for it—undoubtedly, there are many more people from previously unrepresented groups in positions of authority throughout American society today than there were before.

Yet, as the Marxist scholars often derided by their “postmodernist” successors knew—and those successors appear to ignore—every advance has its cost, and interpreted dialectically the turn of the humanities away from scientific naturalism has two possible motives: the first, as mentioned, the possibility that territory once the exclusive province of the humanities has been invaded by the sciences, and that much of the behavior of professors of the humanities can be explained by fear that “the traditional humanities are about to be systematically debunked” by what Bakker calls “the tremendous, scientifically-mediated transformations to come.” In the wake of the “ongoing biomechanical renovation of the human,” Bakker says, it’s become a serious question whether “the idiom of the humanities can retain cognitive legitimacy.” If Bakker’s suggestion is correct, then the flight of the humanities from the sciences can be interpreted as something akin to the resistance of old-fashioned surgeons to the practice of washing their hands.

There is, however, another possible interpretation: one that accounts for the similarity between the statistical evidence of rising inequality since the 1970s gathered by many studies and the evidence in favor of the existence of global warming—a comparison not made lightly. In regard to both, there's a case to be made that many of the anti-naturalistic doctrines developed in the academy have conspired with the mainstream media's tendency to ignore reality to prevent, rather than aid, political responses—a conspiracy that is itself only encouraged by the current constitutional structure of the American state, which according to some academic historians (of the non-"postmodern" sort) was originally designed precisely to ignore, and to prevent action on, another kind of overwhelming but studiously disregarded reality.

In early March 1860, Abraham Lincoln, not yet a presidential candidate, addressed an audience at New Haven, Connecticut; "the question of Slavery," he said during that speech, "is the question, the all absorbing topic of the day." Yet it was also the case, Lincoln observed, that while in private this was the single topic of many conversations, in public it was taboo: according to slavery's defenders, Lincoln said, opponents of slavery "must not call it wrong in the Free States, because it is not there, and we must not call it wrong in the Slave States because it is there," while at the same time it should not be called "wrong in politics because that is bringing morality into politics," and should not be called "wrong in the pulpit because that is bringing politics into religion." In this way, even as slavery's defenders could admit that slavery was wrong, they could also deny that there was any "single place … where this wrong thing can properly be called wrong!" Thus, despite the fact that slavery was of towering importance, it was also to be disregarded.

There were, of course, entirely naturalistic reasons for that premeditated silence: as documented by scholars like Leonard Richards and Garry Wills, the structure of American government itself is due to a bargain between the free and the slave states—a bargain that essentially ceded control of the federal machinery to the South in exchange for their cooperation. The evidence is compelling: “between Washington’s election and the Compromise of 1850,” as Richards has noted for example, “slaveholders controlled the presidency for fifty years, the Speaker [of the House]’s chair for forty-one years, and the chairmanship of House Ways and Means [the committee that controls the federal budget] for forty-two years.” By controlling such key offices, according to these scholars, slaveowners could prevent the federal government from taking any action detrimental to their interests.

The continued existence, long after slavery itself, of structures originally designed to ensure Southern control—among them the Senate and the Supreme Court, institutions well known to constitutional scholars as offerings to society's "aristocratic" interests, even if the precise nature of that interest is never explicitly identified—may in turn help to explain, naturalistically, the relative failure of naturalistic, scientific thinking in the humanities over the past several decades, even as the public need for such thinking has only increased. Such, at least, is what might be termed the "positive" interpretation of humanistic antagonism toward science: not so much an interested resistance to progress as a principled reaction to a continuing drag not just on the political interests of Americans, but perhaps even on the progress of knowledge and truth itself.

What's perhaps odd, to be sure, is that no one from the humanities has dared to make this case publicly—excluding, that is, a handful of historians and law professors, most of them far from the scholarly centers of excitement. On the contrary, jobs in the humanities generally go to people who urge, like European lecturer in art history and sociology Anselm Joppe, some version of a "radical separation from the world of politics and its institutions of representation and delegation," and who ridicule those who "still flock to the ballot box"—proposals often connected, as Joppe's are, to a ban on television and an opposition to both genetically modified food and infrastructure investment. Still, even when academics have made their case in a responsible way—as Richards and Wills and others have—none has connected that struggle to the larger issues of the humanities generally. Of course, to make such connections—to make such a case—would require such professors to climb down from the ivory tower that is precisely the perch enabling them to do the sort of thinking I have attempted to present here, a descent that would inevitably present innumerable, and perhaps insuperable, difficulties. Yet without such attempts it's difficult to see how either the sciences or the humanities can be preserved—to say nothing of the continuing existence of the United States.

Still, there is one “positive” possibility: if none of them do, then the opportunities for Schadenfreude will become nearly limitless.

Several And A Single Place


What’s the matter,
That in these several places of the city
You cry against the noble senate?
Coriolanus 


The explanation, says labor lawyer Thomas Geoghegan, possesses amazing properties: he can, the one-time congressional candidate says, "use it to explain everything … because it seems to work on any issue." But before trotting out what that explanation is, let me select an issue that might appear difficult to explain: gun control, and more specifically just why, as Christopher Ingraham of the Washington Post wrote in July, "it's never the right time to discuss gun control." "In recent years," as Ingraham says, "politicians and commentators from across the political spectrum have responded to mass shootings with an invocation of the phrase 'now is not the time,' or a close variant." That inability even to discuss gun control is a tremendously depressing fact, at least insofar as you regard gun deaths as a needless waste of lives—until you realize that we Americans have been here before. And that realization demonstrates, just maybe, that Thomas Geoghegan has a point.

Over a century and a half ago, Americans were facing another issue that, in the words of one commentator, "must not be discussed at all." It was so grave an issue, in fact, that very many Americans found "fault with those who denounce it"—a position that this commentator found odd: "You say that you think [it] is wrong," he observed, "but you denounce all attempts to restrain it." That's a pretty strange position, because who thinks something is wrong, yet is "not willing to deal with [it] as a wrong?" What other subject could be a wrong, but should not be called "wrong in politics because that is bringing morality into politics," and conversely should not be called "wrong in the pulpit because that is bringing politics into religion"? To sum up, this commentator said, "there is no single place, according to you, where this wrong thing can properly be called wrong!"

The place where this was said was New Haven, Connecticut; the time, March of 1860; the speaker, a failed senatorial candidate now running for president for a brand-new political party. His name was Abraham Lincoln.

He was talking about slavery.

*                                            *                                        *

To many historians these days, much about American history can be explained by the fact that, as historian Leonard Richards of the University of Massachusetts put it in his 2000 book, The Slave Power: The Free North and Southern Domination, 1780-1860, so “long as there was an equal number of slave and free states”—which was more or less official American policy until the Civil War—“the South needed just one Northern vote to be an effective majority in the Senate.” That meant that controlling “the Senate, therefore, was child’s play for southern leaders,” and so “time and again a bill threatening the South [i.e., slavery above all else] made its way through the House only to be blocked in the Senate.” It’s a stunningly obvious point, at least in retrospect—at least for this reader—but I’d wager that few, if any, Americans have really thought through the consequences of this fact.

Geoghegan for example has noted—as he put it in 1998's The Secret Lives of Citizens: Pursuing the Promise of American Life—that even today the Senate makes it exceedingly difficult to pass legislation: as he wrote, at present only "two-fifths of the Senate, or forty-one senators, can block any bill." That is, it takes a supermajority of at least sixty senatorial votes to overcome the threat known as the "filibuster." The filibuster, however, is not the only anti-majoritarian feature of the Senate, which is also equipped with such quaint customs as the "secret hold" and the quorum call and so forth, each of which can be used to delay a bill's hearing—and so buy time to squelch potential legislation. Yet these radically disproportionate senatorial powers merely mask the basic inequality of representation at the heart of the Senate as an institution.

As political scientists Frances Lee and Bruce Oppenheimer point out in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, the Senate is, because it makes small states the equal of large ones, “the most malapportioned legislature in the democratic world.” As Geoghegan has put the point, “the Senate depart[s] too much from one person, one vote,” because (as of the late 1990s) “90 percent of the population base as represented in the Senate could vote yes, and the bill would still lose.” Although Geoghegan wrote that nearly two decades ago, that is still largely true today: in 2013, Dylan Matthews of The Washington Post observed that while the “smallest 20 states amount to 11.27 percent of the U.S. population,” their senators “can successfully filibuster [i.e., block] legislation.” Thus, although the Senate is merely one antidemocratic feature of the U.S. Constitution, it’s an especially egregious one that, by itself, largely prevented a serious discussion of slavery in the years before the Civil War—and today prevents the serious discussion of gun control.

The headline of John Bresnahan's 2013 article in Politico about the response to the Sandy Hook massacre, for example, was "Gun control hits brick wall in Senate." Bresnahan quoted Nevadan Harry Reid, the Senate Majority Leader at the time, as saying that "the overwhelming number of Senate Republicans—and that is a gross understatement—are ignoring the voices of 90 percent of the American people." The final vote was 54-46: in other words, a majority of the Senate was in favor of controls, but because the pro-control senators did not have a supermajority, the measure failed. In short, the vote was a near-perfect illustration of how the Senate can kill a measure that 90 percent of Americans favor.
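To spell out the arithmetic, using only the figures already given: the measure needed the sixty-vote supermajority described above, and

\[
54 < 60 = \tfrac{3}{5} \times 100,
\]

so a bill commanding a 54-46 majority still died six votes short.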

And you know what? Whatever you think about gun control as an issue, if 90 percent of Americans want something, and what prevents them from getting it is not just some silly rule—but the very rule that protected slavery—well then, as Abraham Lincoln might tell us, that's a problem.

It's a problem because far from the Senate being—as George Washington supposedly said to Thomas Jefferson—the saucer that cools off politics, it's actually a pressure cooker that exacerbates issues rather than working them out. Imagine, say, that the South had not had the Senate to protect its "peculiar institution" in the years leading up to the Civil War: immigration to the North would gradually have turned the tide in Congress, which might have led to a series of small pieces of legislation that, eventually, abolished slavery.

Perhaps that would not have been a good thing: Ta-Nehisi Coates, of The Atlantic, has written that every time he thinks of the 600,000-plus deaths that occurred as a result of the Civil War, he feels "positively fucking giddy." That may sound horrible to some, of course, but there is something to the notion of "redemptive violence" when it comes to that war; Coates for instance cites the contemporary remarks of Private Thomas Strother, United States Colored Troops, in the Christian Recorder, the nineteenth-century paper of the African Methodist Episcopal Church:

To suppose that slavery, the accursed thing, could be abolished peacefully and laid aside innocently, after having plundered cradles, separated husbands and wives, parents and children; and after having starved to death, worked to death, whipped to death, run to death, burned to death, lied to death, kicked and cuffed to death, and grieved to death; and worst of all, after having made prostitutes of a majority of the best women of a whole nation of people … would be the greatest ignorance under the sun.

"Were I not the descendant of slaves, if I did not owe the invention of my modern self to a bloody war," Coates continues, "perhaps I'd write differently." Maybe in some cosmic sense Coates is wrong, and violence is always wrong—but I don't think I'm in a position to judge, particularly since I, as in part a descendant of Irish men and women in America, am aware that the Irish themselves may have codified that sort of "blood sacrifice theory" in the General Post Office of Dublin during Easter Week of 1916.

Whatever you think of that, there is certainly something to the idea that, because slaves were the single biggest asset in the entire United States in 1860, there was little chance the South would have agreed to end slavery without a fight. As historian Steven Deyle has noted in his Carry Me Back: The Domestic Slave Trade in American Life, the value of American slaves in 1860 was "equal to about seven times the total value of all currency in circulation in the country, three times the value of the entire livestock population, twelve times the value of the entire U.S. cotton crop and forty-eight times the total expenditure of the federal government"—certainly far more value than it takes to start a war. But then, had slavery not had, in effect, government protection during those antebellum years, it's questionable whether slaves would ever have become such valuable commodities in the first place.

Far from "cooling" things off, in other words, it's entirely likely that the U.S. Senate, and the other anti-majoritarian features of the U.S. Constitution, actually act to inflame controversy. By ensuring that one side does not need to come to the bargaining table, such oddities merely postpone—they do not prevent—the day of reckoning. They build up fuel, ensuring that when the day finally arrives, it is all the more terrible. Or, to put it in the words of an old American song: these American constitutional idiosyncrasies merely trample "out the vintage where the grapes of wrath are stored."

That truth, it seems, marches on.

Extra! Extra! White Man Wins Election!


Whenever you find yourself on the side of the majority,
it is time to pause and reflect.
—Mark Twain

One of the more entertaining articles I've read recently appeared in the New York Times Magazine last October; written by Ruth Padawer and entitled "When Women Become Men At Wellesley," it's about how the newest "challenge," as the terminology goes, facing American women's colleges these days is the rise of students "born female who identified as men, some of whom had begun taking testosterone to change their bodies." The beginning of the piece tells the story of "Timothy" Boatwright, a woman who'd decided she felt more like a man, and how Boatwright had decided to run for the post of "multicultural affairs coordinator" at the school, with the responsibility of "promoting a 'culture of diversity' among students and staff and faculty members." After three "women of color" dropped out of the race for various unrelated reasons, Boatwright was the only candidate left—which meant that Wellesley, a women's college, remember, would have as its next "diversity" official a white man. Yet according to Padawer this result wasn't necessarily as ridiculous as it might seem: "After all," the Times reporter said, "at Wellesley, masculine-of-center students are cultural minorities." In the race to produce more and "better" minorities, then, Wellesley has produced a win for the ages—a result that, one might think, would cause reasonable people to stop and consider: just what is it about American society that is causing Americans constantly to redescribe themselves as one kind of "minority" or another? Although the easy answer is "because Americans are crazy," the real answer might be that Americans are rationally responding to the incentives created by their political system: a system originally designed, as many historians have begun to realize, to protect a certain minority at the expense of the majority.

That, after all, is a constitutional truism, often repeated like a mantra by college students and other species of cretin: the United States Constitution, goes the zombie-like repetition, was designed to protect against the "tyranny of the majority"—even though that exact phrase was first used by John Adams in 1788, a year after the Constitutional Convention. It is true, however, that Number 10 of the Federalist Papers does mention "the superior force of an interested and overbearing majority"—yet what those who discuss the supposed threat of the majority never seem to mention is that, while the United States Constitution is indeed constructed with a large, even bewildering, variety of protections for the "minority," the minority being protected at the moment of the Constitution's writing was not some vague and theoretical interest: the authors of the Constitution were not professors of political philosophy sitting around a seminar room. Instead, the United States Constitution was, as political scientist Michael Parenti has put it, "a practical response to immediate material conditions"—in other words, the product of political horse-trading that resulted in a document protecting a very particular, and real, minority; one with names and families and, more significantly, a certain sort of property.

That property, as historians today are increasingly recognizing, was slavery. It isn't for nothing that, as historian William Lee Miller has observed, not only was it the case that "for fifty of [the nation's] first sixty-four [years], the nation's president was a slaveholder," but also that the "powerful office of the Speaker of the House was held by a slaveholder for twenty-eight of the nation's first thirty-five years," and that the president pro tem of the Senate—one of the more obscure, yet still powerful, federal offices—"was virtually always a slaveholder." Both Chief Justices of the Supreme Court during the first five decades of the nineteenth century, John Marshall and Roger Taney, were slaveholders, as were very many federal judges and other, lesser, federal officeholders. As historian Garry Wills, author of Lincoln At Gettysburg among other volumes, has written, "the management of the government was disproportionately controlled by the South." The reason why all of this was so was, as it happens, very ably explained at the time by none other than … Abraham Lincoln.

What Lincoln knew was that there was a kind of "thumb on the scale" whenever Northerners like the two Adamses, John and John Quincy, were weighed in national elections—a not-so-mysterious force that denied those Northern, anti-slavery men second terms as president. Lincoln himself explained what that force was in the speech he gave at Peoria, Illinois, that signaled his return to politics in 1854. There, Lincoln observed that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

What Lincoln is talking about here is the notorious "Three-Fifths Compromise": Article I, Section 2, Paragraph 3 of the United States Constitution. Under that proviso, slave states were entitled to count toward their representation in Congress "three fifths of all other persons"—the "other persons" in question being, of course, Southern slaves. And what the future president—the first president, it might be added, to be elected without the assistance of that ratio (a fact that would have, as I shall show, its own consequences)—was driving at was the effect this mathematical ratio was having on the political landscape of the country.

As Lincoln remarked in the same Peoria speech, the Three-Fifths Compromise meant that "five slaves are counted as being equal to three whites," which meant that, as a practical matter, "it is an absolute truth, without an exception, that there is no voter in any slave State, but who has more legal power in the government, than any voter in any free State." To put it more plainly, Lincoln said that the three-fifths clause "in the aggregate, gives the slave States, in the present Congress, twenty additional representatives." Since the Constitution gave the same advantage in the Electoral College as it gave in the Congress, the reason for results like, say, the Adamses' lack of presidential staying power isn't that hard to discern.
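To make the arithmetic behind those claims explicit (a brief gloss, using only the clause's own formula and the figures Lincoln cites above), the three-fifths clause set each state's share of the House, and hence of the Electoral College, by the count

\[
\text{apportionment population} \;=\; \text{free persons} \;+\; \tfrac{3}{5}\times\text{slaves},
\]

and Lincoln's comparison of Maine and South Carolina checks out on his own numbers:

\[
2 \times 274{,}567 = 549{,}134, \qquad 581{,}813 - 549{,}134 = 32{,}679.
\]

Equal delegations despite that gap in white population are precisely what let Lincoln conclude that each voter in South Carolina carried "more than the double" of the legal power of a voter in Maine.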

"One of those who particularly resented the role of the three-fifths clause in warping electoral college votes," notes Miller, "was John Adams, who would probably have been reelected president over Thomas Jefferson in 1800 if the three-fifths ratio had not augmented the votes of the Southern states." John Quincy himself took part in two national elections, in 1824 and 1828, that were skewed by what was termed at the time the "federal ratio"—which is to say that the reason why both Adamses were one-term presidents likely had rather more to do with the form of the American government than with the content of their character, despite the representations of many historians after the fact.

John Quincy Adams himself was quite aware of the effect of the "federal ratio." The Hartford Convention of 1815, led by New England Federalists, had recommended ending the advantage of the Southern states within the Congress, and in 1843 John Quincy's son Charles Francis Adams persuaded the Massachusetts legislature to pass a measure that John Quincy would himself introduce to the U.S. Congress, "a resolution proposing that the Constitution be amended to eliminate the three-fifths ratio," as Miller has noted. There were three more such attempts in 1844, three years before Lincoln's arrival, all of which were soundly defeated, as Miller observes, by totals "skewed by the feature the proposed amendment would abolish." The three-fifths ratio was not simply a bête noire of the Adamses personally; all of New England was aware that the three-fifths ratio protected the interests of the South in the national government—which is one reason why, prior to the Civil War, "states' rights" was often thought of as a Northern issue rather than a Southern one.

That the South itself recognized the advantages the United States Constitution gave it, specifically through that document's protections of "minority"—in other words, slaveowner—interests, can be seen by reference to the reasons the South gave for starting the Civil War. South Carolina's late 1860 declaration of secession, for example (the first such declaration), said outright that the state's act of secession was provoked by the election of Abraham Lincoln—in other words, by the election of a presidential candidate who did not need the electoral votes of the South.

Hence, South Carolina's declaration said that a "geographical line has been drawn across the Union, and all the States north of that line have united in the election of a man to the high office of President of the United States whose opinions and purposes are hostile to slavery." The election had been enabled, the document went on to say, "by elevating to citizenship, persons who, by the supreme law of the land, are incapable of becoming citizens, and their votes have been used to inaugurate a new policy, hostile to the South." Presumably, this is a veiled reference to the population gained by the Northern states over the course of the nineteenth century—a trend that had been steadily eroding the advantage the South had enjoyed at the expense of the North when the Constitution was enacted, and one that only accelerated during the 1850s.

As one Northern newspaper observed in 1860, in response to the early figures then being released by the United States Census Bureau, the "difference in the relative standing of the slave states and the free, between 1850 and 1860, inevitably shows where the future greatness of our country is to be." To Southerners the data had a different meaning: as Adam Goodheart noted in a piece for Disunion, the New York Times' series on the Civil War, "the editor of the New Orleans Picayune noted that states like Michigan, Wisconsin, Iowa and Illinois would each be gaining multiple seats in Congress" while Southern states like Virginia, South Carolina and Tennessee would be losing seats. To the Southern slaveowners who would drive the rush to secession during the winter of 1860, the fact that they were on the losing end of a demographic war could not have been far from mind.

Historian Leonard L. Richards of the University of Massachusetts, for example, has noted that when Alexis de Tocqueville traveled the American South in the early 1830s, he discovered that Southern leaders were "noticeably 'irritated and alarmed' by their declining influence in the House [of Representatives]." By the 1850s, those population trends were only accelerating: concerning the gains in population the Northern states were realizing through foreign immigration—presumably the subject of South Carolina's complaint about persons "incapable of becoming citizens"—Richards cites Senator Stephen Adams of Mississippi, who "blamed the South's plight"—that is, its declining population relative to the North—"on foreign immigration." As Richards says, it was obvious to anyone paying attention that if "this trend continued, the North would in fifteen years have a two to one majority in the House and probably a similar majority in the Senate." It seems unlikely that the most intelligent Southern leaders were not cognizant of these elementary facts.

Their intellectual leaders, above all John Calhoun, had after all designed a political theory to justify the Southern, i.e., "minority," dominance of the federal government. In Calhoun's A Disquisition on Government, the South Carolina senator argued that a government "under the control of the numerical majority" would tend toward "oppression and abuse of power"—it was to correct this tendency, he writes, that the constitution of the United States made its different branches "the organs of the distinct interests or portions of the community; and to clothe each with a negative on the others." It is, in other words, a fair description of the constitutional doctrine known as the "separation of powers," a doctrine that Calhoun barely dresses up as something other than what it is: a brief for the protection of the right to own slaves. Every time anyone utters the phrase "protecting minority rights," then, they are, wittingly or not, invoking the ideas of John Calhoun.

In any case, such a history could explain just why it is that Americans are so eager to describe themselves as a “minority,” of whatever kind. After all, it was the purpose of the American government initially to protect a particular minority, and so in political terms it makes sense to describe oneself as such in order to enjoy the protections that, initially built into the system, have become so endemic to American government: for example, the practice of racial gerrymandering, which has the perhaps-beneficial effect of protecting a particular minority—at the probable expense of the interests of the majority. Such a theory might perhaps also explain something else: just how it is, as professor Walter Benn Michaels of the University of Illinois at Chicago has remarked, that after “half a century of anti-racism and feminism, the U.S. today is a less equal society than was the racist, sexist society of Jim Crow.” Or, perhaps, how the election of—to use that favorite tool of American academics, quote marks to signal irony—a “white man” at a women’s college can, somehow, be a “victory” for whatever the American “left” is now. The real irony, of course, is that, in seeking to protect African-Americans and other minorities, that supposed left is merely reinforcing a system originally designed to protect slavery.

Outrageous Fashion


In difficult times, fashion is always outrageous.
—Elsa Schiaparelli.


The kid "wearing a bolo tie, a regular tie, Native American beads, a suit coat worn under a flannel shirt, and socks but no shoes," as Mother Jones described one protestor's outfit, wasn't the worst of Occupy Wall Street's stylistic offenses against civilization—for Thomas Frank, founder of the small magazine The Baffler, the stylistic issues of the protests went much deeper than sartorial choice. To Frank, the real crime of the movement was that it used "high-powered academic disputation as a model for social protest": Occupy, he argues, chose "elevated jargonese" over actual achievements. To some, such criticisms might sound ridiculous—how can anyone dispute matters of style when serious issues are at stake? But in fact matters of style are the only thing at stake: the stylistic choices of Occupy, and of movements like it, ultimately only fuel precisely the kinds of exploitation Occupy is supposedly meant to protest. There are real goals—chief among them a reorganization of the American government on more democratic lines—that an American left could conceivably achieve in the United States today. If only, that is, these movements were willing to sacrifice their style.

To say such things is, of course, super-uncool. In order to set itself apart from such unhipness, the style of Occupy takes two forms, the first being the kind of academese Frank castigates. Here is one sentence Frank cites, from an Occupier objecting to someone else’s complaint that none of the Occupiers would claim to speak for the whole movement: “I would agree, an individualism that our society has definitely had inscribed upon it and continues to inscribe upon itself, ‘I can only speak for myself,’ the ‘only’ is operative there, and of course these spaces are being opened up …” And so on. It should be said that this is actually a comparatively comprehensible sentence next to some the Occupiers produced.

The other rhetorical style practiced by the Occupiers is a virtually sub-verbal kind of soup. Here, for instance, is the first sentence of an article entitled “How Occupy Wall Street Began,” on the website occupytheory.org: “One of the protests that have been practiced in different countries is the Occupy Wall Street Movement.” This, as any competent speaker will recognize, is barely English, much less effective writing designed to persuade a national audience. The counterargument, of course, is that it gives the writer—who is not named—something to do, and appeals to other sub-literates. But while those goals are perhaps worthy enough, they are at once myopic and hyperopic.

They are nearsighted in the sense that, while giving people something to do is nearly always laudable, one might imagine that telling the story of the movement’s origins is a task important enough to delegate to someone capable of telling it. They are farsighted—in this case, not a compliment—in the sense that, while being “inclusive” is to be sure important, people who are at best para-literate are unlikely to hold positions of authority, and hence unlikely to be making decisions in the here and now. Perhaps someday, many years from now, such things might matter. But as the economist John Maynard Keynes remarked, in the long run we are all dead—which is to say that none of this would matter had Occupy achieved any results.

“There are no tangible results from the Occupy movement,” the “social entrepreneur” Tom Watson ruefully concluded in Forbes magazine a year after the end of the Zuccotti Park occupation—no legislation, no new leaders, no new national organization. By contrast, Frank notes that in the same timespan the Tea Party—often thought of as a populist movement like Occupy, only with opposite goals—managed to elect a majority in Congress, and even got Paul Ryan, the archconservative congressman who seems to misunderstand basic mathematics, onto the 2012 presidential ticket. The Tea Party, in other words, chose to make real inroads into power—a point Occupiers would presumably counter by observing that the Tea Party is, at least in part, funded by wealthy interests. It never seems to occur to them that those interests fund such efforts precisely because the Tea Party serves them—that is, because the Tea Party offers a clear proposition: funding A will produce political result B.

For the Occupiers and their sympathizers, however, “the ‘changes’ that Occupy failed to secure” are “not really part of the story,” says Frank. “What matters” to the Occupiers, he writes, “is the carnival—all the democratic and nonhierarchical things that went on in Zuccotti Park.” Object that—shockingly—sitting in a park for two months does not appear to have done anything tangible for anybody, and you’ve just exposed yourself as part of the problem, man—not to mention revealed yourself as incredibly uncool.

As Frank points out, however, “here we come to the basic contradiction of the campaign”: to protest Wall Street in 2011 was to protest “deregulation and tax-cutting”—a program advanced “by a philosophy of liberation as anarchic in its rhetoric as Occupy was in reality.” Want anarchy and anti-hierarchy? That’s just what corporate America wants, too. Nothing, I’m sure, delighted the boardrooms of Goldman Sachs or Chase more than to see, or read about, the characters of Zuccotti Park declining to produce demands and thereby refusing to allow what Frank calls the “humorless, doctrinaire adults … back in charge.”

Frank’s charge thereby echoes an argument that has been ongoing in American academia for some time. “Something more insidious than provincialism has come to prominence in the American academy,” the philosopher Martha Nussbaum once charged—“the virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Nussbaum was complaining about trends she saw in feminist scholarship; James Miller, a political scientist, described more broadly how many “radical professors distrust the demand for ‘linguistic transparency,’ charging that it cripples one’s ability ‘to think the world more radically.’” Such professors claim, in other words, “that plain talk is politically perfidious—reinforcing, rather than radically challenging, the cultural status quo.” Hence the need for complex, difficult sentences—a stylistic thesis the Occupiers appear to have embraced wholesale.

Yet what are the consequences of such stylistic choices? I’d suggest that one of them is that certain academic arguments that might have a chance of breaking through to the mainstream, and then making a real difference to actual American lives, are being overlooked in the name of what Frank calls “a gluey swamp of academic talk and pointless antihierarchical posturing.” One such argument is being carefully constructed by the historians Manisha Sinha and Leonard Richards of the University of Massachusetts in books like Richards’ The Slave Power: The Free North and Southern Domination, 1780-1860 and Sinha’s The Counterrevolution of Slavery: Politics and Ideology in Antebellum South Carolina. Such books offer a naturalistic, commonsense explanation for much of the political structure of American life—and thus make it possible to do something about it.

Richards’ book makes clear how “the slaveholders of the South” ran the United States before the Civil War by virtue of anti-majoritarian features built into the Constitution; Sinha’s account demonstrates how those features could have been imported into the Constitution from arrangements already part of the structure of South Carolina’s government. Sinha notes, for instance, that one antebellum South Carolinian described how the “government of South Carolina was an ‘oligarchy’ modeled after the ‘rotten borough system’ of England”—and, placed next to accounts of the writing of the Constitution, her detailed description of that government lends a troubling significance to the prominence of South Carolina’s leaders in the Philadelphia debates during the summer of 1787.

South Carolinians like the younger and elder Charles Pinckneys and Major Pierce Butler had an overwhelming influence over the writing of the Constitution: as David O. Stewart remarks in his history of the convention, The Summer of 1787, “the [South] Carolinians came to Philadelphia with an appetite for work, and they would exercise an outsized influence.” It’s impossible, of course, to summarize in a paragraph or even an essay the details of such books, or the story they tell—but the point is that I shouldn’t have to: they are being ignored despite the fact that they could do far more good for far more Americans than a dozen occupations of Zuccotti Park.

Books like these can do so because, as Abraham Lincoln understood, they tell a comprehensible story—and thus provide a means by which to restructure the American government more democratically. Telling such a story was Lincoln’s technique in his speech of June 16, 1858: “If we could first know where we are, and whither we are tending,” he said, “we could then better judge what to do, and how to do it.” The speech is a model of rhetorical efficiency: it tells the audience—the people—what Lincoln is going to do; it shows that he will begin at the beginning and proceed to the end; and, above all, it promises that he will do so transparently, directly in front of that audience. The speech may be known to you: it is usually called the “House Divided” speech.

Lincoln, undoubtedly, wore a plain Brooks Brothers suit.