Best Intentions

L’enfer est plein de bonnes volontés ou désirs (“Hell is full of good intentions and desires”)
—St. Bernard of Clairvaux. c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. To many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations—yet the reason it could do so was that it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as the (nearly) official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia pressed the point again and again: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s perspective, among the most rarefied heights of literary theorists: it was, after all, the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew, Bérubé has a simple method by which he could avoid hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, the fact that anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—is no reason to dismiss the thought of anti-intentionalism. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent of the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Because the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as Garry Wills writes in Lincoln at Gettysburg: The Words that Remade America, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people,” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the constitution—and thus the reasons for fighting the war—ultimately derived from Wilson.

 

[Image: Not this James Wilson.]

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

[Image: James Wilson (1742-1798). This James Wilson.]

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not from inanimate objects like the “soil,” then it requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law of 1790 (among the very first such legal lectures in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author ask. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The wonder of that discovery is then reinforced by the fact that, in this example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, asked us to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

No Hurry

The man who is not in a hurry will always see his way clearly; haste blunders on blindly.
—Titus Livius (Livy). Ab Urbe Condita. (From the Foundation of the City.) Book 22.

Just inland from the Adriatic coast, northwest of Bari, lies the little village of Canne. In Italian, the name means “reeds”; a nondescript name for a nondescript town. But the name has outlived at least one language, and will likely outlive another, all due to one August day more than 2000 years ago, when two ways of thinking collided; the conversation marked by that day has continued until now, and likely will outlive us all. One line of that conversation was taken up recently by a magazine likely as obscure as the village to most readers: Parameters, the quarterly publication of the U.S. Army War College. The article that continues the conversation whose earliest landmark may be found near the little river of Ofanto is entitled “Intellectual Capital: A Case for Cultural Change,” and the argument of the piece’s three co-authors—all professors at West Point—is that “recent US Army promotion and command boards may actually penalize officers for their conceptual ability.” It’s a charge that, if true, ought first to scare the hell out of Americans (and everyone else on the planet), because it means that the single most fearsome power on earth is more or less deliberately being handed over to morons. But it ought, second, to scare the hell out of people because it suggests that the lesson first taught at the sleepy Italian town has still not been learned—a lesson suggested by two words I withheld from the professors’ charge sheet.

Those words? “Statistical evidence”: as in, “statistical evidence shows that recent US Army promotion and command boards …” What the statistical evidence marshaled by the West Pointers shows, it seems, is that

officers with one-standard-deviation higher cognitive abilities had 29 percent, 18 percent, and 32 percent lower odds, respectively, of being selected early … to major, early to lieutenant colonel, and for battalion command than their one-standard-deviation lower cognitive ability peers.

(A “standard deviation,” for those who don’t know—and the fact that you don’t is part of the story being told here—is a measure of how far from the mean, or average, a given set of data tends to spread: a low standard deviation means that the data tends to cluster pretty tightly, like a river in mountainous terrain, whereas a high one means that the data spreads widely, like a river’s delta.) The study controlled for gender, ethnicity, year group, athleticism, months deployed, military branch, geographic region, and cumulative scores as cadets—and found that “if two candidates for early promotion or command have the same motivation, ethnicity, gender, length of Army experience, time deployed, physical ability, and branch, and both cannot be selected, the board is more likely to select the officer with the lower conceptual ability.” In other words, in the Army, the smarter you are, the less likely you are to advance quickly—which, obviously, may affect just how far you are likely to go at all.
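For readers who would rather see the concept in action, here is a minimal sketch (using invented test scores, not anything from the study) of what a mean, a standard deviation, and an officer sitting “one standard deviation higher” look like in practice:

```python
# Minimal illustration of mean and standard deviation.
# The scores below are made up for the sake of the example;
# they are not data from the Parameters study.
import statistics

scores = [98, 105, 110, 115, 120, 125, 130, 140]  # hypothetical cognitive-test scores

mean = statistics.mean(scores)   # the average score
sd = statistics.stdev(scores)    # sample standard deviation: the typical spread around that average

print(f"mean = {mean:.1f}, standard deviation = {sd:.1f}")
# An officer "one standard deviation higher" in cognitive ability would sit at roughly:
print(f"one standard deviation above the mean = {mean + sd:.1f}")
```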

That may be so, you might say, but maybe it’s just that smarter people aren’t very “devoted,” or “loyal” (or whatever sort of adjective one prefers), at least according to the military. This dichotomy even has a name in such circles: “Athens” vs. “Sparta.” According to the article, “Athens represents an institutional preference for intellectual ability, critical thinking, education, etc.,” while conversely “Sparta represents an institutional preference for motivation, tactical-ability, action-bias, diligence, intensity, physicality, etc.” So maybe the military is not promoting as many “Athenians” as “Spartans”—but maybe the military is simply a more “Spartan” organization than others. Maybe this study is just a bunch of Athenians whining about not being able to control every aspect of life.

Yet, on reflection, that’s a pretty weird way to conceptualize things: why should “Athens” be opposed to “Sparta” at all? In other words, why should it happen that the traits these names attempt to describe are distributed in zero-sum packages? Why should it be that people with “Spartan” traits should not also possess “Athenian” traits, and vice versa? The whole world supposedly divides along just these lines—but I think any of us knows someone who fits neither description, and if so then it seems absurd to think that possessing a “Spartan” trait implies a lack of a corresponding “Athenian” one. As the three career Army officers say, “motivation levels and cognitive ability levels are independent of each other.” Just because someone is intelligent does not mean they are likely to be unmotivated; indeed, it makes more sense to think just the opposite.

Yet, apparently, the upper levels of the U.S. military think differently: they seem to believe that devotion to duty precludes intelligence, and vice versa. We know this not because of stereotypes about military officials, but instead because of real data about how the military allocates its promotions. In their study, the three career Army officers report that they

found significant evidence that regardless of what motivation/diligence category officers were in (low, medium, or high) there was a lower likelihood the Army would select the officers for early promotion or battalion command the higher their cognitive ability, despite the fact that the promotion and selection boards had no direct information indicating each officer’s cognitive ability. (Emp. added).

This latter point is so significant that I highlight it: it demonstrates that the Army is—somehow—selecting against intelligence even when it, supposedly, doesn’t know whether a particular candidate has it or not. Nonetheless, the boards are apparently able to suss it out (which itself is a pretty interesting use of intelligence) in order to squash it, and not only that, squash it no matter how devoted a given officer might be. In sum, these boards are not selecting against intelligence because they are selecting for devotion, or whatever, but instead are just actively attempting to promote less-intelligent officers.

Now, it may then be replied, that may be so—but perhaps fighting wars is not similar to doing other types of jobs. Or as the study puts it: perhaps “officers with higher intellectual abilities may actually make worse junior officers than their average peers.” If so, as the three career Army officers point out, such a situation “would be diametrically opposed to the … academic literature” on leadership, which finds a direct relationship between cognitive ability and success. Even so, however, perhaps war is different: the “commander of a top-tier special operations selection team,” the three officers say, reported that his team rejected candidates who scored too high on a cognitive ability test, on the grounds that such candidates “‘take too long to make a decision’”—despite the fact that, as the three officers point out, “research has shown that brighter people come up with alternatives faster than their average-conceptual-level peers.” Thinking that intelligence inhibits action, in other words, would make war essentially different from virtually every other human activity.

Of course, had that commander been in charge of recruitment during the U.S. Civil War, that would have meant not employing a hard-drinking former captain who had resigned his commission under a cloud, a man later denounced as “an unimaginative butcher in war and a corrupt, blundering drunkard in peace,” who failed in all the civilian jobs he undertook, as a farmer and even a simple store clerk, and came close to bankruptcy several times over the course of his life. That man was Ulysses S. Grant—the man about whom Abraham Lincoln would say, when critics pointed to Grant’s poor record, “I cannot spare this man; he fights!” (In other words, he did not hesitate to act.) Grant would, as is well known, eventually accept the surrender of his adversary, Robert E. Lee, at Appomattox Court House; hence, a policy that runs the risk of not finding Grant in time appears, at best, pretty cavalier.

Or, as the three career Army officers write, “if an organization assumes an officer cannot be both an Athenian and a Spartan, and prefers Spartans, any sign of Athenians will be discouraged,” and so therefore “when the Army needs senior officers who are Athenians, there will be only Spartans remaining.” The opposite view somehow assumes that smart people will still be around when they are needed—but when they are needed, they are really needed. Essentially, this view amounts to saying that the Army should not worry about its ammunition supply, because if something ever happened to require a lot of ammunition the Army could just go get more. Never mind the fact that, at such a moment, everyone else is probably going to want some ammunition too. It’s a pretty odd method of thinking that treats physical objects as more important than the people who use them—after all, as we know, guns don’t kill people, people do.

Still, the really significant thing about Grant is not the man himself, but rather that he represented a particular method of thinking: “I propose to fight it out on this line, if it takes all summer,” Grant wrote to Abraham Lincoln in May 1864; “Hold on with a bulldog grip, and chew and choke as much as possible,” Lincoln replied to Grant a few months later. Although Grant is, as above, sometimes called a “butcher” who won the Civil War simply by throwing more bodies at the Confederacy than the Southerners could shoot, he clearly wasn’t the idiot certain historians have made him out to be: the “‘one striking feature about Grant’s [written] orders,’” as another general would observe later, was that no “‘matter how hurriedly he may write them in the field, no one ever had the slightest doubt as to their meaning, or even has to read them over a second time to understand them.’” Rather than being unintelligent, Grant had a particular way of thinking: as one historian has observed, “Grant regard[ed] his plans as tests,” so that Grant would “have already considered other options if something doesn’t work out.” Grant had a certain philosophy, a method of both thinking and doing things—which he more or less thought of as the same thing. But Grant did not invent that method of thinking. It was already old when a certain Roman senator conceived of a single sentence that, more or less, captured Grant’s philosophy—a sentence that, in turn, referred to a certain village near the Adriatic coast.

The road to that village is, however, a long one; even now we are just more than halfway there. The next step taken upon it was by a man named Quintus Fabius Maximus Verrucosus—another late bloomer, much like Grant. According to Plutarch, whose Parallel Lives sought to compare the biographies of famous Greeks and Romans, as a child Fabius was known for his “slowness in speaking, his long labour and pains in learning, his deliberation in entering into the sports of other children, [and] his easy submission to everybody, as if he had no will of his own,” traits that led many to “esteem him insensible and stupid.” Yet, as he was educated he learned to make his public speeches—required of young aristocratic Romans—without “much of popular ornament, nor empty artifice,” and instead with a “great weight of sense.” And also like Grant, who in the last year of the war faced a brilliant opposing general in Robert E. Lee, Fabius would eventually face an ingenious military leader who desired nothing more than to meet his adversary in battle—where that astute mind could destroy the Roman army in a single day and so, possibly, win the freedom of his nation.

That adversary was Hannibal Barca, the man who had marched his army, including his African war elephants, across the Alps into Italy. Hannibal was a Carthaginian: Carthage, a Phoenician-founded city on the North African coast, had already fought one massive war with Rome (the First Punic War) and had now, through Hannibal’s invasion, embarked on a second. Carthage was about as rich and powerful as Rome itself, so by invading, Hannibal posed a mortal threat to the Italians—not least because Hannibal had quite a reputation as a general already. Hence Fabius, who by this time had himself been selected to oppose the invader, “deemed it best not to meet in the field a general whose army had been tried in many encounters, and whose object was a battle,” and instead attempted to “let the force and vigour of Hannibal waste away and expire, like a flame, for want of fuel,” as Plutarch put the point. Instead of attempting to meet Hannibal in a single battle, where the African might out-general him, Fabius attempted to wear him—an invader far from his home base—down.

For some time things continued like this: Hannibal ranged about Italy, attempting to provoke Fabius into battle, while the Roman followed meekly at a distance; according to his enemies, as if he were Hannibal’s servant. Meanwhile, according to Plutarch, Hannibal himself sought to encourage that idea: burning the countryside around Rome, the Carthaginian made sure to post armed guards around Fabius’ estates in order to suggest that the Roman was in his pay. Eventually, these stratagems had their effect, and after a further series of misadventures, Fabius retired from command—just the event Hannibal awaited.

The man who became commander after Fabius was Varro, and it was he who led the Romans to the small village near the Adriatic coast. What happened near that village more than 2000 years ago might be summed up by an image that might be familiar to viewers of the television show Game of Thrones:

[Image: a battle scene from Game of Thrones]

On the television show, the chaotic mass in the middle is the tiny army of the character Jon Snow, whereas the orderly lines about the perimeter are the much vaster army of Ramsay Bolton. But in historical reality, the force in the center being surrounded by the opposing force was actually the larger of the two—the Roman army. It was the smaller of the two armies, the Carthaginian one, that stood at the periphery. Yet, somehow, the outcome was more or less the same: the mass of soldiers on the outside of that circle destroyed the force of soldiers on the inside, despite being outnumbered by them—a fact so surprising that not only is it still remembered, but it was also the subject of not one but two remarks that are still remembered today.

The first of these is a remark made just before the battle itself—a remark that came in reply to the comment of one of Hannibal’s lieutenants, an officer named Gisgo, on the disparity in size between the two armies. The intent of Gisgo’s remark was, it would seem, something to the effect of, “you’re sure this is going to work, right?” To which Hannibal replied: “another thing that has escaped your notice, Gisgo, is even more amazing—that although there are so many of them, there is not one among them called Gisgo.” That is to say, Gisgo is a unique individual, and so the numbers do not matter … etc., etc. We can all fill in the arguments from there: the power of the individual, the singular force of human creativity, and so on. In the case of the incident outside Cannae, those platitudes happened to be true—Hannibal really was a kind of tactical genius. But he also happened not to be facing Fabius that day.

Fabius himself was not the sort of person who could sum up his thought in a pithy (and trite) remark, but I think that the germ of his idea was distilled some centuries after the battle by another Roman senator. “Did all the Romans who fell at Cannae”—the ancient name for the village now known as Canne—“have the same horoscope?” asked Marcus Cicero, in a book entitled De Divinatione. The comment is meant as a deflationary pinprick, designed to explode the pretensions of the followers of Hannibal—a point revealed by a subsequent sentence: “Was there ever a day when countless numbers were not born?” The comment’s point, in other words, is much the same one Cicero made in another of his works, when he tells a story about the atheistic philosopher Diagoras. Reproaching his atheism, a worshipper directed Diagoras to the many painted tablets in praise of the gods at the local temple—tablets produced by storm survivors who had taken a vow to have such a tablet painted while enveloped by the sea’s power. Diagoras replied, according to Cicero, that this was merely so “because there are no pictures anywhere of those who have been shipwrecked.” In other words: check your premises, sportsfans: what you think may be the result of “creativity,” or some other malarky, may simply be due to the actions of chance—in the case of Hannibal, the fact that he happened not to be fighting Fabius.

Or, more specifically, to a statistical concept called the Law of Large Numbers. First explicitly described by the mathematician Jacob Bernoulli in 1713, this is the law that holds—in Bernoulli’s words—that “it is not enough to take one or another observation for […] reasoning about an event, but that a large number of them are needed.” In a crude way, this law is what critics of Grant refer to when they accuse him of being a “butcher”: that he simply applied the larger numbers of men and matériel available to the Union side to the war effort. It’s also what the enemies of the man who ought to have been on the field at Cannae—but wasn’t—said about him: that Fabius fought what military strategists call a “war of attrition” rather than a “war of maneuver.” At that time, and since, many have turned their noses up at such methods: in ancient times, they were thought to be ignoble, unworthy—which was why Varro insisted on rejecting what he might have called an “old man strategy” and went on the attack that August day. Yet, they were precisely the means by which, two millennia apart, two very similar men saved their countries from very similar threats.
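Bernoulli’s law is easy enough to watch in action. The small simulation below assumes nothing more than a fair coin (an assumption of the illustration, not of anything in the essay); it shows how the observed frequency of heads wanders when the number of observations is small and settles toward the true value of 0.5 only as the sample grows:

```python
# A toy demonstration of the Law of Large Numbers: as the number of coin
# flips grows, the observed frequency of heads approaches the true
# probability of 0.5. Illustrative only.
import random

random.seed(42)  # fixed seed so the run is repeatable

for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: observed frequency of heads = {heads / n:.3f}")
```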

Today, of course, very many people on the American “Left” say that what they call “scientific” and “mathematical” thought is the enemy. On the steps of the University of California’s Sproul Hall, more than fifty years ago, the Free Speech Movement’s Mario Savio denounced “the operation of the machine”; some years prior to that, the German Marxist Theodor Adorno and his collaborator Max Horkheimer had condemned the spread of such thought as, more or less, the pre-condition necessary for the Holocaust: “To the Enlightenment,” the two sociologists wrote, “that which does not reduce to numbers, and ultimately to the one, becomes illusion.” According to Bruce Robbins of Columbia University, “the critique of Enlightenment rationality is what English departments were founded on,” while it’s also been observed that, since the 1960s, “language, symbolism, text, and meaning came to be seen as the theoretical foundation for the humanities.” But as I have attempted to show, the notions conceived of by these writers as belonging to a particular part of the Eurasian landmass at a particular moment of history may not be so particular after all.

Leaving those large-scale considerations aside, however, returns us to the discussion concerning promotions in the U.S. military—where the assertions of the three career officers apparently cannot be allowed to go unchallenged. A reply to the three career officers’ article from a Parameters editorial board member, predictably enough, takes them to task for not recognizing that “there are multiple kinds of intelligence,” and instead suggesting that there is “only one particular type of intelligence”—you know, just the same smear used by Adorno and Horkheimer. The author of that article, Anna Simons (a professor at the U.S. Naval Postgraduate School), further intimates that the three officers do not possess “a healthy respect for variation”—i.e., “diversity.” Which, finally, brings us to the point of all this: what is really happening within the military is that, in order to promote what is called “diversity,” standards have to be amended in such a fashion as not only to include women and minorities, but also dumb people.

In other words, the social cost of what is known as “inclusiveness” is simultaneously a general “dumbing-down” of the military: promoting women and minorities also means rewarding not-intelligent people—and, because statistically speaking there simply are more dumb people than not, that also means suppressing smart people who are like Grant, or Fabius. It never appears to occur to anyone that, more or less, talking about “variation” and the like is what the enemies of Grant—or, further back, the enemies of Fabius—said also. But, one supposes, that’s just how it goes in the United States today: neither Grant nor Fabius was called to service until their countrymen had been scared pretty badly. It may be, in other words, that the American military will continue to suppress people with high cognitive abilities within its ranks—apparently, 9/11 and its consequences were not enough like the battle fought near the tiny Italian village to change American views on these matters. Statistically speaking, after all, 9/11 only killed 0.001% of the U.S. population, whereas Cannae killed perhaps a third of the members of the Roman Senate. That, in turn, raises the central question: If 9/11 was not enough to convince Americans that something isn’t right, well—

What will?

 

Size Matters

That men would die was a matter of necessity; which men would die, though, was a matter of circumstance, and Yossarian was willing to be the victim of anything but circumstance.
—Joseph Heller. Catch-22.
I do not pretend to understand the moral universe; the arc is a long one, my eye reaches but little ways; I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. And from what I see I am sure it bends towards justice.
Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.”

 

[Image: The Casino at Monte Carlo]

 

 

Once, wrote the baseball statistician Bill James, there was “a time when Americans” were such “an honest, trusting people” that they actually had “an unhealthy faith in the validity of statistical evidence”—but by the time James wrote in 1985, things had gone so far the other way that “the intellectually lazy [had] adopted the position that so long as something was stated as a statistic it was probably false.” Today, in no small part because of James’ work, that is likely no longer as true as it once was, but nevertheless the news has not spread to many portions of academia: as University of Virginia historian Sophia Rosenfeld remarked in 2012, in many departments it’s still fairly common to hear it asserted—for example—that all “universal notions are actually forms of ideology,” and that “there is no such thing as universal common sense.” Usually such assertions are followed by a claim for their political utility—but in reality widespread ignorance of statistical effects is what allowed Donald Trump to be elected, because although the media spent much of the presidential campaign focused on questions like the size of Donald Trump’s … hands, the size that actually mattered in determining the election was a statistical concept called sample size.

First described by the mathematician Jacob Bernoulli in his 1713 book, Ars Conjectandi, sample size is the idea that “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” Admittedly, it might not appear like much of an observation: as Bernoulli himself acknowledged, even “the most stupid person, all by himself and without any preliminary instruction,” knows that “the more such observations are taken into account, the less is the danger of straying from the goal.” But Bernoulli’s remark is the very basis of science: as an article in the journal Nature put the point in 2013, “a study with low statistical power”—that is, few observations—“has a reduced chance of detecting a true effect.” Sample sizes need to be large enough to be able to eliminate chance as a possible factor.

If that isn’t known, it’s possible to go seriously astray: consider an example drawn from the work of Israeli psychologists Amos Tversky (MacArthur “genius” grant winner) and (Nobel Prize-winning) Daniel Kahneman—a study “of two toys infants will prefer.” Let’s say that in the course of research our investigator finds that, of “the first five infants studied, four have shown a preference for the same toy.” To most psychologists, the two say, this would be enough for the researcher to conclude that she’s on to something—but in fact, the two write, a “quick computation” shows that “the probability of a result as extreme as the one obtained” being due simply to chance “is as high as 3/8.” The scientist might be inclined to think, in other words, that she has learned something—but in fact her result has a 37.5 percent chance of being due to nothing at all.
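For anyone who wants to check that figure, here is a sketch of the sort of “quick computation” at issue. It assumes only that each infant picks between the two toys at random (an assumption of this illustration, not a claim about the original study); it reproduces the 3/8, and it shows how fast that number shrinks once the sample gets bigger:

```python
# Probability that at least k of n infants happen to agree on the same toy
# when each choice is a fair coin flip (p = 0.5). With n = 5 and k = 4 this
# reproduces the 3/8 (= 0.375) that Tversky and Kahneman mention.
from math import comb

def prob_at_least_k_agree(n, k, p=0.5):
    # "Agreement" can happen in either direction: k or more pick toy A,
    # or k or more pick toy B (the two events are disjoint when k > n / 2).
    at_least_k_a = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    at_least_k_b = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, n - k + 1))
    return at_least_k_a + at_least_k_b

print(prob_at_least_k_agree(5, 4))    # 0.375 -- i.e., 3/8, purely by chance
print(prob_at_least_k_agree(20, 16))  # ~0.012 -- the same 80% agreement among 20 infants
```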

Yet when we turn from science to politics, what we find is that an American presidential election is like a study that draws grand conclusions from five babies. Instead of being one big sample—as a direct popular national election would be—presidential elections are broken up into fifty state-level elections: the Electoral College system. What that means is that American presidential elections maximize the role of chance, not minimize it.

The laws of statistics, in other words, predict that chance will play a large role in presidential elections—and as it happens, Tim Meko, Denise Lu and Lazaro Gamio reported for The Washington Post three days after the election that “Trump won the presidency with razor-thin margins in swing states.” “This election was effectively decided,” the trio went on to say, “by 107,000 people”—in an election in which more than 120 million votes were cast, that means the election was decided by less than a tenth of one percent of the total votes. Trump won Pennsylvania by less than 70,000 votes of nearly 6 million, Wisconsin by less than 30,000 of just less than three million, and finally Michigan by less than 11,000 out of 4.5 million: the first two by just more than one percent of the total vote each—and Michigan by a whopping .2 percent! Just to give you an idea of how insignificant these numbers are by comparison with the total vote cast, according to the Michigan Department of Transportation it’s possible that a thousand people in the five largest counties were involved in car crashes—which isn’t even to mention people who just decided to stay home because they couldn’t find a babysitter.
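Using the article’s own rounded figures (not official canvass totals), the back-of-the-envelope arithmetic looks like this:

```python
# Margins as a share of votes cast, using the rounded numbers quoted above.
margins = {
    "decisive voters overall": (107_000, 120_000_000),
    "Pennsylvania":            (70_000,    6_000_000),
    "Wisconsin":               (30_000,    3_000_000),
    "Michigan":                (11_000,    4_500_000),
}

for name, (margin, total) in margins.items():
    print(f"{name}: {margin / total:.2%} of votes cast")
# "decisive voters overall" comes to about 0.09% -- less than a tenth of one percent.
```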

Trump owes his election, in short, to a system that is vulnerable to chance because it is constructed to turn a large sample (the total number of American voters) into small samples (the fifty states). Science tells us that small sample sizes increase the risk of random chance playing a role; American presidential elections use a smaller sample size than they could; and, like several other presidential elections, the 2016 election did not go as predicted. Donald Trump could, in other words, be called “His Accidency” with even greater justice than John Tyler—the first vice-president to be promoted due to the death of his boss in office—was. Yet, why isn’t that point being made more publicly?

According to John Cassidy of The New Yorker, it’s because Americans haven’t “been schooled in how to think in probabilistic terms.” But just why that’s true—and he’s essentially making the same point Bill James did in 1985, though more delicately—is, I think, highly damaging to many of Clinton’s biggest fans: the answer is, because they’ve made it that way. It’s the disciplines where many of Clinton’s most vocal supporters make their home, in other words, that are most directly opposed to the type of probabilistic thinking that’s required to see the flaws in the Electoral College system.

As Stanford literary scholar Franco Moretti once observed, the “United States is the country of close reading”: the disciplines dealing with matters of politics, history, and the law within the American system have, in fact, more or less been explicitly constructed to prevent importing knowledge of the laws of chance into them. Law schools, for example, use what’s called the “case method,” in which a single case is used to stand in for an entire body of law: a point indicated by the first textbook to use this method, Christopher Langdell’s A Selection of Cases on the Law of Contracts. Other disciplines, such as history, are similar: as Emory University’s Mark Bauerlein has written, many such disciplines depend for their very livelihood upon “affirming that an incisive reading of a single text or event is sufficient to illustrate a theoretical or historical generality.” In other words, it’s the very basis of the humanities to reject the concept of sample size.

What’s particularly disturbing about this point is that, as Joe Pinsker documented in The Atlantic last year, the humanities attract a wealthier student pool than other disciplines—which is to say that the humanities tend to be populated by students and faculty with a direct interest in maintaining obscurity around the interaction between the laws of chance and the Electoral College. That doesn’t mean that there’s a connection between the architecture of presidential elections and the fact that—as Geoffrey Harpham, former president and director of the National Humanities Center, has observed—“the modern concept of the humanities” (that is, as a set of disciplines distinct from the sciences) “is truly native only to the United States, where the term acquired a meaning and a peculiar cultural force that it does not have elsewhere.” But it does perhaps explain just why many in the national media have been silent regarding that design in the month after the election.

Still, as many in the humanities like to say, it is possible to think that the current American university and political structure is “socially constructed,” or in other words could be constructed differently. The American division between the sciences and the humanities is not the only way to organize knowledge: as the editors of the massive volumes of The Literary and Cultural Reception of Darwin in Europe pointed out in 2014, “one has to bear in mind that the opposition of natural sciences … and humanities … does not apply to the nineteenth century.” If the opposition that we today find so omnipresent did not exist then, it might not be necessary now. Hence, if the choice of the American people is between whether they ought to get a real say in the affairs of government (and there’s very good reason to think they don’t), or whether a bunch of rich yahoos spend time in their early twenties getting drunk, reading The Great Gatsby, and talking about their terrible childhoods … well, I know which side I’m on. But perhaps more significantly, although I would not expect that it happens tomorrow, still, given the laws of sample size and the prospect of eternity, I know how I’d bet.

Or, as another sharp operator who’d read his Bernoulli once put the point:

“The arc of the moral universe is long, but it bends towards justice.”

 

Baal

Just as ancient Greek and Roman propagandists insisted, the Carthaginians did kill their own infant children, burying them with sacrificed animals and ritual inscriptions in special cemeteries to give thanks for favours from the gods, according to a new study.
—The Guardian, 21 January 2014.

 

Just after the last body fell, at three seconds after 9:40 on the morning of 14 December, the debate began: it was about, as it always is, whether Americans ought to follow sensible rules about guns—or whether guns ought to be easier to obtain than, say, the right to pull fish out of the nearby Housatonic River. A lot of words have been written about the Sandy Hook killings since the day that Adam Lanza—the last body to fall—killed 20 children and six adults at the elementary school he once attended, but few of them have examined the culpability of some of the very last people one might expect with regard to the killings: the denizens of the nation’s universities. After all, it’s difficult to accuse people who themselves are largely in favor of gun control of aiding and abetting the National Rifle Association—Pew Research reported, in 2011, that more than half of people with more than a college degree favored gun control. And yet, over the past several generations a doctrine has gained ground that, I think, has not only allowed academics to absolve themselves of engaging in debate on the subject of gun control, but has actively harmed the possibility of accomplishing it.

Having said that, of course, it is important to acknowledge that virtually all academics—even those who consider themselves “conservative” politically—are in favor of gun control: when, for example, Texas recently passed a law legalizing the carrying of guns on college campuses, Daniel S. Hamermesh, a University of Texas emeritus professor of economics (not exactly a discipline known for its radicalism), resigned his position, citing a fear for his own and his students’ safety. That’s likely not accidental, because not only do many academics oppose guns in their capacities as citizens, but academics have a special concern when it comes to guns: as Firmin DeBrabander, a professor of philosophy at the Maryland Institute College of Art, argued in the pages of Inside Higher Ed last year, against laws similar to Texas’, “guns stand opposed” to the “pedagogical goals of the classroom,” because while in the classroom “individuals learn to talk to people of different backgrounds and perspectives,” guns “announce, and transmit, suspicion and hostility.” If anyone has a particular interest in controlling arms, in other words, it’s academics, since their work is particularly designed to foster what DeBrabander calls “open and transformative exchange” that may air “ideas [that] are offensive.” So to think that academics may in fact be an obstacle towards achieving sensible policies regarding guns might appear ridiculous on the surface.

Yet there’s actually good reason to think that academic liberals bear some responsibility for the United States’ inability to regulate guns like every other industrialized—I nearly said, “civilized”—nation on earth. That’s because changing gun laws would require specific demands for action, and as political science professor Adolph Reed, Jr. of the University of Pennsylvania put the point not long ago in Harper’s, these days the “left has no particular place it wants to go.” That is, to many on campus and off, making specific demands of the political sphere is itself a kind of concession—or in other words, as journalist Thomas Frank remarked a few years ago about the Occupy Wall Street movement, today’s academic left teaches that “demands [are] a fetish object of literal-minded media types who stupidly crave hierarchy and chains of command.” Demanding changes to gun laws is, after all, a specific demand, and to make specific demands is, from this sophisticated perspective, a kind of “sell out.”

Still, how did the idea of making specific demands become a derided form of politics? After all, the labor movement (the eight-hour day), the suffragette movement (women’s right to vote) and the civil rights movement (an end to Jim Crow) all made specific demands. How then has American politics arrived at the diffuse and essentially inarticulable argument of the Occupy movement—a movement within which, Elizabeth Jacobs claimed in a report for the Brookings Institution while the camp in Zuccotti Park still existed, “the lack of demands is a point of pride”? I’d suggest that one possible way the trick was turned was through a 1967 article written by one Robert Bellah, of Harvard: an article that described American politics, and its political system, as a “civil religion.” By describing American politics in religious rather than secular terms, Bellah opened the way towards what some have termed the “non-politics” of Occupy and other social movements—and, incidentally, helped allow children like Adam Lanza’s victims to die.

In “Civil Religion in America,” Bellah—who received his bachelor’s from Harvard in 1950, and then taught at Harvard until moving to the University of California at Berkeley in 1967, where he continued until the end of his illustrious career—argued that “few have realized that there actually exists alongside of and rather clearly differentiated from the churches an elaborate and well-institutionalized civil religion in America.” This “national cult,” as Bellah terms it, has its own holidays: Thanksgiving Day, Bellah says, “serves to integrate the family into the civil religion,” while “Memorial Day has acted to integrate the local community into the national cult.” Bellah also remarks that the “public school system serves as a particularly important context for the cultic celebration of the civil rituals” (a remark that, incidentally, perhaps has played no little role in the attacks on public education over the past several decades). Bellah further argues that various speeches by American presidents like Abraham Lincoln and John F. Kennedy are examples of this “civil religion” in action: he spends particular time with Lincoln’s Gettysburg Address, which, he notes, the poet Robert Lowell observed is filled with Christian imagery and constitutes “a symbolic and sacramental act.” In saying so, Bellah is merely following a longstanding tradition regarding both Lincoln and the Gettysburg Address—a tradition, however, that does not have the political valence that Bellah, or his literal spiritual followers, might think it does.

“Some think, to this day,” wrote Garry Wills of Northwestern University in his magisterial Lincoln at Gettysburg: The Words that Remade America, “that Lincoln did not really have arguments for union, just a kind of mystical attachment to it.” It’s a tradition that Wills says “was the charge of Southerners” against Lincoln at the time: after the war, Wills notes, Alexander Stephens—the only vice president the Confederate States ever had—argued that the “Union, with him [Lincoln], in sentiment rose to the sublimity of a religious mysticism.” Still, it’s also true that others felt similarly: Wills points out that the poet Walt Whitman wrote that “the only thing like passion or infatuation” in Lincoln “was the passion for the Union of these states.” Nevertheless, it’s a dispute that might have fallen by the historical wayside if it weren’t for the work of literary critic Edmund Wilson, who called his essay on Lincoln (collected in a relatively famous book, Patriotic Gore: Studies in the Literature of the American Civil War) “The Union as Religious Mysticism.” That book, published in 1962, seems to have at least influenced Lowell—the two were, if not friends, at least part of the same New York City literary scene—and, through Lowell, it seems plausible that it influenced Bellah as well.

Even if there was no direct route from Wilson to Bellah, however, it seems indisputable that the notion—taken from Southerners—concerning the religious nature of Lincoln’s arguments for the American Union became widely transmitted through American culture. Richard Nixon’s speechwriter, William Safire—later a longtime columnist for the New York Times—was familiar with Wilson’s ideas: as Mark Neely observed in his The Fate of Liberty: Abraham Lincoln and Civil Liberties, on two occasions in Safire’s novel Freedom, “characters comment on the curiously ‘mystical’ nature of Lincoln’s attachment to the Union.” In 1964, the theologian Reinhold Niebuhr published an essay entitled “The Religion of Abraham Lincoln,” while in 1963 William J. Wolfe of the Episcopal Theological School of Cambridge, Massachusetts claimed that “Lincoln is one of the greatest theologians in America,” in the sense “of seeing the hand of God intimately in the affairs of nations.” Sometime in the early 1960s and afterwards, in other words, the idea took root among some literary intellectuals that the United States was a religious society—not one based on an entirely secular philosophy.

At least when it comes to Lincoln, at any rate, there’s good reason to doubt this story: far from being a religious person, Lincoln has often been described as non-religious or even an atheist. His longtime friend Jesse Fell—so close to Lincoln that it was he who first suggested what became the famous Lincoln-Douglas debates—for instance once remarked that Lincoln “held opinions utterly at variance with what are usually taught in the church,” and Lincoln’s law partner William Herndon—who was an early fan of Charles Darwin’s—said that the president also was “a warm advocate of the new doctrine.” Being committed to the theory of evolution—if Lincoln was—doesn’t mean, of course, that the president was therefore anti-religious, but it does mean that the notion of Lincoln as religious mystic has some accounting to do: if he was, it apparently was in no very simple way.

Still, as mentioned, the view of Lincoln as a kind of prophet did achieve at least some success within American letters—but, as Wills argues in Lincoln at Gettysburg, that success has in turn obscured what Lincoln really argued concerning the structure of American politics. As Wills remarks, for instance, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster, and few if any have considered Webster a mystic.” Webster’s views, in turn, descend from a line of American thought that goes back to the Revolution itself—though its most significant moment came at the Constitutional Convention of 1787.

Most especially, that line descends from one James Wilson: a Scottish emigrant, a delegate to that convention, and later one of the first justices of the Supreme Court of the United States. If Lincoln got his notions of the Union from Webster, then Webster got his from Supreme Court Justice Joseph Story: as Wills notes, Theodore Parker, the Boston abolitionist minister, once remarked that “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” Story, for his part, got his notions from Wilson: as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later justice), Wilson was “a source for Joseph Story’s constitutional nationalism.” And Wilson’s arguments concerning the constitution—which he had a strong hand in making—were hardly religious.

At the constitutional convention, one of the most difficult topics confronting the delegates was the issue of representation: one of the motivations for the convention itself, after all, was the fact that under the previous terms of government, the Articles of Confederation, each state, rather than each member of the Continental Congress, possessed a vote. Wilson had already, in 1768, attacked the problem of representation—a problem that would become one of the foremost causes of the Revolution itself: American colonists were supposed, by British law, to be fully as much British subjects as a Londoner or a Mancunian, yet they had no representation in Parliament. “Is British freedom,” Wilson therefore asked in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament, “denominated from the soil, or from the people, of Britain?” That question was very much the predecessor of the one Wilson would ask at the convention: “For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land.

Wilson also made an argument that Lincoln would later echo: he drew attention to the disparities of population between the several states. At the time of the convention, Pennsylvania—just as it is today—was a much more populous state than New Jersey, a difference that made no difference under the Articles of Confederation, under which every state had the same number of votes: one. “Are not the citizens of Pennsylvania,” Wilson therefore asked the convention, “equal to those of New Jersey? Does it require 150 of the former to balance 50 of the latter?” Lincoln echoed the same argument when, in order to illustrate the differences between free states and slave states—in October of 1854, at Peoria, in the speech that marked his political comeback—he noted that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

The point of attack for both men, in other words, was precisely the same: the matter of representation in terms of what would later be called a “one man, one vote” standard. It’s an argument that hardly appears “mystical” in nature: since the matter turns upon ratios of numbers to each other, it seems more apposite to describe the point of view adopted here as “scientific”—if it weren’t for the fact that even “scientific” seems too dramatic a word for a matter that appears to be far more elemental.
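
To see just how elemental, it may help to spell out the arithmetic Lincoln was performing at Peoria. What follows is only an illustrative check—a minimal Python sketch using the figures quoted above; the variable names are mine, not anything drawn from Lincoln or Wills:

```python
# Lincoln's Peoria figures: Maine and South Carolina had identical
# representation (6 representatives, 8 electors, 2 senators each)
# despite very different white populations.
maine_white_pop = 581_813
south_carolina_white_pop = 274_567

# With equal representation, the weight of each person's vote is
# inversely proportional to the state's population.
ratio = maine_white_pop / south_carolina_white_pop
print(f"Each white South Carolinian counts for about {ratio:.2f} Mainers")

# Lincoln's own phrasing: Maine has "twice as many ... and 32,679 over."
print(maine_white_pop - 2 * south_carolina_white_pop)  # prints 32679
```

Run as written, the sketch reproduces Lincoln’s “32,679 over” exactly—which is part of what makes the Peoria argument look less like mysticism than like bookkeeping.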

Were Lincoln or Wilson alive today, then, it seems the first point they might make about the gun control debate is that it is a matter on which Congress is greatly at variance with public opinion: as Carl Bialik reported for FiveThirtyEight this past January, whenever Americans are polled, “at least 70 percent of Americans [say] they favor background checks,” and an October 2015 poll by CBS News and the New York Times “found that 92 percent of Americans—including 87 percent of Republicans—favor background checks for all gun buyers.” Yet, as virtually all Americans are aware, it has become essentially impossible to pass any sort of sensible legislation through Congress: a fact dramatized this spring by a “sit-down strike” staged by members of Congress. What Lincoln and Wilson might further say is that the trouble can’t be solved by such a “religious” approach: what they presumably would recommend instead is changing a system that inadequately represents the people. That isn’t the answer on offer from academics and others on the American left, however. Which is to say that, soon enough, there will be another Adam Lanza to bewail—another of the sacrifices, one presumes, that the American left demands Americans make to what one can only call their god.

High Anxiety

Now for our mountain sport …

Cymbeline 
Act III, Scene 3

Entrances to Wade Hampton Golf Club and High Hampton Inn and Country Club, North Carolina

Walt Whitman once said, as anyone who saw Bull Durham knows, that baseball would function to draw America together after the Civil War: the game, the poet said, would “repair our losses and be a blessing to us.” Many Americans have not lost this belief in the redemptive power of sports: as recently as 2011 John Boehner, then-Speaker of the House of Representatives, played a much-ballyhooed round of golf with President Barack Obama—and, along with many other outlets, Golf Digest presented the event as presaging a new era of American unity: the “pair can’t possibly spend four hours keeping score, conceding putts, complimenting drives, filling divots, retrieving pond balls, foraging for Pro V1s and springing for Kit Kats off the snack cart,” argued the magazine, “without finding greater common ground.” Golf would thus be the antidote to what the late Columbia University history professor Richard Hofstadter, in 1964, called the “paranoid style”: the “heated exaggeration, suspiciousness, and conspiratorial fantasy” that Hofstadter found to be a common theme in American politics then and whose significance has seemingly only grown since. Yet while the optimism about the “golf summit” seemed warranted—golf is, after all, a game that cannot really be played without trust in one’s opponents; it’s only on the assumption that everyone is honest that the game can work at all—as everyone knows by now, the summit failed: Boehner was, more or less, forced out of office by those members of his party who, Boehner said, got “bent out of shape” over his golf with the president. While golf might, in other words, furnish a kind of theoretical model for harmonious bipartisanship, in practice it has proved largely useless for preventing political polarization—a result that anyone who has traveled Highway 107 in western North Carolina might have foreseen. Up there, among the Great Smoky Mountains, there sits a counterexample to the dream of political consensus: the Wade Hampton Golf Club.

Admittedly, the claim that a single golf club could be evidence strong enough to smack down the flights of fancy of a Columbia University professor like Hofstadter—and a Columbia University alumnus like Barack Obama—might appear a bit much: there’s a seeming disconnect between the weightiness of the subject matter and the evidential value of an individual golf club. What could the existence of the Wade Hampton Golf Club add to (or subtract from) Hofstadter’s assertions about the dominance of this “paranoid style,” examples of which range from the anti-Communist speeches of Senator Joseph McCarthy in the 1950s to the anti-Catholic, “nativist” movements of the 1830s and 1840s to the Populist denunciations of Wall Street during the 1890s? Yet the existence of the Wade Hampton Golf Club does tell against one of the pieces of evidence Hofstadter adduces for his argument—and in doing so unravels not only the rest of Hofstadter’s spell, the way a kitten unravels a ball of string, but also the fantasy of “bipartisanship.”

One of the examples of “paranoia” Hofstadter cited, after all, was the belief held by “certain spokesmen of abolitionism who regarded the United States as being in the grip of a slaveholders’ conspiracy”—a view that, Hofstadter implied, was not much different from the contemporary belief that fluoridation was a Soviet plot. But a growing number of historians now believe that Hofstadter was wrong about those abolitionists: according to historian Leonard Richards of the University of Massachusetts, for instance, there’s a great deal of evidence for “the notion that a slaveholding oligarchy ran the country—and ran it for their own advantage” in the years prior to the Civil War. The point is more than an academic one: if it’s all just a matter of belief, then the idea of bipartisanship makes a certain kind of sense—all that matters is whether those we elect can “get along.” But if not, then what matters is building the correct institutions, rather than electing the right people.

Again, that seems a rather larger question than the existence of a golf club in North Carolina is capable of answering. The existence of the Wade Hampton Golf Club, however, tends to reinforce Richards’ view on the strength of its name alone: the very biography of the man the golf club was named for, Wade Hampton III, lends credence to Richards’ notion about the real existence of a slave-owning, oligarchical conspiracy, because Hampton was not only a Confederate general during the Civil War but also the possessor (according to the website of the Civil War Trust, which works to preserve Civil War battlefields) of “one of the largest collections of slaves in the South.” Hampton’s career, in other words, demonstrates just how entwined slaveowners were with the “cause” of the South—and if secession was largely the result of a slave-owning conspiracy during the winter of 1860, it becomes a great deal easier to think that said conspiracy did not spring up fully grown only then.

Descended from an obscenely wealthy family whose properties stretched from near Charleston in South Carolina’s Lowcountry to Millwood Plantation near the state capital of Columbia and all the way to the family’s summer resort of “High Hampton” in the Smokies—upon the site of which the golf club is now built—Wade Hampton was intimately involved with the Southern cause: not only was he one of the richest men in the South, but at the beginning of the war he organized and financed a military unit (“Hampton’s Legion”) that would, among other exploits, help win the first big battle of the war, near the stream of Bull Run. By the end of the war Hampton had become, along with Nathan Bedford Forrest, one of only two men without prior military experience to achieve the rank of lieutenant general. In that sense Hampton was exceptional—only eighteen other Confederate officers achieved that rank—but in another he was representative: as recent historical work shows, much of the Confederate army had direct links to slavery.

As historian Joseph T. Glatthaar has put the point in his General Lee’s Army: From Victory to Collapse, “more than one in every four volunteers” for the Confederate army in the first year of the war “lived with parents who were slaveholders”—as compared with the general population of the South, in which merely one in every twenty white persons owned slaves. If non-family members are included, or if economic ties—renting land from or selling crops to slaveholders before the war—are counted, then “the vast majority of the volunteers of 1861 had a direct connection to slavery.” And if the slaveowners could create an army capable of holding off the power of the United States for four years, it seems plausible they might have organized themselves well before the shooting started—which is to say that Hofstadter’s insinuations about the relative sanity of “certain” abolitionists (among them, Abraham Lincoln) don’t have the value they may once have had.

After all, historians have determined that the abolitionists were certainly right to suspect the motives of the slaveowners. “By itself,” wrote Roger Ransom of the University of California not long ago, “the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war … [in] the fall of 1860.” “On the eve of the war,” as another historian noted in the New York Times, “cotton comprised almost 60 percent of America’s exports,” and the slaves themselves, as yet another historian—quoted by Ta-Nehisi Coates in The Atlantic—has observed, were “the largest single financial asset in the entire U.S. economy, worth more than all manufacturing and railroads combined.” Collectively, American slaves were worth 3.5 billion dollars—at a time when the entire budget of the federal government was less than eighty million dollars. American slaveowners could, quite literally, have funded the entire U.S. government roughly forty-three times over.

Slaveowners thus had, as a prosecutor might put it, both means and motive to revolt against the American government; what’s really odd about the matter is that Americans have ever questioned it. The slaveowners themselves fully admitted the point at the time: in South Carolina’s “Declaration of the Immediate Causes which Induce and Justify the Secession of South Carolina from the Federal Union,” for instance, the state openly lamented the election of a president “whose opinions and purposes are hostile to slavery.” And not just South Carolina: “Seven Southern states had seceded in 1861,” as the dean of American Civil War historians, James McPherson, has observed, “because they feared the incoming Lincoln administration’s designs on slavery.” When those states first met together at Montgomery, Alabama, in February of 1861, it took them only four days to promulgate what the New York Times called “a provisional constitution that explicitly recognized racial slavery”; in a March 1861 speech Alexander Stephens, who would become the vice president of the Confederate States of America, argued that slavery was the “cornerstone” of the new government. Slavery was, as virtually anyone who has seriously studied the matter has concluded, the cause motivating the Southern armies.

If so—if, that is, the slaveowners created an army so powerful that it could hold off the United States for four years simply in order to protect their financial interest in slave-owning—then it seems plausible they might have joined together prior to the beginning of outright hostilities. Further, if there was a “conspiracy” to begin the Civil War, then the claim that there was one in the years and decades before the war becomes that much more believable. And if that possibility is tenable, then so is the claim by Richards and other historians—who are merely following a notion Abraham Lincoln himself endorsed in the 1850s—that the American constitution formed “a structural impediment to the full expression of Northern voting power” (as one reviewer has put it)—and that, therefore, the answer to political problems is not “bipartisanship,” or in other words the election of friendlier politicians, but rather structural reform.

Such, at least, is the lesson anyone might draw from the career of Wade Hampton III, Confederate general—in light of which it’s suggestive that the Wade Hampton Golf Club is not some relic of the nineteenth century. Planning for the club began, according to the club’s website, in 1982; the golf course was not completed until 1987, when it was named “Best New Private Course” by Golf Digest. More suggestive still is the fact that under the original bylaws, “in order to be a member of the club, you [had] to own property or a house bordering the club”—rules that resulted, as one golfer has noted, in a club of “120 charter and founding members, all from below the Mason-Dixon Line: seven from Augusta, Georgia and the remainder from Florida, Alabama, and North Carolina.” “Such folks,” as Bradley Klein once wrote in Golfweek, “would have learned in elementary school that Wade Hampton III, 1818-1902, who owned the land on which the club now sits, was a prominent Confederate general.” That is, in order to become a member of the Wade Hampton Golf Club you probably knew a great deal about the history of Wade Hampton III—and you were pretty much okay with that.

The existence of the Wade Hampton Golf Club does not, to be sure, demonstrate a continuity between the slaveowners of the Old South and the present membership of the club that bears Hampton’s name. It is, however, suggestive: if it is true, as many Civil War historians now say, that prior to 1860 there was a conspiracy to maintain an oligarchic form of government, what are we to make of a present in which—as former Secretary of Labor Robert Reich recently observed—“the richest one-hundredth of one percent of Americans now hold over 11 percent of the nation’s total wealth,” a proportion greater than at any time since before 1929 and the start of the Great Depression? Surely, one can only surmise, the answer is easier to find than a mountain hideaway far above the Appalachian clouds, and requires no poetic vision to see.

For Miracles Are Ceased

Turn him to any cause of policy,
The Gordian knot of it he will unloose …
Henry V

 

For connoisseurs of Schadenfreude, one of the most entertaining diversions of the past half-century or so has been the turf war fought out in the universities between the sciences and the humanities now that, as novelist R. Scott Bakker has written, “at long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities.” A lot of what’s happened in the humanities since the 1960s—the “canon wars,” the popularization of Continental philosophy, the establishment of various sorts of “studies”—could be described as a disciplinary battle with the sciences, and not the “political” war it is often advertised as; under that description, the vaunted outreach of the humanities to previously underserved populations stops looking quite so noble and starts looking more like the efforts, a century ago, of robber-baron industrialists to employ minority scabs against striking workers. The comparison is not meant flippantly; it suggests that the history of the academy since the 1960s looks less like the glorious march toward inclusion its proponents sometimes portray—and more like the opening moves of an ideological war designed to lay the foundation for the impoverishment of all America.

According to University of Illinois at Chicago professor of literature Walter Benn Michaels, after all, today’s humanistic academy has largely become the “human resources department of neoliberalism.” Michaels’ work suggests, in fact, that the professoriate’s “real” purpose in promoting the interests of women and minorities has not been the sheer justice of the cause, but rather the preservation of its own antiquated and possibly ridiculous methods of “scholarship.” That bargain, however—if there was one—may be said to have had unintended consequences: among them, the reality that some CEOs now enjoy pay thousands of times that of the average worker.

Correlation is not causation, of course, but it does seem inarguable that, as former Secretary of Labor Robert Reich wrote recently in Salon, Americans have forgotten the central historical lesson of the twentieth century: that a nation’s health (and not just its economic health) depends on consumer demand. As Reich wrote, contrary to those who argue in favor of some form of “trickle down” economics, “America’s real job creators are consumers, whose rising wages generate jobs and growth.” When workers get raises, they have “enough purchasing power to buy what expanding businesses [have] to offer.” In short (pardon, Secretary Reich), “broadly shared prosperity isn’t just compatible with a healthy economy that benefits everyone—it’s essential to it.” But Americans have, it seems, forgotten that lesson: as many, many observers have demonstrated, American wages have largely been stagnant since the early 1970s.

Still, that doesn’t mean the academy is entirely to blame: for the most part, it’s only because of the work of academics that the fact of falling wages is known with any certainty—though it’s also fair to say that the evidence can be gathered by a passing acquaintance with reality. Yet it’s also true that, as New York University professor of physics Alan Sokal averred some two decades ago, much of the work of the humanities since the 1960s has been devoted to undermining, in the name of one liberatory vision or another, the “stodgy” belief “that there exists an external world, [and] that there exist objective truths about it.” Such work has arguably had a version of the political effect often bombastically claimed for it—undoubtedly, there are many more people from previously underrepresented groups in positions of authority throughout American society today than there were before.

Yet, as the Marxist scholars often derided by their “postmodernist” successors knew—and as those successors appear to ignore—every advance has its cost, and interpreted dialectically the turn of the humanities away from scientific naturalism has two possible motives. The first, as mentioned, is the possibility that territory once the exclusive province of the humanities has been invaded by the sciences, and that much of the behavior of professors of the humanities can be explained by fear that “the traditional humanities are about to be systematically debunked” by what Bakker calls “the tremendous, scientifically-mediated transformations to come.” In the wake of the “ongoing biomechanical renovation of the human,” Bakker says, it has become a serious question whether “the idiom of the humanities can retain cognitive legitimacy.” If Bakker’s suggestion is correct, then the flight of the humanities from the sciences can be interpreted as something akin to the resistance of old-fashioned surgeons to the practice of washing their hands.

There is, however, another possible interpretation: one that accounts for the similarity between the statistical evidence of rising inequality since the 1970s gathered by many studies and the evidence for the existence of global warming—a comparison not made lightly. In both cases, there’s an argument to be made that many of the anti-naturalistic doctrines developed in the academy have conspired with the mainstream media’s tendency to ignore reality to prevent, rather than aid, political responses—a conspiracy that is itself only encouraged by the current constitutional structure of the American state, which according to some academic historians (of the non-“postmodern” sort) was originally designed with precisely the intention of both ignoring, and preventing action upon, another kind of overwhelming but studiously ignored reality.

In early March 1860, Abraham Lincoln—not yet a presidential candidate—addressed an audience at New Haven, Connecticut; “the question of Slavery,” he said during that speech, “is the question, the all absorbing topic of the day.” Yet it was also the case, Lincoln observed, that while in private this was the single topic of many conversations, in public it was taboo: according to slavery’s defenders, Lincoln said, opponents of slavery “must not call it wrong in the Free States, because it is not there, and we must not call it wrong in the Slave States because it is there,” while at the same time it should not be called “wrong in politics because that is bringing morality into politics,” and it should not be called “wrong in the pulpit because that is bringing politics into religion.” In this way, even as slavery’s defenders could admit that slavery was wrong, they could deny that there was any “single place … where this wrong thing can properly be called wrong!” Thus, despite the fact that slavery was of towering importance, it was also to be disregarded.

There were, of course, entirely naturalistic reasons for that premeditated silence: as documented by scholars like Leonard Richards and Garry Wills, the structure of American government itself is due to a bargain between the free and the slave states—a bargain that essentially ceded control of the federal machinery to the South in exchange for their cooperation. The evidence is compelling: “between Washington’s election and the Compromise of 1850,” as Richards has noted for example, “slaveholders controlled the presidency for fifty years, the Speaker [of the House]’s chair for forty-one years, and the chairmanship of House Ways and Means [the committee that controls the federal budget] for forty-two years.” By controlling such key offices, according to these scholars, slaveowners could prevent the federal government from taking any action detrimental to their interests.

The continuing existence, even beyond the end of slavery, of structures originally designed to ensure Southern control—among them the Supreme Court and the Senate, institutions well known to constitutional scholars as offerings to society’s “aristocratic” interests, even if the precise nature of that interest is never explicitly identified—may in turn explain, naturalistically, the relative failure of naturalistic, scientific thinking in the humanities over the past several decades, even as the public need for such thinking has only increased. Such, at least, is what might be termed the “positive” interpretation of humanistic antagonism toward science: not so much an interested resistance to progress as a principled reaction to a continuing drag not just on the political interests of Americans, but perhaps even on the progress of knowledge and truth itself.

What’s perhaps odd, to be sure, is that almost no one from the humanities has dared to make this case publicly—excluding only a handful of historians and law professors, most of them far from the scholarly centers of excitement. On the contrary, jobs in the humanities generally go to people who urge, like European lecturer in art history and sociology Anselm Joppe, some version of a “radical separation from the world of politics and its institutions of representation and delegation,” and ridicule those who “still flock to the ballot box”—proposals often connected, as Joppe’s are, to a ban on television and an opposition to both genetically modified food and infrastructure investment. Still, even when academics have made their case in a responsible way—as Richards and Wills and others have—none has connected that struggle to the larger issues of the humanities generally. Of course, to make such connections—to make such a case—would require such professors to climb down from the ivory tower, the very perch that enables them to do the sort of thinking I have attempted to present here, and the attempt would inevitably involve innumerable, and perhaps insuperable, difficulties. Yet without such attempts it’s difficult to see how either the sciences or the humanities can be preserved—to say nothing of the continuing existence of the United States.

Still, there is one “positive” possibility: if no one makes the attempt, the opportunities for Schadenfreude will become nearly limitless.

Several And A Single Place

 

What’s the matter,
That in these several places of the city
You cry against the noble senate?
Coriolanus 

 

The explanation, says labor lawyer Thomas Geoghegan, possesses amazing properties: he can, the one-time congressional candidate says, “use it to explain everything … because it seems to work on any issue.” But before trotting out what that explanation is, let me select an issue that might appear difficult to explain: gun control, and more specifically just why, as Christopher Ingraham of the Washington Post wrote in July, “it’s never the right time to discuss gun control.” “In recent years,” as Ingraham says, “politicians and commentators from across the political spectrum have responded to mass shootings with an invocation of the phrase ‘now is not the time,’ or a close variant.” That inability even to discuss gun control is a tremendously depressing fact, at least insofar as you regard gun deaths as a needless waste of lives—until you realize that we Americans have been here before. And that, just maybe, demonstrates that Thomas Geoghegan has a point.

Over a century and a half ago, Americans were facing another issue that, in the words of one commentator, “must not be discussed at all.” It was so grave an issue, in fact, that very many Americans found “fault with those who denounce it”—a position this commentator found odd: “You say that you think [it] is wrong,” he observed, “but you denounce all attempts to restrain it.” That’s a pretty strange position, because who thinks something is wrong, yet is “not willing to deal with [it] as a wrong?” What other subject could be admitted to be a wrong, yet should not be called “wrong in politics because that is bringing morality into politics,” and conversely should not be called “wrong in the pulpit because that is bringing politics into religion”? To sum up, this commentator said, “there is no single place, according to you, where this wrong thing can properly be called wrong!”

The place where this was said was New Haven, Connecticut; the time, March of 1860; the speaker, a failed senatorial candidate seeking the presidential nomination of a still-new political party. His name was Abraham Lincoln.

He was talking about slavery.

*                                            *                                        *

To many historians these days, much about American history can be explained by the fact that, as historian Leonard Richards of the University of Massachusetts put it in his 2000 book, The Slave Power: The Free North and Southern Domination, 1780-1860, so “long as there was an equal number of slave and free states”—which was more or less official American policy until the Civil War—“the South needed just one Northern vote to be an effective majority in the Senate.” That meant that controlling “the Senate, therefore, was child’s play for southern leaders,” and so “time and again a bill threatening the South [i.e., slavery above all else] made its way through the House only to be blocked in the Senate.” It’s a stunningly obvious point in retrospect—at least to this reader—but I’d wager that few, if any, Americans have really thought through its consequences.

Geoghegan, for example, has noted—as he put it in 1998’s The Secret Lives of Citizens: Pursuing the Promise of American Life—that even today the Senate makes it exceedingly difficult to pass legislation: at present, he wrote, a mere “two-fifths of the Senate, or forty-one senators, can block any bill.” That is, it takes at least sixty senatorial votes to overcome the threat known as the “filibuster”: ending debate requires a supermajority. The filibuster, however, is not the only anti-majoritarian feature of the Senate, which is also equipped with such quaint customs as the “secret hold” and the quorum call, each of which can be used to delay a bill’s hearing—and so buy time to squelch potential legislation. Yet these radically disproportionate senatorial powers merely mask the basic inequality of representation at the heart of the Senate as an institution.

As political scientists Frances Lee and Bruce Oppenheimer point out in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, the Senate is, because it makes small states the equal of large ones, “the most malapportioned legislature in the democratic world.” As Geoghegan has put the point, “the Senate depart[s] too much from one person, one vote,” because (as of the late 1990s) “90 percent of the population base as represented in the Senate could vote yes, and the bill would still lose.” Although Geoghegan wrote that nearly two decades ago, that is still largely true today: in 2013, Dylan Matthews of The Washington Post observed that while the “smallest 20 states amount to 11.27 percent of the U.S. population,” their senators “can successfully filibuster [i.e., block] legislation.” Thus, although the Senate is merely one antidemocratic feature of the U.S. Constitution, it’s an especially egregious one that, by itself, largely prevented a serious discussion of slavery in the years before the Civil War—and today prevents the serious discussion of gun control.

The headline of John Bresnahan’s 2013 article in Politico about the response to the Sandy Hook massacre, for example, was “Gun control hits brick wall in Senate.” Bresnahan quoted Nevadan Harry Reid, the Senate Majority Leader at the time, as saying that “the overwhelming number of Senate Republicans—and that is a gross understatement—are ignoring the voices of 90 percent of the American people.” The final vote was 54-46: a majority of the Senate was in favor of controls, but because the pro-control senators did not have a supermajority, the measure failed. In short, the vote was a near-perfect illustration of how the Senate can kill a measure that 90 percent of Americans favor.
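
The arithmetic behind that failure is simple enough to make explicit. Here is a minimal sketch—in Python, using the sixty-vote cloture threshold and the 54-46 vote described above; the function name and structure are mine, purely for illustration:

```python
SENATE_SEATS = 100
CLOTURE_THRESHOLD = 60  # votes needed to end debate over a filibuster

def survives_filibuster(votes_in_favor: int) -> bool:
    """A filibustered bill needs a supermajority, not a simple majority."""
    return votes_in_favor >= CLOTURE_THRESHOLD

# The 2013 background-check vote described above: 54 in favor, 46 against.
print(survives_filibuster(54))               # False: a clear majority still loses

# The flip side of the sixty-vote rule: how many senators suffice to block?
print(SENATE_SEATS - CLOTURE_THRESHOLD + 1)  # 41 — Geoghegan's "two-fifths of the Senate, or forty-one senators"
```

Nothing in the sketch depends on the merits of any particular bill: it merely restates the rule under which a forty-one-senator minority—drawn, as Lee, Oppenheimer, and Matthews note, potentially from states containing a small fraction of the population—can stop legislation that a majority of senators, and ninety percent of the public, supports.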

And you know what? Whatever you think about gun control as an issue, if 90 percent of Americans want something, and what prevents them from getting it is not just a silly rule—but the very rule that protected slavery—well then, as Abraham Lincoln might tell us, that’s a problem.

It’s a problem because, far from the Senate being—as George Washington supposedly said to Thomas Jefferson—the saucer that cools off politics, it’s actually a pressure cooker that exacerbates issues rather than working them out. Imagine, say, that the South had not had the Senate to protect its “peculiar institution” in the years leading to the Civil War: immigration to the North would gradually have turned the tide in Congress, which might have led to a series of small pieces of legislation that, eventually, would have abolished slavery.

Perhaps that would not have been a good thing: Ta-Nehisi Coates, of The Atlantic, has written that every time he thinks of the 600,000-plus deaths that occurred as a result of the Civil War, he feels “positively fucking giddy.” That may sound horrible to some, of course, but there is something to the notion of “redemptive violence” when it comes to that war; Coates, for instance, cites the contemporary remarks of Private Thomas Strother, United States Colored Troops, in the Christian Recorder, the nineteenth-century newspaper of the African Methodist Episcopal Church:

To suppose that slavery, the accursed thing, could be abolished peacefully and laid aside innocently, after having plundered cradles, separated husbands and wives, parents and children; and after having starved to death, worked to death, whipped to death, run to death, burned to death, lied to death, kicked and cuffed to death, and grieved to death; and worst of all, after having made prostitutes of a majority of the best women of a whole nation of people … would be the greatest ignorance under the sun.

“Were I not the descendant of slaves, if I did not owe the invention of my modern self to a bloody war,” Coates continues, “perhaps I’d write differently.” Maybe in some cosmic sense Coates is wrong, and violence is always wrong—but I don’t think I’m in a position to judge, particularly since I, as in part a descendant of Irish men and women in America, am aware that the Irish themselves may have codified that sort of “blood sacrifice theory” in the General Post Office of Dublin during Easter Week of 1916.

Whatever you think of that, there is certainly something to the idea that, because slaves were the single biggest asset in the entire United States in 1860, there was little chance the South would have agreed to end slavery without a fight. As historian Steven Deyle has noted in his Carry Me Back: The Domestic Slave Trade in American Life, the value of American slaves in 1860 was “equal to about seven times the total value of all currency in circulation in the country, three times the value of the entire livestock population, twelve times the value of the entire U.S. cotton crop and forty-eight times the total expenditure of the federal government”—certainly more than enough value to fight a war over. But had slavery not had, in effect, government protection during those antebellum years, it’s questionable whether slaves would ever have become such valuable commodities in the first place.

Far from “cooling” things off, in other words, it’s entirely likely that the U.S. Senate, and the other anti-majoritarian features of the U.S. Constitution, actually act to inflame controversy. By ensuring that one side does not need to come to the bargaining table, such oddities merely postpone—they do not prevent—the day of reckoning. They build up fuel, ensuring that when the day finally arrives, it is all the more terrible. Or, to put it in the words of an old American song: these American constitutional idiosyncrasies merely trample “out the vintage where the grapes of wrath are stored.”

That truth, it seems, marches on.