Water to the Sea

Yet lives our pilot still. Is’t meet that he
Should leave the helm and like a fearful lad
With tearful eyes add water to the sea
And give more strength to that which hath too much,
Whiles, in his moan, the ship splits on the rock,
Which industry and courage might have saved?
Henry VI, Part III. Act V, scene iv.

“Those who make many species are the ‘splitters,’ and those who make few are the ‘lumpers,’” remarked Charles Darwin in an 1857 letter to botanist J.D. Hooker; the title of University of Chicago professor Kenneth Warren’s most recent book, What Was African-American Literature?, announces him as a “lumper.” The chief argument of Warren’s book is that the claim that something called “African-American literature” is “different from the rest of American lit[erature]”—a claim that many of Warren’s colleagues, perhaps no one more so than Harvard’s Henry Louis Gates, Jr., have based their careers upon—is, in reality, a claim that, historically, many writers with large amounts of melanin would have rejected. Take the fact, Warren says, that “literary societies … among free blacks in the antebellum north were not workshops for the production of a distinct black literature but salons for producing works of literary distinction”: these were not people looking to split off—or secede—from the state of literature. Warren’s work is, thereby, aimed against those who, like so many Lears, have divided and subdivided literature by attaching so many different adjectives to literature’s noun—an attack Warren says he makes because “a literature insisting that the problem of the 21st century remains the problem of the color line paradoxically obscures the economic and political problems facing many black Americans, unless those problems can be attributed to racial discrimination.” What Warren sees, I think, is that far too much attention is being paid to the adjective in “African-American literature”—though what he may not see is that the real issue concerns the noun.

The noun being, of course, the word “literature”: Warren’s account worries the “African-American” part of “African-American literature” instead of the “literature” part. Specifically, in Warren’s view what links the adjective to the noun—or “what made African American literature a literature”—was the regime of “constitutionally-sanctioned state-enforced segregation” known as Jim Crow, which made “black literary achievement … count, almost automatically, as an effort on behalf of the ‘race’ as a whole.” Without that institutional circumstance there are writers who are black—but no “black writers.” To Warren, it’s the distinct social structure of Jim Crow, hardening in the 1890s, that creates “black literature,” instead of merely examples of writing produced by people whose skin is darker-colored than that of other writers.

Warren’s argument thereby takes the familiar form of the typical “social construction” argument, as outlined by Ian Hacking in his book, The Social Construction of What? Such arguments begin, Hacking says, when “X is taken for granted,” and “appears to be inevitable”; in the present moment, African-American literature can certainly be said—for some people—to appear to be inevitable: Harvard’s Gates, for instance, has long claimed that “calls for the creation of a [specifically “black”] tradition occurred long before the Jim Crow era.” But it’s just at such moments, Hacking says, that someone will observe that in fact the said X is “the contingent product of the social world.” Which is just what Warren does.

Although those who argue for an ahistorical vision of African-American literature would claim that all black writers were attempting to produce a specifically black literature, Warren points out that the historical evidence points, merely, to an attempt to produce literature: i.e., a member of the noun class without a modifying adjective. At least, until the advent of the Jim Crow system at the end of the nineteenth century: it’s only after that time, Warren says, that “literary work by black writers came to be discussed in terms of how well it served (or failed to serve) as an instrument in the fight against Jim Crow.” In the familiar terms of the hallowed social constructionism argument, Warren is claiming that the adjective is added to the noun later, as a result of specific social forces.

Warren’s is an argument, of course, with a number of detractors, and not simply Gates. In The Postethnic Literary: Reading Paratexts and Transpositions Around 2000, Florian Sedlmeier charged Warren with reducing “African American identity to a legal policy category,” and furthermore claimed that Warren’s account “relegates the functions of authorship and literature to the economic subsystem.” It’s a familiar version of the “reductionist” charge often leveled by “postmoderns” against Marxists—an accusation that is tiresome at best these days.

More creatively, in a symposium of responses to Warren in the Los Angeles Review of Books, Erica Edwards attempted to one-up Warren by saying that Warren fails to recognize that perhaps the true “invention” of African-American literature was not during the Jim Crow era of legalized segregation, but instead “with the post-Jim Crow creation of black literature classrooms.” Whereas Gates, in short, wishes to locate the origin of African-American literature in Africa prior to (or concurrently with) slavery itself, and Warren instead locates it in the 1890s during the invention of Jim Crow, Edwards wants to locate it in the 1970s, when African-American professors began to construct their own classes and syllabi. Edwards’ argument, at the least, has a certain empirical force: the term “African-American” itself is a product of the civil rights movement and afterwards; that is, the era of the end of Jim Crow, not its beginnings.

Edwards’ argument thereby leads nearly seamlessly into Aldon Lynn Nielsen’s objections, published as part of the same symposium. Nielsen begins by observing that Warren’s claims are not particularly new: Thomas Jefferson, he notes, “held that while Phillis Wheatley [the eighteenth-century black poet] wrote poems, she did not write literature,” while George Schuyler, the black novelist, wrote for The Nation in 1926 that “there was not and never had been an African American literature”—for the perhaps-surprising reason that there was no such thing as an African-American. Schuyler instead felt that the “Negro”—his term—“was no more than a ‘lampblacked Anglo-Saxon.’” In that sense, Schuyler’s argument was even more committed to the notion of “social construction” than Warren is: whereas Warren questions the timelessness of the category of a particular sort of literature, Schuyler questioned the existence of a particular category of person. Warren, that is, merely questions why “African-American literature” should be distinguished—or split from—“American literature”; Schuyler—an even more incorrigible lumper than Warren—questioned why “African-Americans” ought to be distinguished from “Americans.”

Yet, if even the term “African-American,” considered as a noun itself rather than as the adjective it is in the phrase “African-American literature,” can be destabilized, then surely that ought to raise the question, for these sharp-minded intellectuals, of the status of the noun “literature.” For it is precisely the catechism of many today that it is the “liberating” features of literature—that is, exactly, literature’s supposed capacity to produce the sort of argument delineated and catalogued by Hacking, the sort of argument in which it is argued that “X need not have existed”—that will produce, and has produced, whatever “social progress” we currently observe about the world.

That is the idea that “social progress” is the product of an increasing awareness of Nietzsche’s description of language as a “mobile army of metaphors, metonyms, and anthropomorphisms”—or, to use the late American philosopher Richard Rorty’s terminology, to recognize that “social progress” is a matter of redescription by what he called, following literary critic Harold Bloom, “strong poets.” Some version of such a theory is held by what Rorty, following University of Chicago professor Allan Bloom, called “‘the Nietzscheanized left’”: one that takes seriously the late Belgian literature professor Paul de Man’s odd suggestion that “‘one can approach … the problems of politics only on the basis of critical-linguistic analysis,’” or the late French historian Michel Foucault’s insistence that he would not propose a positive program, because “‘to imagine another system is to extend our participation in the present system.’” But such sentiments have hardly been limited to European scholars.

In America, for instance, former Duke University professor of literature Jane Tompkins echoed Foucault’s position in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History.” There, Tompkins approvingly cited novelist Harriet Beecher Stowe’s belief, as expressed in Uncle Tom’s Cabin, that the “political and economic measures that constitute effective action for us, she regards as superficial, mere extensions of the worldly policies that produced the slave system in the first place.” In the view of people like Tompkins, apparently, “political measures” will somehow sprout out of the ground of their own accord—or at least, by means of the transformative redescriptive powers of “literature.”

Yet if literature is simply a matter of redescription then it must be possible to redescribe “literature” itself: which in this paragraph will be in terms of a growing scientific “literature” (!) that, since the 1930s, has examined the differences between animals and human beings in terms of what are known as “probability guessing experiment[s].” In the classic example of this research—as cited in a 2000 paper called “The Left Hemisphere’s Role in Hypothesis Formation”—if a light is flashed with a ratio of 70% red light to 30% green, animals will tend always to guess red, while human beings will attempt to anticipate which light will be flashed next: in other words, animals will “tend to maximize or always choose the option that has occurred most frequently in the past”—whereas human beings will “tend to match the frequency of previous occurrences in their guesses.” Animals will simply always guess the same answer, while human beings will attempt to divine the pattern: that is, they will make their guesses based on the assumption that the previous series of flashes was meaningful. If the previous three flashes were “red, red, green,” a human being will base the next guess on that sequence, whereas an animal will simply guess red every time, no matter what came before.

That in turn implies that, since in this specific example there is in fact no pattern and merely a probabilistic ratio of green to red, animals will always outperform human beings in this sort of test: as the authors of the paper write, “choosing the most frequent option all of the time, yields more correct guesses than matching as long as p ≠ 0.5.” Or, as they also note, “if the red light occurs with a frequency of 70% and a green light occurs with a frequency of 30%, overall accuracy will be highest if the subject predicts red all the time.” It’s true, in other words, that attempting to match a pattern will result in being correct 100% of the time—if the pattern is successfully matched. That result has, arguably, consequences for the liberationist claims of social constructionist arguments in general, and of literature in particular.
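
To make the arithmetic concrete, here is a minimal simulation of the two strategies, assuming the 70/30 split used above. (The code and its names are my own sketch, not anything taken from the paper itself.)

```python
import random

def run_trials(p_red=0.7, n=100_000, seed=0):
    """Simulate the guessing experiment: on each trial a light flashes
    red with probability p_red and green otherwise."""
    rng = random.Random(seed)
    maximize_correct = 0  # the "animal" strategy: always guess red
    match_correct = 0     # the "human" strategy: guess red at the rate red occurs
    for _ in range(n):
        flash = "red" if rng.random() < p_red else "green"
        # Maximizing: always choose the most frequent option.
        if flash == "red":
            maximize_correct += 1
        # Matching: guess red with probability p_red, green otherwise.
        guess = "red" if rng.random() < p_red else "green"
        if guess == flash:
            match_correct += 1
    return maximize_correct / n, match_correct / n

if __name__ == "__main__":
    p = 0.7
    maximizing, matching = run_trials(p)
    # Expected accuracy: maximizing is about p = 0.70;
    # matching is about p**2 + (1 - p)**2 = 0.49 + 0.09 = 0.58.
    print(f"maximizing: {maximizing:.3f}")
    print(f"matching:   {matching:.3f}")
```

Run it and maximizing settles at roughly 70% correct while frequency matching settles at roughly 58%, which is the gap the authors are pointing to; it disappears only when the two lights are equally likely.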

I trust that, without much in the way of detail—which I think could be elucidated at tiresome length—it can be stipulated that, more or less, the entire liberatory project of “literature” described above, as held by such luminaries as Foucault or Tompkins, can be said to be an attempt at elaborating rules for “pattern recognition.” Hence, it’s possible to understand how training in literature might be helpful towards fighting discrimination, which after all is obviously about constructing patterns: racists are not racist towards merely 65% of all black people, nor are they racist only 37% of the time. Racism, like other forms of discrimination, is not probabilistic but deterministic: it consists of rules used by discriminators that are directed at everyone within the class. (It’s true that the phenomenon of “passing” raises questions about classes, but the whole point of “passing” is that individual discriminators are unaware of the class’ “true” boundaries.) So it’s easy to see how pattern-recognition might be a useful skill with which to combat racial or other forms of discrimination.

Matching a pattern, however, suffers from one difficulty: it requires the existence of a pattern to be matched. Yet, in the example discussed in “The Left Hemisphere’s Role in Hypothesis Formation”—as in everything influenced by probability—there is no pattern: there is merely a larger chance of the light being red rather than green in each instance. Attempting to match a pattern in a situation ruled instead by probability is not only unhelpful, but positively harmful: because there is no pattern, “guessing” simply cannot perform as well as maintaining the same choice every time. (Which in this case would at least result in being correct 70% of the time.) In probabilistic situations, in other words, where there is merely a certain probability of a given result rather than a certain pattern, both empirical evidence and mathematics itself demonstrate that the animal procedure of always guessing the same answer will be more successful than the human attempt at pattern recognition.

Hence, it follows that although training in recognizing patterns—the basis of schooling in literature, it might be said—might be valuable in combatting racism, such training will not be helpful in facing other sorts of problems: as the scientific literature demonstrates, pattern recognition as a strategy only works if there is a pattern. That in turn means that literary training can only be useful in a deterministic, and not probabilistic, world—and therefore the project of “literature,” so-called, can only be “liberatory” in the sense meant by its partisans if the obstacles from which human beings need liberation are pattern-based. And that’s a conclusion, it seems to me, that is questionable at best.

Take, for example, the matter of American health care. Unlike all other industrialized nations, the United States does not have a single, government-run healthcare system, despite the fact that, as Malcolm Gladwell has noted, the American labor movement knew as early as the 1940s that “the safest and most efficient way to provide insurance against ill health or old age [is] to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, insurance works best by lumping, not splitting. The reason may be the same as the reason that, as the authors of “The Left Hemisphere’s Role in Hypothesis Formation” point out, it can be said that “humans choose a less optimal strategy than rats” when it comes to probabilistic situations. Contrary to the theories of those in the humanities, in other words, the reality is that human beings in general—and Americans when it comes to health care—appear to have a basic unfamiliarity with the facts of probability.

One sign of that ignorance is, after all, the growth of casino gambling in the United States even as health care remains a hodgepodge of differing systems—despite the fact that both insurance and casinos run on precisely the same principle. As statistician and trader Nassim Taleb has pointed out, casinos “never (if they do things right) lose money”—so long as they are not run by Donald Trump—because they “simply do not let one gambler make a massive bet” and instead prefer “to have plenty of gamblers make a series of bets of limited size.” In other words, it is not possible for some high roller to bet a Las Vegas casino its own entire worth on a single hand of blackjack, or any other game; casinos simply limit the stakes to something small enough that the continued existence of the business is not at risk on any one particular event, and then make sure that there are enough bets being made to allow the laws of probability in every game (which are tilted toward the casino) to ensure the continued health of the business. Insurance, as Gladwell observed above, works precisely the same way: the more people paying premiums—and the more widely dispersed they are—the less likely it is that any one catastrophic event can wipe out the insurance fund. Both insurance and casinos are lumpers, not splitters: that, after all, is precisely why all other industrialized nations have put their health care systems on a national basis rather than maintaining the various subsystems that Americans—apparently inveterate splitters—still have.
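
Taleb’s point is, at bottom, the law of large numbers working for the house. A toy simulation (the numbers and the function below are invented for illustration, not drawn from Taleb or Gladwell) shows why a casino, or an insurer, wants many limited-size bets rather than one enormous one:

```python
import random

def house_survives(bankroll=1_000_000, edge=0.02, bet_size=1_000,
                   n_bets=10_000, seed=0):
    """Toy model: the house takes even-money bets that it wins with
    probability 0.5 + edge / 2, and is ruined if its bankroll hits zero."""
    rng = random.Random(seed)
    p_win = 0.5 + edge / 2
    for _ in range(n_bets):
        bankroll += bet_size if rng.random() < p_win else -bet_size
        if bankroll <= 0:  # ruin: the house cannot cover its losses
            return False
    return True

if __name__ == "__main__":
    trials = 1_000
    # Plenty of gamblers making bets of limited size.
    small = sum(house_survives(bet_size=1_000, n_bets=10_000, seed=s)
                for s in range(trials))
    # One high roller betting the house's entire worth on a single hand.
    large = sum(house_survives(bet_size=1_000_000, n_bets=1, seed=s)
                for s in range(trials))
    print(f"survival, many small bets: {small / trials:.1%}")
    print(f"survival, one massive bet: {large / trials:.1%}")
```

With ten thousand small bets the house’s modest edge makes ruin vanishingly rare; with a single bet for the whole bankroll, survival is close to a coin flip. That is the whole logic of spreading “the costs and risks of benefits over the biggest and most diverse group possible,” whether the institution doing the spreading is a casino or a national health service.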

Health care, of course, is but one of the many issues of American life that, although influenced by racial or other kinds of discrimination, ultimately have little to do with it: what matters about health care, in other words, is that too few Americans are getting it, not merely that too few African-Americans are. The same is true, for instance, about incarceration: although such works as Michelle Alexander’s The New Jim Crow have argued that the fantastically high rate of incarceration in the United States constitutes a new “racial caste system,” University of Pennsylvania professor of political science Marie Gottschalk has pointed out that “[e]ven if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” The problem with American prisons, in other words, is that there are too many Americans in them, not (just) too many African-Americans—or any other sort of American.

Viewing politics through a literary lens, in sum—as a matter of flashes of insight and redescription, instantiated by Wittgenstein’s duck-rabbit figure and so on—ultimately has costs: costs that have been witnessed again and again in recent American history, from the War on Drugs to the War on Terror. As Warren recognizes, viewing such issues as health care or prisons through a literary, or more specifically racial, lens is ultimately an attempt to fit a square peg through a round hole—or, perhaps even more appositely, to bring a knife to a gun fight. Warren, in short, may as well have cited UCLA philosophy professor Abraham Kaplan’s observation, sometimes called Kaplan’s Law of the Instrument: “Give a boy a hammer and everything he meets has to be pounded.” (Or, as Kaplan put the point more delicately, it ought not to be surprising “to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”) Much of the American “left,” in other words, views all problems as matters of redescription and so on—a belief not far from common American exhortations to “think positively” and the like. Certainly, America is far from the post-racial utopia some would like it to be. But curing the disease is not—contrary to the beliefs of many Americans today—the same as diagnosing it.

Like it—or lump it.


Best Intentions

L’enfer est plein de bonnes volontés ou désirs
(“Hell is full of good intentions or desires.”)
—St. Bernard of Clairvaux, c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for doing that asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. Despite the fact that, to many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations, the reason it could do so was that it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as (nearly) official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia argued his point throughout his career: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s point of view, among the most rarified heights of literary theorists: it was after all the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

There is, however, a simple method by which Bérubé, like Odysseus’ crew, could avoid hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, just because anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—is no reason to dismiss anti-intentionalism itself. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent from the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was in fact created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Since the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as the historian Garry Wills writes, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people,” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the constitution—and thus the reasons for fighting the war—ultimately derived from Wilson.


Not this James Wilson.

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

James Wilson (1742-1798)
This James Wilson.

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not inanimate objects like the “soil,” that requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law in 1790 (the very first such legal works in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author write. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” That wonder is then reinforced by the fact that, in this example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, asked us to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

Caterpillars

All scholars, lawyers, courtiers, gentlemen,
They call false caterpillars and intend their death.
2 Henry VI 


When Company A, 27th Armored Infantry Battalion, U.S. 9th Armored Division, reached the forested hills overlooking the Rhine in the early afternoon of 7 March, 1945, and found the Ludendorff Bridge still, improbably, standing, they may have been surprised to find that they had not only discovered the last passage beyond Hitler’s Westwall into the heart of Germany—but also stumbled into a controversy that is still, seventy years on, continuing. That controversy could be represented by an essay written some years ago by the Belgian political theorist Chantal Mouffe on the American philosopher Richard Rorty: the problem with Rorty’s work, Mouffe claimed, was that he believed that the “enemies of human happiness are greed, sloth, and hypocrisy, and no deep analysis is required to understand how they could be eliminated.” Such beliefs are capital charges in intellectual-land, where the stock-in-trade is precisely the kind of “deep analysis” that Rorty thought (at least according to Mouffe) unnecessary, so it’s little wonder that, for the most part, it’s Mouffe who’s had the better part of this argument—especially considering Rorty has been dead since 2007. Yet as the men of Company A might have told Mouffe—whose work is known, according to her Wikipedia article, for her “use of the work of Carl Schmitt” (a legal philosopher who joined the Nazi Party on 1 May 1933)—it’s actually Rorty’s work that explains just why they came to the German frontier; an account whose only significance lies in the fact that it may be the ascendance of Mouffe’s view over Rorty’s that explains such things as, for instance, why no one was arrested after the financial crisis of 2007-08.

That may, of course, sound like something of a stretch: what could the squalid affairs that nearly led to the crash of the world financial system have in common with such recondite matters as the dark duels conducted at academic conferences—or a lucky accident in the fog of war? But the link in fact is precisely at the Ludendorff, sometimes called “the Bridge at Remagen”—a bridge that might not have been standing for Company A to find had the Nazi state really been the complicated ideological product described by people like Mouffe, instead of the product of “ruthless gangsters, distinguishable only by their facial hair” (as Rorty, following Vladimir Nabokov, once described Lenin, Trotsky, and Stalin). That’s because, according to (relatively) recent historical work that unfortunately has not yet deeply penetrated the English-speaking world, in March 1945 the German generals who had led the Blitzkrieg in 1940 and ’41—and then headed the defense of Hitler’s criminal empire—were far more concerned with the routing numbers of their bank accounts than the routes into Germany.

As “the ring closed around Germany in February, March, and April 1945”—wrote Ohio State historian Norman Goda in 2003—“and as thousands of troops were being shot for desertion,” certain high-ranking officers, some of whom had been receiving extra “monthly payments” directly from the German treasury on orders of Hitler himself, and whose money had been “deposited into banks that were located in the immediate path of the enemy,” “quickly arranged to have their deposits shifted to accounts in what they hoped would be safer locales.” In other words, in the face of the Allied advance, Hitler’s generals—men like Heinz Guderian, who in 1943 was awarded “Deipenhof, an estate of 937 hectares (2,313 acres) worth RM [Reichsmark] 1.24 million” deep inside occupied Poland—were preoccupied with defending their money, not Germany.

Guderian—who led the tanks that broke the French lines at Sedan, the direct cause of the Fall of France in May 1940—was only one of many top-level military leaders who received secretive pay-offs even before the beginning of World War II: Walther von Brauchitsch, who was Guderian’s supervisor, had for example been getting—tax-free—double his salary since 1938, while Field Marshal Erhard Milch, who quit his prewar job of running Lufthansa to join the Luftwaffe, received a birthday “gift” from Hitler each year worth more than $100,000 U.S. These were just two of the many high military officers to receive such six-figure “birthday gifts,” or other payments, which Goda writes not only were “secret and dependent on behavior”—that is, on not telling anyone about the payments and on submission to Hitler’s will—but also “simply too substantial to have been viewed seriously as legitimate.” All of these characteristics, as any federal prosecutor will tell you, are hallmarks of corruption.

Such corruption, of course, was not limited to the military: the Nazis were, according to historian Jonathan Petropoulos, “not only the most notorious murderers in history but also the greatest thieves.” Or as historian Richard J. Evans has noted, “Hitler’s rule [was] based not just on dictatorship, but also on plunder, theft and looting,” beginning with the “systematic confiscation of Jewish assets, beginning almost immediately on the Nazi seizure of power in 1933.” That looting expanded once the war began; at the end of September 1939, for instance, Evans reports, the German government “decreed a blanket confiscation of Polish property.” Dutch historian Gerard Aalders has estimated that Nazi rule stole “the equivalent of 14 billion guilders in today’s money in Jewish-owned assets alone” from the Netherlands. In addition, Hitler and other Nazi leaders, like Hermann Göring, were also known for stealing priceless artworks in conquered nations (the subject of the recent film The Monuments Men). In the context of such thievery on such a grand scale, it hardly appears a stretch to think they might pay off the military men who made it all possible. After all, the Nazis had been doing the same for civilian leaders virtually since the moment they took over the state apparatus in 1933.

Yet, there is one difference between the military leaders of the Third Reich and American leaders today—a difference perhaps revealed by their response when confronted after the war with the evidence of their plunder. At the “High Command Trial” at Nuremberg in the winter of 1947-’48, Walther von Brauchitsch and his colleague Franz Halder—who together led the Heer into France in 1940—denied that they ever took payments, even after being confronted with clear evidence of just that. Milch, for instance, claimed that his “birthday present” was compensation for the loss of his Lufthansa job. All the other generals did the same: Goda notes that even Guderian, who was well-known for his Polish estate, “changed the dates and circumstances of the transfer in order to pretend that the estate was a legitimate retirement gift.” In short, they all denied it—which is interesting in light of the fact that, during the first Nuremberg trial, on 3 January 1946, a witness could casually admit to the murder of 90,000 people.

To admit receiving payments, in other words, was worse—to the generals—than admitting to setting Europe alight for essentially no reason. That it was so is revealed by the fact that the legal silence was matched by similar silences in postwar memoirs and so on, none of which (except Guderian’s, who as mentioned fudged some details in his) ever admitted taking money directly from the national till. That silence implies, in the first place, a conscious knowledge that these payments were simply too large to be legitimate. And that, in turn, implies a consciousness not merely of guilt, but also of shame—a concept that is simply incoherent without an understanding of what the act underlying the payments actually is. The silence of the generals, that is, implies that they had internalized a definition of corruption—unfortunately, however, a recent U.S. Supreme Court case, McDonnell v. United States, suggests that Americans (or at least the Supreme Court) have no such definition.

The facts of the case were that Robert McDonnell, then governor of Virginia, received $175,000 in benefits from the chief executive of a company called Star Scientific, presumably because Star Scientific not only wanted Virginia’s public universities to conduct research on its product, a “nutritional supplement” based on tobacco, but also felt that McDonnell could conjure the studies. The prosecution’s burden—according to Chief Justice John Roberts’ unanimous opinion—was then to show “that Governor McDonnell committed (or agreed to commit) an ‘official act’ in exchange for the loans and gifts.” At that point, then, the case turned on the definition of “official act.”

According to the federal bribery statute, an “official act” is

any decision or action on any question, matter, cause, suit, proceeding or controversy, which may at any time be pending, or which may by law be brought before any public official, in such official’s official capacity, or in such official’s place of trust or profit.

McDonnell, of course, held that the actions he admitted taking on Star Scientific’s behalf—including setting up meetings with other state officials, making phone calls, and hosting events—did not constitute an “official act” under the law. The federal prosecutors, to the contrary, held that they did.

To McDonnell, counting the acts he took on behalf of Star Scientific as “official acts” required a too-broad definition of the term: to him (or rather to his attorneys), the government’s definition made “‘virtually all of a public servant’s activities ‘official,’ no matter how minor or innocuous.’” The prosecutors argued that a broad definition of crooked acts is necessary to combat corruption; McDonnell argued that the broad definition threatens the ability of public officials to act. Ultimately, his attorneys said, the broad nature of the anti-corruption statute threatens constitutional government itself.

In the end the Court accepted that argument. In John Roberts’ words, the acts McDonnell committed could not be defined as anything “more specific and focused than a broad policy objective.” In other words, sure McDonnell got a bunch of stuff from a constituent, and then he did a bunch of things for that constituent, but the things that he did did not constitute anything more than simply doing his job—a familiar defense, to be sure, at Nuremberg.

The effective upshot of McDonnell, then, appears to be that the U.S. Supreme Court, at least, no longer has an adequate definition of corruption—which might appear to be a grandiose conclusion to hang on one court case, of course. But consider the response of Preet Bharara, former United States Attorney for the Southern District of New York, when he was asked by The New Yorker just why it was that his office did not prosecute anyone—anyone—in response to the financial meltdown of 2007-08. Sometimes, Bharara said in response, when “you see a building go up in flames, you have to wonder if there’s arson.” Sometimes, he continued, “it’s not arson, it’s an accident”—but sometimes “it is arson, and you can’t prove it.” Bharara’s comments suggested that the problem was an investigatory one: his detectives could not gather the right evidence. But McDonnell suggests that the problem may have been something else: a legal one, where the problem isn’t with the evidence but rather with the conceptual category required to use the evidence to prosecute a crime.

That something is going on is revealed by a report from Syracuse University’s Transactional Records Access Clearinghouse, or TRAC, which found in 2011 that Department of Justice prosecutions for financial crimes had been falling since the early 1990s—despite the fact that the economic crisis of 2007 and 2008 was driven by extremely questionable financial transactions. Other studies observe that the Reagan administration, though Ronald Reagan is generally not thought to be a crusader type, prosecuted more financial crimes than the Obama administration did: in 2010, the Obama administration deported 393,000 immigrants—and prosecuted zero bankers.

The question, of course, is why that is so—to which any number of answers have been proposed. One, however, is especially resisted by those at the upper reaches of academia who are in the position of educating future federal prosecutors: people who, like Mouffe, think that

Democratic action … does not require a theory of truth and notions like unconditionality and universal validity but rather a variety of practices and pragmatic moves aimed at persuading people to broaden their commitments to others, to build a more inclusive community.

“Liberal democratic principles,” Mouffe goes on to claim, “can only be defended in a contextualist manner, as being constitutive of our form of life, and we should not try to ground our commitment to them on something supposedly safer”—that “something safer” being, I suppose, anything like the account ledgers of the German treasury from 1933 to 1945, which revealed the extent of Nazi corruption after the war.

To suggest, however, that there is a connection between the linguistic practices of professors and the failures of prosecutors is, of course, to engage in just the same style of argumentation as those who insist, with Mouffe, that it is “the mobilization of passions and sentiments, the multiplication of practices, institutions and language games that provide the conditions of possibility for democratic subjects and democratic forms of willing” that will lead to “the creation of a democratic ethos.” Among such arguers is, for example, literary scholar Jane Tompkins, who once made a similar point by recommending, not “specific alterations in the current political and economic arrangements,” but instead “a change of heart.” But perhaps the rise of such a species of supposed “leftism” ought to be expected in an age characterized by vast economic inequality, which according to Nobel Prize-winning economist Joseph Stiglitz (a proud son of Gary, Indiana), “is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever.” The only question left, one supposes, is what else has been bought; the state of academia these days, it appears, suggests that academics can’t even see the Rhine, much less point the way to a bridge across.