Best Intentions

L’enfer est plein de bonnes volontés ou désirs (“Hell is full of good intentions and desires”)
—St. Bernard of Clairvaux, c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. To many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations; yet it could do so only because it originated among a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as the (nearly) official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia made the argument again and again: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s perspective, among the most rarefied heights of literary theorists: it was after all the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime-favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew, Bérubé has a simple method of not hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, just because anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—is no reason to dismiss the thought of anti-intentionalism. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent from the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was in fact created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Since the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as the historian Garry Wills writes in Lincoln at Gettysburg, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the constitution—and thus, the reasons for fighting the war—ultimately derived from Wilson.

 

[Image: Not this James Wilson.]

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

[Image: James Wilson (1742-1798). This James Wilson.]

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not from inanimate objects like the “soil,” then it requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law of 1790 (among the very first such works in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author write. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The wonder of the discovery is then reinforced by the fact that, in this example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless asked us, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

Baal

Just as ancient Greek and Roman propagandists insisted, the Carthaginians did kill their own infant children, burying them with sacrificed animals and ritual inscriptions in special cemeteries to give thanks for favours from the gods, according to a new study.
—The Guardian, 21 January 2014.

Just after the last body fell, at three seconds after 9:40 on the morning of 14 December, the debate began: it was about, as it always is, whether Americans ought to follow sensible rules about guns—or whether guns ought to be easier to obtain than, say, the right to pull fish out of the nearby Housatonic River. A lot of words have been written about the Sandy Hook killings since the day that Adam Lanza—the last body to fall—killed 20 children and six adults at the elementary school he once attended, but few of them have examined the culpability, with regard to the killings, of some of the very last people one might expect: the denizens of the nation’s universities. After all, it’s difficult to accuse people who themselves are largely in favor of gun control of aiding and abetting the National Rifle Association—Pew Research reported, in 2011, that more than half of people with more than a college degree favored gun control. And yet, over the past several generations a doctrine has gained ground that, I think, has not only allowed academics to absolve themselves of engaging in debate on the subject of gun control, but has actively harmed the possibility of accomplishing it.

Having said that, of course, it is important to acknowledge that virtually all academics—even those who consider themselves “conservative” politically—are in favor of gun control: when, for example, Texas recently passed a law legalizing the carrying of guns on college campuses, Daniel S. Hamermesh, a University of Texas emeritus professor of economics (not exactly a discipline known for its radicalism), resigned his position, citing a fear for his own and his students’ safety. That’s not likely accidental, because not only do many academics oppose guns in their capacity as citizens, but they also have a special concern when it comes to guns: as Firmin DeBrabander, a professor of philosophy at the Maryland Institute College of Art, argued in the pages of Inside Higher Ed last year against laws similar to Texas’, “guns stand opposed” to the “pedagogical goals of the classroom” because while in the classroom “individuals learn to talk to people of different backgrounds and perspectives,” guns “announce, and transmit, suspicion and hostility.” If anyone has a particular interest in controlling arms, in other words, it’s academics, since their work is particularly designed to foster what DeBrabander calls “open and transformative exchange” that may air “ideas [that] are offensive.” So to think that academics may in fact be an obstacle towards achieving sensible policies regarding guns might appear ridiculous on the surface.

Yet there’s actually good reason to think that academic liberals bear some responsibility for the United States’ inability to regulate guns like every other industrialized—I nearly said, “civilized”—nation on earth. That’s because changing gun laws would require specific demands for action, and as political science professor Adolph Reed, Jr. of the University of Pennsylvania put the point not long ago in Harper’s, these days the “left has no particular place it wants to go.” That is, to many on campus and off, making specific demands of the political sphere is itself a kind of concession—or in other words, as journalist Thomas Frank remarked a few years ago about the Occupy Wall Street movement, today’s academic left teaches that “demands [are] a fetish object of literal-minded media types who stupidly crave hierarchy and chains of command.” Demanding changes to gun laws is, after all, a specific demand, and to make specific demands is, from this sophisticated perspective, a kind of “sell out.”

Still, how did the idea of making specific demands become a derided form of politics? After all, the labor movement (the eight-hour day), the suffragette movement (women’s right to vote) or the civil rights movement (an end to Jim Crow) all made specific demands. How then has American politics arrived at the diffuse and essentially inarticulable argument of the Occupy movement—a movement within which, Elizabeth Jacobs claimed in a report for the Brookings Institution while the camp in Zuccotti Park still existed, “the lack of demands is a point of pride”? I’d suggest that one possible way the trick was turned was through a 1967 article written by one Robert Bellah, of Harvard: an article that described American politics, and its political system, as a “civil religion.” By describing American politics in religious rather than secular terms, Bellah opened the way towards what some have termed the “non-politics” of Occupy and other social movements—and, incidentally, allowed children like Adam Lanza’s victims to die.

In “Civil Religion in America,” Bellah—who received his bachelor’s from Harvard in 1950, and then taught at Harvard until moving to the University of California at Berkeley in 1967, where he continued until the end of his illustrious career—argued that “few have realized that there actually exists alongside of and rather clearly differentiated from the churches an elaborate and well-institutionalized civil religion in America.” This “national cult,” as Bellah terms it, has its own holidays: Thanksgiving Day, Bellah says, “serves to integrate the family into the civil religion,” while “Memorial Day has acted to integrate the local community into the national cult.” Bellah also remarks that the “public school system serves as a particularly important context for the cultic celebration of the civil rituals” (a remark that, incidentally, perhaps has played no little role in the attacks on public education over the past several decades). Bellah also argues that various speeches by American presidents like Abraham Lincoln and John F. Kennedy are examples of this “civil religion” in action: Bellah spends particular time with Lincoln’s Gettysburg Address, which, he notes, the poet Robert Lowell observed is filled with Christian imagery and constitutes “a symbolic and sacramental act.” In saying so, Bellah is merely following a longstanding tradition regarding both Lincoln and the Gettysburg Address—a tradition, however, that does not have the political valence that Bellah, or his literal spiritual followers, might think it does.

“Some think, to this day,” wrote Garry Wills of Northwestern University in his magisterial Lincoln at Gettysburg: The Words that Remade America, “that Lincoln did not really have arguments for union, just a kind of mystical attachment to it.” It’s a tradition that Wills says “was the charge of Southerners” against Lincoln at the time: after the war, Wills notes, Alexander Stephens—the only vice president the Confederate States ever had—argued that the “Union, with him [Lincoln], in sentiment rose to the sublimity of a religious mysticism.” Still, it’s also true that others felt similarly: Wills points out that the poet Walt Whitman wrote that “the only thing like passion or infatuation” in Lincoln “was the passion for the Union of these states.” Nevertheless, it’s a dispute that might have fallen by the historical wayside if it weren’t for the work of literary critic Edmund Wilson, who called his essay on Lincoln (collected in a relatively famous book Patriotic Gore: Studies in the Literature of the American Civil War) “The Union as Religious Mysticism.” That book, published in 1962, seems to have at least influenced Lowell—the two were, if not friends, at least part of the same New York City literary scene—and it seems plausible that the idea reached Bellah through Lowell.

Even if there was no direct route from Wilson to Bellah, however, it seems indisputable that the notion—taken from Southerners—concerning the religious nature of Lincoln’s arguments for the American Union became widely transmitted through American culture. Richard Nixon’s speechwriter, William Safire—later a longtime columnist for the New York Times—was familiar with Wilson’s ideas: as Mark Neely observed in his The Fate of Liberty: Abraham Lincoln and Civil Liberties, on two occasions in Safire’s novel Freedom, “characters comment on the curiously ‘mystical’ nature of Lincoln’s attachment to the Union.” In 1964, the theologian Reinhold Niebuhr published an essay entitled “The Religion of Abraham Lincoln,” while in 1963 William J. Wolfe of the Episcopal Theological School of Cambridge, Massachusetts claimed that “Lincoln is one of the greatest theologians in America,” in the sense “of seeing the hand of God intimately in the affairs of nations.” Sometime in the early 1960s and afterwards, in other words, the idea took root among some literary intellectuals that the United States was a religious society—not one based on an entirely secular philosophy.

At least when it comes to Lincoln, at any rate, there’s good reason to doubt this story: far from being a religious person, Lincoln has often been described as non-religious or even an atheist. His longtime friend Jesse Fell—so close to Lincoln that it was he who first suggested what became the famous Lincoln-Douglas debates—for instance once remarked that Lincoln “held opinions utterly at variance with what are usually taught in the church,” and Lincoln’s law partner William Herndon—who was an early fan of Charles Darwin’s—said that the president also was “a warm advocate of the new doctrine.” Being committed to the theory of evolution—if Lincoln was—doesn’t mean, of course, that the president was therefore anti-religious, but it does mean that the notion of Lincoln as religious mystic has some explaining to do: if he was religious, it apparently was in no very simple way.

Still, as mentioned, the view of Lincoln as a kind of prophet did achieve at least some success within American letters—but, as Wills argues in Lincoln at Gettysburg, that success has in turn obscured what Lincoln really argued concerning the structure of American politics. As Wills remarks for instance, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster, and few if any have considered Webster a mystic.” Webster’s views, in turn, descend from a line of American thought that goes back to the Revolution itself—though its most significant moment was at the Constitutional Convention of 1787.

Most especially, it descends from one James Wilson: a Scottish emigrant, a delegate to the Constitutional Convention of 1787, and later one of the first justices of the Supreme Court of the United States. If Lincoln got his notions of the Union from Webster, then Webster got his from Supreme Court Justice Joseph Story: as Wills notes, Theodore Parker, the Boston abolitionist minister, once remarked that “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” Story, for his part, got his notions from Wilson: as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later justice), Wilson was “a source for Joseph Story’s constitutional nationalism.” And Wilson’s arguments concerning the constitution—which he had a strong hand in making—were hardly religious.

At the constitutional convention, one of the most difficult topics to confront the delegates was the issue of representation: one of the motivations for the convention itself, after all, was the fact that under the previous terms of government, the Articles of Confederation, each state, rather than each member of the Continental Congress, possessed a vote. Wilson had already, in 1768, attacked the problem of representation as being one of the foremost reasons for the Revolution itself—the American colonists were supposed, by British law, to be fully as much British subjects as a Londoner or Mancunian, yet had no representation in Parliament: “Is British freedom,” Wilson therefore asked in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament, “denominated from the soil, or from the people, of Britain?” That question was very much the predecessor of the question Wilson would ask at the convention: “For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land.

Wilson also made an argument that would later be echoed by Lincoln: he drew attention to the disparities of population between the several states. At the time of the convention, Pennsylvania—just as it is today—was a much more populous state than New Jersey was, a difference that made no difference under the Articles of Confederation, under which all states had the same number of votes: one. “Are not the citizens of Pennsylvania,” Wilson therefore asked the Convention, “equal to those of New Jersey? Does it require 150 of the former to balance 50 of the latter?” This argument would later be echoed by Lincoln, who, in order to illustrate the differences between free states and slave states, would—in October of 1854, at Peoria, in the speech that would mark his political comeback—note that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

The point of attack for both men, in other words, was precisely the same: the matter of representation in terms of what would later be called a “one man, one vote” standard. It’s an argument that hardly appears “mystical” in nature: since the matter turns upon ratios of numbers to each other, it seems more apposite to describe the point of view adopted here as, if anything, “scientific”—if it weren’t for the fact that even the word “scientific” seems too dramatic a word for a matter that appears to be far more elemental.
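(Lincoln’s arithmetic, for what it is worth, checks out; the census figures below are his own, and the calculation is only an illustration of the ratio he gestures at:

$$2 \times 274{,}567 = 549{,}134, \qquad 581{,}813 - 549{,}134 = 32{,}679, \qquad \frac{581{,}813}{274{,}567} \approx 2.12,$$

so that, given the two states’ equal representation, a white South Carolinian counted for a little more than twice a white Mainer in the control of the government.)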

Were Lincoln or Wilson alive today, then, it seems that the first point they might make about the gun control debate is that it is a matter about which the Congress is greatly at variance with public opinion: as Carl Bialik reported for FiveThirtyEight this past January, whenever Americans are polled, “at least 70 percent of Americans [say] they favor background checks,” and furthermore an October 2015 poll by CBS News and the New York Times “found that 92 percent of Americans—including 87 percent of Republicans—favor background checks for all gun buyers.” Yet, as virtually all Americans are aware, it has become essentially impossible to pass any sort of sensible legislation through Congress: a fact dramatized this spring by a “sit-down strike” in Congress by congressmen and congresswomen. What Lincoln and Wilson might further say about the point is that the trouble can’t be solved by such a “religious” approach: instead, what they presumably would recommend is that what needs to change is a system that inadequately represents the people. That isn’t the answer that’s on offer from academics and others on the American left, however. Which is to say that, soon enough, there will be another Adam Lanza to bewail—another of the sacrifices, one presumes, that the American left demands Americans must make to what one can only call their god.

For Miracles Are Ceased

Turn him to any cause of policy,
The Gordian knot of it he will unloose …
—Henry V

For connoisseurs of Schadenfreude, one of the most entertaining diversions of the past half-century or so is the turf war fought out in the universities between the sciences and the humanities now that, as novelist R. Scott Bakker has written, “at long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities.” A lot of what’s happened in the humanities since the 1960s—the “canon wars,” the popularization of Continental philosophy, the establishment of various sorts of “studies”—could be described as a disciplinary battle with the sciences, and not the “political” war that it is often advertised as; under that description, the vaunted outreach of the humanities to previously underserved populations stops looking quite so noble and starts looking more like the efforts, a century ago, of robber-baron industrialists to employ minority scabs against striking workers. It’s a comparison, in fact, that is not meant flippantly, but one that suggests the history of the academy since the 1960s looks less like the glorious march towards inclusion its proponents sometimes portray it as—and rather more like the initial moves of an ideological war designed to lay the foundation for the impoverishment of all America.

According to University of Illinois at Chicago professor of literature Walter Benn Michaels, after all, today’s humanistic academy has largely become the “human resources department of neoliberalism.” Michaels’ work suggests, in fact, that the “real” purpose of the professoriate’s promotion of the interests of women and minorities has not been the sheer justice of the cause, but rather the preservation of its own antiquated and possibly ridiculous methods of “scholarship.” That bargain, however—if there was one—may perhaps be said to have had unintended consequences: among them, the reality that some CEOs enjoy pay thousands of times that of the average worker.

Correlation is not causation, of course, but it does seem inarguable that, as former Secretary of Labor Robert Reich wrote recently in Salon, Americans have forgotten the central historical lesson of the twentieth century: that a nation’s health (and not just its economic health) depends on consumer demand. As Reich wrote, contrary to those who argue in favor of some form of “trickle down” economics, “America’s real job creators are consumers, whose rising wages generate jobs and growth.” When workers get raises, they have “enough purchasing power to buy what expanding businesses [have] to offer.” In short (pardon, Secretary Reich), “broadly shared prosperity isn’t just compatible with a healthy economy that benefits everyone—it’s essential to it.” But Americans have, it seems, forgotten that lesson: as many, many observers have demonstrated, American wages have largely been stagnant since the early 1970s.

Still, that doesn’t mean the academy is entirely to blame: for the most part, it’s only because of the work of academics that the fact of falling wages is known with any certainty—though it’s also fair to say that the evidence can be gathered by a passing acquaintance with reality. Yet it’s also true that, as New York University professor of physics Alan Sokal averred some two decades ago, much of the work of the humanities since the 1960s has been devoted towards undermining, in the name of one liberatory vision or another, the “stodgy” belief “that there exists an external world, [and] that there exist objective truths about it.” Such work has arguably had a version of the political effect often bombastically claimed for it—undoubtedly, there are many more people from previously unrepresented groups in positions of authority throughout American society today than there were before.

Yet, as the Marxist scholars often derided by their “postmodernist” successors knew—and those successors appear to ignore—every advance has its cost, and interpreted dialectically the turn of the humanities away from scientific naturalism has two possible motives: the first, as mentioned, the possibility that territory once the exclusive province of the humanities has been invaded by the sciences, and that much of the behavior of professors of the humanities can be explained by fear that “the traditional humanities are about to be systematically debunked” by what Bakker calls “the tremendous, scientifically-mediated transformations to come.” In the wake of the “ongoing biomechanical renovation of the human,” Bakker says, it’s become a serious question whether “the idiom of the humanities can retain cognitive legitimacy.” If Bakker’s suggestion is correct, then the flight of the humanities from the sciences can be interpreted as something akin to the resistance of old-fashioned surgeons to the practice of washing their hands.

There is, however, another possible interpretation: one that accounts for the similarity between the statistical evidence of rising inequality since the 1970s gathered by many studies and the evidence in favor of the existence of global warming—a comparison not made lightly. In regards to both, there’s a case to be made that many of the anti-naturalistic doctrines developed in the academy have conspired with the mainstream media’s tendency to ignore reality to prevent, rather than aid, political responses—a conspiracy that itself is only encouraged by the current constitutional structure of the American state, which according to some academic historians (of the non-“postmodern” sort) was originally designed with precisely the intention of both ignoring and preventing action about another kind of overwhelming, but studiously ignored, reality.

In early March 1860, the not-yet presidential candidate Abraham Lincoln addressed an audience at New Haven, Connecticut; “the question of Slavery,” he said during that speech, “is the question, the all absorbing topic of the day.” Yet it was also the case, Lincoln observed, that while in private this was the single topic of many conversations, in public it was taboo: according to slavery’s defenders, Lincoln said, opponents of slavery “must not call it wrong in the Free States, because it is not there, and we must not call it wrong in the Slave States because it is there,” while at the same time it should not be called “wrong in politics because that is bringing morality into politics,” and also that it should not be called “wrong in the pulpit because that is bringing politics into religion.” In this way, even as slavery’s defenders could admit that slavery was wrong, they could also deny that there was any “single place … where this wrong thing can properly be called wrong!” Thus, despite the fact that slavery was of towering importance, it was also to be disregarded.

There were, of course, entirely naturalistic reasons for that premeditated silence: as documented by scholars like Leonard Richards and Garry Wills, the structure of American government itself is due to a bargain between the free and the slave states—a bargain that essentially ceded control of the federal machinery to the South in exchange for their cooperation. The evidence is compelling: “between Washington’s election and the Compromise of 1850,” as Richards has noted for example, “slaveholders controlled the presidency for fifty years, the Speaker [of the House]’s chair for forty-one years, and the chairmanship of House Ways and Means [the committee that controls the federal budget] for forty-two years.” By controlling such key offices, according to these scholars, slaveowners could prevent the federal government from taking any action detrimental to their interests.

The continuing existence, even beyond the existence of slavery, of structures originally designed to ensure Southern control—among them the Supreme Court and the Senate, institutions well known to constitutional scholars as offerings to society’s “aristocratic” interests, even if the precise nature of that interest is never explicitly identified as such—may in turn explain, naturalistically, the relative failure of naturalistic, scientific thinking in the humanities over the past several decades, even as the public need for such thinking has only increased. Such, at least, is what might be termed the “positive” interpretation of humanistic antagonism toward science: not so much an interested resistance to progress as a principled reaction to a continuing drag on not just the political interests of Americans, but perhaps even on the progress of knowledge and truth itself.

What’s perhaps odd, to be sure, is that no one from the humanities has dared to make this case publicly—excluding only a handful of historians and law professors, most of them far from the scholarly centers of excitement. On the contrary, jobs in the humanities generally go to people who urge, like European lecturer in art history and sociology Anselm Joppe, some version of a “radical separation from the world of politics and its institutions of representation and delegation,” and who ridicule those who “still flock to the ballot box”—often connected, as Joppe’s proposals are, to a ban on television and an opposition to both genetically modified food and infrastructure investment. Still, even when academics—as Richards and Wills and others have—have made their case in a responsible way, none has connected that struggle to the larger issues of the humanities generally. Of course, to make such connections—to make such a case—would require such professors to climb down from the ivory tower, which is precisely the perch that enables them to do the sort of thinking I have attempted to present here: a descent that would inevitably present innumerable, and perhaps insuperable, difficulties. Yet, without such attempts, it’s difficult to see how either the sciences or the humanities can be preserved—to say nothing of the continuing existence of the United States.

Still, there is one “positive” possibility: if none of them do, then the opportunities for Schadenfreude will become nearly limitless.

Fine Points

 

Whenever asked a question, [John Lewis] ignored the fine points of whatever theory was being put forward and said simply, “We’re gonna march tonight.”
—Taylor Branch, Parting the Waters: America in the King Years, Vol. 1

“Is this how you build a mass movement?” asked social critic Thomas Frank in response to the Occupy Wall Street movement: “By persistently choosing the opposite of plain speech?” To many in the American academy, the debate is over—and plain speech lost. More than fifteen years ago, articles like philosopher Martha Nussbaum’s 1999 criticism of professor Judith Butler, “The Professor of Parody,” or political scientist James Miller’s late 1999 piece “Is Bad Writing Necessary?” got published—and both articles sank like pianos. Since then it’s seemed settled that (as Nussbaum wrote at the time) the way “to do … politics is to use words in a subversive way.” Yet at a minimum this pedagogy diverts attention from, as Nussbaum says, “the material condition of others”—and at worst, as professor Walter Benn Michaels suggests, it turns the academy into “the human resources department of the right, concerned that the women [and other minorities] of the upper middle class have the same privileges as the men.” Supposing then that bad writers are not simply playing their part in class war, what is their intention? I’d suggest that subversive writing is best understood as a parody of a tactic used, but not invented, by the civil rights movement: packing the jails.

“If the officials threaten to arrest us for standing up for our rights,” Martin Luther King, Jr. said in a January 1960 speech in Durham, North Carolina, “we must answer by saying that we are willing and prepared to fill up the jails of the South.” King’s speech was aimed directly at the movement’s pressing problem: bailing out protestors cost money. In response, Thomas Gaither, a field secretary for the Congress for Racial Equality (CORE), devised a solution: he called it “Jail No Bail.” Taylor Branch, the historian, explained the concept in Parting the Waters: America in the King Years 1954-63: the “obvious advantage of ‘jail, no bail’ was that it reversed the financial burden of protest, costing the demonstrators no cash while obligating the white authorities to pay for jail space and food.” All protestors had to do was get arrested and serve the time—and thereby cost the state their room and board.

Yet Gaither did not invent the strategy. “Packing the jails” as a strategy began, so far as I can tell, in October of 1909; so reports the Minnesotan, Harvey O’Connor, in his 1964 autobiography Revolution in Seattle: A Memoir. All that summer, the Industrial Workers of the World (the “Wobblies”) had been engaged in a struggle against “job sharks”: companies that claimed to procure jobs for their clients after the payment of a fee—and then failed to deliver. (“It was customary,” O’Connor wrote, “for the employment agencies … to promote a rapid turnover”: the companies would take the money and either not produce the job, or the company that “hired” the newly-employed would fire them shortly afterwards.) In the summer of 1909 those companies succeeded in having public assembly and speech by the Wobblies banned, and legal challenges proved impossible. So in October of that year the Wobblies “sent out a call” in the labor organization’s newspaper, the Industrial Worker: “Wanted: Men To Fill The Jails of Spokane.”

Five days later, the Wobblies held a “Free Speech Day” rally, and managed to get 103 men arrested. By “the end of November 500 Wobblies were in jail.” Through the “get arrested” strategy, the laborers filled the city’s jail “to bursting and then a school was used for the overflow, and when that filled up the Army obligingly placed a barracks at the city’s command.” And so the Wobblies’ strategy was working: the “jail expenses threatened to bankrupt the treasuries of cities even as large as Spokane.” As American writer and teacher Archie Binns had put the same point in 1942: it “was costing thousands of dollars every week to feed” the prisoners, and so the city was becoming “one big jail.” In this way, the protestors threatened to “eat the capitalistic city out of house and home”—and so the “city fathers” of Spokane backed down, instituting a permitting system for public marches and assemblies. “Packing the jails” won.

What, however, has this history to do with the dispute between plain-speakers and bad writers? In the first place, it demonstrates how our present-day academy would much rather talk about Martin Luther King, Jr. and CORE than Harvey O’Connor and the Wobblies. Writing ruefully about left-wing professors like himself, Walter Benn Michaels observes, “We would much rather get rid of racism than get rid of poverty”; elsewhere he says, “American liberals … carry on about racism and sexism in order to avoid doing so about capitalism.” Despite the fact that, historically, the civil rights movement borrowed a lot from the labor movement, today’s left doesn’t have much to say about that—nor much about today’s inequality. So connecting the tactics of the Wobblies to those of the civil rights movement is important because it demonstrates continuity where today’s academy wants to see, just as much as any billionaire, a sudden break.

That isn’t the only point of bringing up the “packing the jails” tactic, however—the real point is that writers like Butler are making use of a version of this argument without publicly acknowledging it. As laid out by Nussbaum and others, the unsaid argument or theory or idea or concept (whatever name you’d have for it) behind “bad” writing is a version of “packing the jails.” To be plain: the idea is that by filling enough academic seats (with the right sort of person), political change will somehow automatically follow, through a kind of osmosis.

Admittedly, no search of the writings of America’s professors, Judith Butler or otherwise, will discover a “smoking gun” regarding that idea—if there is one, presumably it’s buried in an email or in a footnote in a back issue of Diacritics from 1978. The thesis can only be discovered in the nods and understandings of the “professionals.” On what warrant, then, can I claim that it is their theory? If that’s the plan, how do I know?

My warrant extends from a man who knew, as Garry Wills of Northwestern says, something about “the plain style”: Abraham Lincoln. To Lincoln, the only possible method of interpretation is a judgment of intent: as Lincoln said in his “House Divided” speech of 1858, “when we see a lot of framed timbers, different portions of which we know have been gotten out at different times and places by different workmen,” and “we see these timbers joined together, and see they exactly make the frame of a house or a mill,” why, “in such a case we find it impossible not to believe” that everyone involved “all understood each other from the beginning.” Or as Walter Benn Michaels has put the same point: “you can’t do textual interpretation without some appeal to authorial intention.” In other words, when we see a lot of people acting in similar ways, we should be able to make a guess about what they’re trying to do.

In the case of Butlerian feminists—and, presumably, other kinds of bad writers—bad writing allows them to “do politics in [the] safety of their campuses,” as Nussbaum says, by “making subversive gestures through speech.” Instead of “packing the jails” this pedagogy, this bad writing, teaches “packing the academy”: the theory presumably being that, just as Spokane could only jail so many people, the academy can only hold so many professors. (Itself an issue, because there are a lot fewer professorships available these days, and they are liable only to become fewer.) Since, as Abraham Lincoln said about what he saw in the late 1850s, we can only make a guess—but we must make a guess—about what those intentions are, I’d hazard that my guess is more or less what these bad writers have in mind.

Unfortunately, in the hands of Butler and others, bad writing is only a parody—it mimics the act of going to jail while eliding the very real differences between that act and the act of attempting to become the, say, Coca-Cola Professor of Rhetoric at Wherever State. A black person willing to go to jail in the South in 1960 was a person with a great deal of courage—and still would be today. But it’s also true that it’s unlikely the courageous civil rights volunteers would have conceived of, much less carried out, the act of attempting to “pack the jails” without the example of the Wobblies prior to them—just as it might be argued that, without the sense of being of the same race and gender as their oppressors, the Wobblies might not have had the courage to pack the jails of Spokane. So it certainly could be argued that the work of the “bad writers” is precisely to make those connections—and so create the preconditions for similar movements in the future.

Yet, as George Orwell might have asked, “where’s the omelette?” Where are the people in jail—and where are the decent pay and equal rights that might follow them? Butler and other “radical” critics don’t produce either: I am not reliably informed of Judith Butler’s arrest record, but I’d suspect it’s not much. So when Nussbaum observed that Butler’s pedagogy “instructs people that they can, right now, without compromising their security, do something bold” [emp. added], she wasn’t being entirely snide then, and her words look increasingly prescient now. That’s what Nussbaum means when she says that “Butlerian feminism is in many ways easier than the old feminism”: it is a path that demonstrates to middle-class white people, women especially, just how they can “dissent” without giving up their status or power. Nussbaum thus implies that feminism, or any other kind of “leftism,” practiced along Butler’s lines is, quite literally, physically cowardly—and, perhaps more importantly, she suggests just why the “left,” such as it is, is losing.

For surely the “Left” is losing: as many, many people besides Walter Benn Michaels have written, economic inequality has risen, and is rising, even as the sentences and jargon of today’s academics have become more complex—and the academy’s own power slowly dissolves into a mire of adjunct professorships and cut-rate labor policies. Emmanuel Saez of the University of California says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928,” and Nobel Prize winner Paul Krugman says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” We witness the rise of plutocrats on a scale not seen since at least the fall of the Bourbons—or even the Antonines.

That is not to suggest, to be sure, that individual “bad writers” are or are not cowards: merely to be a black person or a woman requires levels of courage many people will never be aware of in their lifetimes. Yet, Walter Benn Michaels is surely correct when he says that as things now stand, the academic left in the United States today is more “a police force for, than an alternative to, the right,” insofar as it “would much rather get rid of racism [or sexism] than get rid of poverty.” Fighting “power” by means of a program of bad writing rather than good writing—that is, rather than writing designed to appeal to great numbers of people—is so obviously stupid it could only have been invented by smart people.

The objection is that giving up the program of Butlerian bad writing requires giving up the program of “liberation” her prose suggests: what Nussbaum calls Butler’s “radical libertarian” dream of the “sadomasochistic rituals of parody.” Yet as Thomas Frank has suggested, it’s just that kind of libertarian dream that led the United States into this mess in the first place: America’s recent troubles have, Frank says, resulted from “the political power of money”—a political power that was achieved courtesy of “a philosophy of liberation as anarchic in its rhetoric as Occupy [Wall Street] was in reality” [emp. Frank’s]. By rejecting that dream, American academics might obtain “food, schools, votes” and (possibly) less rape and violence for both women and men alike. But how?

Well, I have a few ideas—but you’d have to read some plain language.

Telegraphing A Punch

From his intense interest in the telegraph, Lincoln developed what Garry Wills calls a ‘telegraphic eloquence,’ with a ‘monosyllabic and staccato beat’ that gave Lincoln a means of ‘say[ing] a great deal in the fewest words’
—Sarah Luria, Capital Speculations: Writing and Building Washington D.C.



“Well,” I said, “I wanted to indicate to you that, while I had not shot the distance”—that is, used a rangefinder to measure it, since I don’t have one—“yet still I felt pretty confident about it.” We were on the eighteenth hole at Butler, which was our ninth hole. Mr. B., the member I was working for, was rebuking me for breaking one of the cardinal rules of looping: a good caddie never adds a caveat. All yardages are “154” or “87” or such; never “about 155” or “just shy of 90.” He was right: what I’d said was “either 123 or 24,” which isn’t exact in a narrow sense, but conveyed what I wanted it to convey. The significance of the point, however, became apparent only recently: in a broader sense because of the recent election, and in a more particular sense because of a party I’d attended shortly before.

The party was in Noble Square, one of those Chicago neighborhoods now infested with hipsters, women working at non-profits, and “all that dreary tribe of high-minded women and sandal-wearers and bearded fruit-juice drinkers who come flocking toward the smell of ‘progress’ like bluebottles to a dead cat,” as George Orwell once put it. The food provided by the host was not just vegetarian but vegan, and so just that much more meritorious: the woman whose guest I was seemed to imply that the host, whose food we were eating, was somehow closer to the godhead than the rest of us. Of that “us,” there were not many; most were women, and of the three men present, one was certainly gay, another wasn’t obviously so, and the third was me.

All of which sounds pretty awful, and almost certainly I’m going to catch hell for writing it, so I’ll hurry to explain that it wasn’t all bad. There was, for instance, a dog. So often, when attending such affairs, it’s necessary to listen to some explanation of the owner’s cat: how it cannot eat such and such, or needs such and such medicines, or how it was lost and became found—stories that, later, can become mixed up with said owner’s parallel stories of boyfriends discovered and discarded, so that it’s unclear whether that male of the species discovered in an alley outside of the Empty Bottle was of the human or feline variety. But a dog is usually a marker of a healthy sense of irony about one’s beliefs: dogs, or so some might say, encourage a sociality that precludes the aloofness necessary for genocide.

I bring this up because of one of the topics of conversation: one of the women present, a social worker by training, was discussing the distinction between her own leadership style and that of one of her colleagues, both being supervisors of some kind. One of the other women noted that it was difficult at times for women to assert that role: women, she said, often presented their ideas with diffidence and hesitancy. And so, as the women around her nodded in agreement, ideas that were actually better sometimes got ignored, simply because lesser ideas had been delivered with better rhetorical technique.

As it happens, all of the women at said party—all of them, presumably, Obama voters—were white, which is why the event came back to me just after the presidential election, when I read a short John Cassidy piece in the New Yorker. There, Cassidy points out that a lot of post-election ink has been spilled on the subject of why Obama lost the white male vote so disastrously—by twenty-seven percentage points—while simultaneously winning the women’s vote: Obama “carried the overall female vote by eleven [percentage] points,” Cassidy notes. (The final total was 55% to 44%.) Yet the story of the “gender gap” papers over another disconnect: although Obama won women as a distinct set of people, he actually lost one subset of them—white women.

Romney’s success among white women, in fact, is one reason why he did better among women in general than did the previous Republican candidate for the presidency, John McCain. In 2008, “Obama got fifty-six per cent of the female vote and John McCain got forty-three per cent,” which, even if the margin of error is taken into account, at least indicates that Obama made no further inroads into the women’s vote beyond those he’d already made four years earlier. And the reason Obama did not capture a greater percentage of women voters was that he lost white women: “Romney got fifty-six per cent of the white female vote; Obama got just forty-two per cent.” The question to be put to this fact is, obviously, what distinguishes white women from other women—or at least what it was about Mitt Romney that appealed to white women, and only white women.

Clearly there must be some commonality between Caucasian women and men: “Surely,” Cassidy says, “many of the same factors that motivated white male Romney supporters played into the decision-making of white female Romney supporters.” After all, both “are shaped by the same cultural and economic environment.” The explanatory factor that Cassidy finds is economic: “The reason Romney did a bit better … among white women is probably that they viewed him as a stronger candidate on economic issues, which are as important to women as to men.” That, though, only raises a further question: why did white women find Romney more persuasive on economic matters?

That, to be sure, is a very large question, and I wouldn’t presume to answer it here. What I would suggest, however, is that there might be a relationship between those results, the first presidential debate, and the anecdote with which I began this piece. If white people voted more for Romney, it might be because of one of the qualities he exhibited in the first presidential debate in early October: as many commentators noted afterward, Romney was “crisp and well-organized,” in the words of one pundit, while President Obama was “boring [and] abstract,” in the words of another. Romney was gut-punching, while Obama was waving his fists in the air.

Maybe Romney’s performance in that debate illustrated just why he should have been the candidate of white America: he, at least in early October if not elsewhere during the campaign, understood and used a particular rhetorical style to greater effect than Obama did. And, apparently, it worked: he did have greater success than Obama among the audience attuned to that appeal. In turn, what my experience at Butler might—perhaps—illustrate is just how that audience gets constructed: by what mechanisms the preference for that style is taught, and how unevenly those mechanisms are distributed.

If Romney achieved success among white Americans because he was briefer and more to the point than Obama—itself rather a whopper, admittedly—then it remains to understand just why that quality should appeal to that particular audience. And maybe the experience of caddies demonstrates why: as I mentioned at the start, the habit of saying “154” instead of “155 or so” is something that’s inculcated early among caddies, and while the same habit might be taught in, say, the public schools, there’s something wonderfully clarifying about learning it when there’s money at stake. White kids exposed to caddieing, in other words, probably take the lesson more to heart than other kids.

All of this, of course, is a gossamer of suppositions, but perhaps there’s something to it. Yet, if there is, Obama’s election in the teeth of Romney’s success among Caucasian voters may also forecast something else: that the old methods of doing things are no longer as significant, and may even no longer be viable. In caddieing, the old way of calling a yardage isn’t as important as it once was: since everyone now has a rangefinder (a device that measures the distance), there’s far less need to suppress uncertainty about the actual number, because there isn’t much uncertainty left. (This actually isn’t quite true, because rangefinders themselves aren’t as accurate as, say, pin location sheets, and anyway they still can’t tell you what club to hit.) Maybe, in part because of such technologies, in the future it won’t be as necessary to compress information, and hence the ability to do so won’t be as prized. If so, we’ll exist in a world that’s unrecognizable to a lot of people in this one.

Including, I suspect, Mr. B.

Hallow This Ground

“Country clubs and cemeteries are the biggest wasters of prime real estate!”
—Al Czervik (Rodney Dangerfield)
Caddyshack, 1980



As I write it’s been a month since the Ryder Cup—it’s Halloween in fact—and I’ve been thinking about the thirteenth hole. The back tee on the thirteenth hole on Medinah’s Course Three is about a hundred yards behind the most forward tee-box on the par-three hole, and perhaps fifteen feet higher; during the Cup, viewers often witnessed Michael Jordan lying on the grass next to that tee, watching the players send their shots soaring through the slot in the trees and out over Lake Khadijah where, for the first time, the golf ball is exposed to whatever wind is there. It’s one of the most photogenic spots on Medinah’s property: while the first tee is a popular subject, the reigning photographic champion of Course Three is the back tee on the thirteenth. There are, it seems, a number of people who think they know why.

The thirteenth, for those who haven’t been there, is a very long par-three hole: two hundred and fifty yards, give or take, and the tee shot has to carry part of Medinah’s Lake Khadijah (named after Muhammad’s wife) in order to reach the green. Most amateurs are content to take a picture from the height, then climb down to a more comfortable elevation—their cameras, after all, usually have more chance of capturing the green than their clubs do. It’s at this point, as a writer named Steve Sailer might put it, that the Anglo-Irish writer Edmund Burke—chiefly remembered as a member of the British Parliament who was not unfriendly to the American Revolution, though later an enemy of the French one—comes in.

Burke, to those with uneasy educations, first came to prominence via a book about the distinction between the beautiful and what he called the sublime. In an essay entitled, “From Bauhaus to the Golf Course: The Rise, Fall, and Revival of Golf Course Architecture,” Sailer notes that Burke’s distinction fits golf courses quite well, because while for Burke the “beautiful is … meadows, valleys, slow moving streams, grassland intermingled with copses of trees, the whole English country estate shtick,” the “sublime is nature so magnificent that it induces the feeling of terror because it could kill you, such as by falling off a mountain or into a gorge.” Or at least, the golf course is “the mock sublime, where you are in danger of losing not your life, but your mis-hit golf ball into a water hazard or ravine” or such.

The thirteenth is a good example of the “mock sublime”; while it’s true that no one is likely to die by falling off the tee, it is true that a great many hopes have been dashed, or at least threatened, there. Sam Snead, who had four runner-up finishes in the US Open over his career, missed the green during the final round of the 1949 edition, made bogey—and missed a playoff with Cary Middlecoff by a stroke. Ben Crenshaw saw his chances to get into the playoff at the 1975 US Open doused in the lake. In 1999 Tiger Woods, like Snead fifty years before, missed the green in the final round and made double bogey—though, while Tiger’s over-par score allowed Sergio Garcia’s dramatic shot from behind a tree on the sixteenth hole to matter, it didn’t end up costing him the tournament.

At any rate, at times I’ll find myself behind somebody’s iPhone, taking a picture of the foursome on that tee, looking down towards the distant flag. People like Sailer are dissatisfied with answering the question “Why?” by invoking past disasters or the musings of eighteenth-century philosophers. For Sailer and the rest, it seems, a Harvard biologist has produced just the right balm for this intellectual itch. Sailer himself notes the source of that balm in his essay, but it’s also been mentioned by David Owen—author of The Chosen One (about Tiger Woods) and a writer for the New Yorker among other places—in his blog.

Owen has been reading the biologist Edward O. Wilson’s recent book, The Social Conquest of Earth, in which the esteemed Harvard sociobiologist claims that human beings desire three things in their surroundings: they “want to be on a height looking down, they prefer open savanna-like terrain with scattered trees and copses, and they want to be close to a body of water, such as a river, lake, or ocean.” The reason for these three desires, Owen says Wilson says, is an “‘innate affiliation’ that humans feel with landscapes that resemble ‘those environments in which our species evolved over millions of years in Africa.’” An affiliation that, surely, is satisfied by the heights of the back tee on the thirteenth hole; QED.

All of it sounds quite tidy as an explanation. People who think like this, however, might consider Sam Snead’s remark at a major championship played only three years before the one at Medinah. As his train pulled into town for the 1946 Open Championship (the proper name for the British Open), Snead infamously remarked that St. Andrews’ Old Course—the one that’s had golfers on it since the fifteenth century—looked like “an old, abandoned golf course.” (Unlike at Medinah three years later, and despite his remark, Snead won the tournament.) At first look, Snead’s comment sounds like the same kind of humorous remark made by the “hillbilly” who once asked his agent how his photo got into a New York paper “when I ain’t never been there.” (Snead said later that he was just pulling legs.) But what Snead said isn’t just that.

It’s also a marker of time’s passage: how the look of St. Andrews had, by the 1940s, stopped being synonymous with “golf course.” By then, “golf course” meant something different. Not long before, that is, Snead’s comment would not have been understandable. “The chosen home of golf, its ‘most loved abode,’” wrote the writer and artist Garden Grant Smith in The World of Golf in 1898, “is the links, or common land, which is found by the seashore.” As John Paul Newport wrote in the Wall Street Journal about St. Andrews in 2010, links courses were built on “coastal waste land used for golf initially because it was unsuitable for farming.” And what’s most noticeable—or perhaps rather unnoticeable—about links courses, as opposed to other kinds, is just what they don’t have: trees.

If trees could grow on that land, in other words, Scotsmen would have farmed it. So no true links course has any trees on it, which is how all golf courses looked—until the end of the nineteenth century. The course whose building signaled that shift was Willie Park, Jr.’s design of Sunningdale’s “Old Course” (it wasn’t called the Old Course when it was opened, of course) in 1901. The construction of Sunningdale’s first course had such an impact in part because of who its designer was: in addition to winning the Open twice himself, in 1887 and 1889, Park was the son of Willie Park, Sr., who not only had won the first Open Championship ever held, at Prestwick in 1860, but then won it again three more times. Junior’s uncle, Mungo Park, who is not to be confused with the explorer of the same name, also won the Open, in 1874.

Whatever Park did, in other words, came pretty close to defining what golf was: imagine the kind of authority Gary Nicklaus would have if, in addition to his dad’s victories, he’d won the US Open twice himself—and one of his brothers had won it too. Anyway, according to Wikipedia’s entry on Sunningdale Golf Club, Park’s design was “set in a heathland area, with sandy subsoil amid mixed treed foliage,” and was “among the first successful courses located away from the seaside, as many people had thought at the time that turf would not grow well in such regions.” The success of Sunningdale and Park’s Huntercombe—also opened in 1901, and where, later, James Bond would own a 9 handicap—proved to the traditionalists that golf could be played away from the sea.

Park’s later designs, like Olympia Fields’ North Course, further demonstrated that golf courses could be built with trees on them. In retrospect, of course, that move would appear inevitable: as Garden Grant Smith observed in 1898, “we cannot all live by the seaside, and as we must apparently all play golf, we must take it where and how we can.” If proximity to the ocean were necessary to the game, golf would still be a curious Scottish custom and not a worldwide sport.

It’s hard to think, then, that somehow golf is popular because it replicates the look of a landscape that, surely, only a small percentage of human beings ever experienced: the landscape of some portion of Africa’s vastness. Consider, for instance, the description offered in 1865 by a Scotsman named William Saunders about a project he was working on: “The disposition of trees and shrubs is such as will ultimately produce a considerable degree of landscape effect” by working together with the “spaces of lawn provided” to “form vistas … showing … prominent points.” The effect aimed for by Saunders, in other words, sounds similar to that described by Wilson: grassy lawns interrupted here and there by copses of trees, arranged so as to open up what Saunders calls a “pleasure ground effect.” Saunders’ project, in short, sounds very much like a modern golf course—and like support for Wilson’s theory.

Yet what Saunders was describing was not a new golf course, but rather the design for a new kind of park: the national cemetery at Gettysburg, built in the aftermath of the great battle. I found Saunders’ remarks in a book entitled Lincoln at Gettysburg, whose author, Garry Wills, takes pains to trace the connections between what ultimately got constructed in that Pennsylvania town and its forebears. The American source for the design of the Gettysburg burial ground, Wills says, was a cemetery built outside of Boston in 1831. Called Mount Auburn, it was, it seems, a place so well known in the nineteenth century that it even introduced the word “cemetery”—a word whose origin is Greek—to American English.

Like its Pennsylvania progeny a generation later, Mount Auburn would consist of “shady groves in the neighborhood of murmuring streams and merry fountains,” as Justice Story of the United States Supreme Court said in a speech at its opening. These new burial grounds were to be unlike the churchyard, the former site of American burials; rather than urban, they would be rural: “an escape from the theological gloom of churchyards, a return to nature,” as Wills says.

Mount Auburn, in turn, had its genesis in Père Lachaise, the cemetery in Paris now best known to Americans as the final resting place of Jim Morrison, leader of the American band the Doors. Opened in 1804, Père Lachaise was meant to be an alternative to the crowded churchyards of Paris; “outside the precincts of the city,” as the place’s Wikipedia entry reads. Alexandre Brongniart, the cemetery’s architect, imagined “an English garden mingled with a contemplation place,” as one website describes it. And Père Lachaise was meant to supersede the old churchyards in another way as well: “Every citizen has the right to be buried regardless of race or religion,” declared Napoleon Bonaparte on the occasion of the cemetery’s opening—a line with an especial resonance in the context of Gettysburg.

That resonance, in fact, might intimate that those who wish to trace golf’s attraction back to Africa have other motives in mind. “In the US,” writes David Givens—director of the Center for Nonverbal Studies—in Psychology Today, “according to Golf magazine, ninety-eight percent of CEOs play golf.” According to Givens, golf’s centrality to modern American business culture is by no means arbitrary. “Stalking through grassy fields in close-knit, face-to-face groups, sticks in hand,” Givens says, “business people enjoy the same concentration, competition, and camaraderie their ancestors once experienced in Africa.” In other words, golf is popular because it is a lot like hunting a wildebeest.

“On the geological time scale,” writes John McPhee in Annals of the Former World, “a human lifetime is reduced to a brevity that is too inhibiting to think about deep time”—and sometimes human beings like to castigate themselves for not thinking sufficiently long-term. But it’s also wise, perhaps, not to follow every lead down the rabbit hole of deep time’s abyss: this notion of golf’s appeal doesn’t do a great deal to explain why the golf course only began to resemble the African plain—if it has—within the past century, nor does it particularly explain why golf courses should resemble nineteenth-century cemeteries.

To believe Wilson and his followers, that is, we would have to believe not only that golf courses are more like Kenya than they are like Pennsylvania, but also that those infinitely tiny bits of plasma known as DNA somehow contain within them memories of an African past, and that those bits somehow trump the ideas championed by Napoleon and Lincoln—ideas that are, perhaps, at least as plausible as the idea that a player’s golf clubs, and not just his cell phone’s camera, can capture the green from the back tee at the thirteenth hole.