A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—John Milton, Paradise Lost, Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago, in the first line of his review of the biologist Richard Dawkins’ book The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, Eagleton is clearly the better read in theology. Yet although Dawkins the real person may be ignorant of the subtleties of the study of God, the rules of logic suggest that it is entirely possible for someone to be just as educated in theology as Eagleton—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed, but Eagleton wrote a review of someone else’s biography of him. His name is Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical issued by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the church. Just as, in other words, the scholar Richard Hofstadter called the American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult, then, to think of Aquinas—the “Angelic Doctor,” as Catholics sometimes call him—as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by examining old books and telling people about what they found, whereas Dawkins is, by training at any rate, a zoologist.

Yet while in that sense it could be argued that the Common Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that: it is something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton says, for example, about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That is a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” What is strange about Eagleton’s citation of Aquinas’ thought, however, is that it is usually counted as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief. Isaac Newton invoked the axiom—now usually called “Occam’s Razor”—in the rules of reasoning of his Principia, where the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture given at Oxford University in 1933, Newton’s successor Albert Einstein affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs, more or less, Aquinas’ thought that there is merely a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.
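It may help to see, concretely, how the sciences operationalize this axiom today. What follows is a minimal sketch of my own, not anything drawn from Newton or Einstein, and it uses invented data: model-selection criteria such as the Akaike Information Criterion (AIC) charge a model a fixed penalty for every parameter it uses, so that extra complexity must pay for itself in explanatory accuracy.

```python
# A minimal, illustrative sketch (not from this essay's sources) of the
# principle of parsimony as modern statistics formalizes it: the Akaike
# Information Criterion rewards goodness of fit but charges 2 per fitted
# parameter, so a more complex model must fit substantially better to win.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # a truly linear world, plus noise

def aic_for_degree(degree: int) -> float:
    """Fit a polynomial of the given degree to (x, y); return its AIC (lower is better)."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n, k = x.size, degree + 1  # k = number of fitted parameters
    return n * np.log(rss / n) + 2 * k  # Gaussian log-likelihood, up to a constant

for degree in (1, 5):
    print(f"degree {degree}: AIC = {aic_for_degree(degree):.1f}")
```

On data like these the straight line typically wins: the fifth-degree polynomial’s four extra parameters buy too little additional accuracy to justify themselves, which is Einstein’s “as simple and as few as possible” rendered in arithmetic.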

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists, however, seem to have held a position closer to the view supposedly expressed to Napoleon by the mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is of a different kind: instead of accepting the single-world hypothesis and disputing only whether God belongs in that world’s description, Eagleton’s quarrel is with the “principle of parsimony” itself—with the contention that there can be merely a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a great deal of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question, to my mind, is this: although Eagleton—as befits a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? I ask because the “principle of parsimony” is an essential part of a political case that has been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is one world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such thinking has been limited neither to the eighteenth century nor to the French side of the English Channel.

Indeed, there have long been like-minded people across the Channel—there is reason, in fact, to think the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and indeed, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as the French one.

Support for the idea of a single legislative house, called unicameralism, is not, however, limited to European sources. Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from Pennsylvania, the American state best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed towards water—and hence dies of dehydration. Franklin’s concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the Parliament of the United Kingdom passed the House of Lords Act, which all but ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker; the Italian leader claimed—much as Norris had—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when, according to a 2015 report by Betty Drexage for the Dutch government, there were three fewer unicameral legislatures and two more bicameral ones. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with the widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge that sets the sciences against something called “the humanities” is merely contingent, not a necessary part of reality. Terry Eagleton and other scholars in those disciplines may therefore advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.


Eat The Elephant

Well, gentlemen. Let’s go home.
—Sink the Bismarck! (1960).

Someday someone will die and the public will not understand why we were not more effective and throwing every resource we had at certain problems.
—FBI Field Office, New York City, to FBI Headquarters, Washington, D.C.
29 August 2001.

 

Simon Pegg, co-writer of the latest entry in the Star Trek franchise, Star Trek Beyond, explained the new film’s title in an interview over a year ago: the studio in charge of the franchise, Pegg said, thought that Star Trek was getting “a little too Star Trek-y.” One scene in particular seems designed to illustrate graphically just how “beyond” Beyond is willing to go: early on, the fabled starship Enterprise is torn apart by (as Michael O’Sullivan of the Washington Post describes it) “a swarm of mini-kamikaze ships called ‘bees.’” The scene is a pretty obvious signal of the new film’s attitude toward the past—but while the destruction of the Enterprise might well be read as a kind of meta-reference to the process of filmmaking (say, how movies, which are constructed by teams of people over years of work, can be torn apart by critics in a virtual instant), another way to view the end of the signature starship is in the light of how Star Trek’s original creator, Gene Roddenberry, first pitched the show: as “space-age Captain Horatio Hornblower.” The demise of the Enterprise is, in other words, a perfect illustration of a truth about navies today: they are examples of the punchline of the old joke about how to eat an elephant. (“One bite at a time.”) The payoff for thinking about Beyond in this second way, I would argue, is that it leads to much clearer thinking about things other than stories about aliens, or even stories themselves—like, say, American politics, where the elephant theory has held sway for some time.

“Starfleet,” the fictional organization employing James T. Kirk, Spock, and company, has always been framed as a kind of space-going navy—and as Pando Daily’s “War Nerd,” Gary Brecher, pointed out as long ago as 2002, navies are anachronistic in reality. Professionals know, as Brecher wrote fourteen years ago, that “every one of those big fancy aircraft carriers we love”—massive ships much like the fictional Enterprise—“won’t last one single day in combat against a serious enemy.” The reason we know this is not merely because of the attack on the USS Cole in 2000, which showed how two Al Qaeda guys in a thousand-dollar speedboat could blow a $250 million hole in a $2 billion warship, but also because—as Brecher points out in his piece—of research conducted by the U.S. military itself: a war game entitled “Millennium Challenge 2002.”

“Millennium Challenge 2002,” which conveniently took place in 2002, pitted an American “Blue” side against a fictional “Red” force (believed to be a representation of Iran). The commander of “Red” was Marine Corps Lieutenant General Paul K. Van Riper, who was hired because, in the words of his superiors, he was a “devious sort of guy”—though in the event, he proved to live up to his billing a little too well for the Pentagon’s taste. Taking note of the tactics used against the Cole, Van Riper attacked Blue’s warships with cruise missiles and a few dozen speedboats loaded with enormous cans of gasoline and driven by gentlemen with an unreasonable belief in the afterlife—a (fictional) attack that sent 19 U.S. Navy vessels to the bottom in perhaps 10 minutes. In doing so, Van Riper effectively demonstrated the truth also illustrated by the end of the Enterprise in Beyond: that large naval vessels are obsolete.

Even warships like the U.S. Navy’s latest supercarrier, the Gerald R. Ford—a floating city capable of completely leveling other cities of the non-floating variety—are nevertheless, as Brecher writes elsewhere, “history’s most expensive floating targets.” That is because they are vulnerable to exactly the sort of assault that takes down the Enterprise: “a saturation attack by huge numbers of low-value attackers, whether they’re Persians in Cessnas or mass-produced Chinese cruise missiles.” They are as vulnerable, in other words, as elephants are according to the old joke. Yet whereas that might be a revolutionary insight in the military, the notion that, with enough mice, even an elephant falls is old hat within American political circles.

After all, American politics has, at least since the 1980s, proceeded only by way of “saturation attacks by huge numbers of low-value attackers.” That was the whole point of what are now sometimes called “the culture wars.” During the 1980s and 1990s, as the late American philosopher Richard Rorty put the point, liberals and conservatives conspired together to allow “cultural politics to supplant real politics,” and for “cultural issues” to become “central to public debate.” In those years, it was possible to gain a name for oneself within departments of the humanities by attacking the “intrinsic value” of literature (while ignoring the fact that those arguments were isomorphic with similar ideas being cooked up in economics departments), while conversely, many on the religious right did much the same by attacking (sometimes literally) abortion providers or the teaching of evolution in the schools. To use a phrase of the British literary critic Terry Eagleton, in those years “micropolitics seem[ed] the order of the day”—somewhere during that time politics “shift[ed] from the transformative to the subversive.” What allowed that shift to happen, I’d say, was the notion that by addressing seemingly minor-scale points instead of major-scale ones, each side might eventually achieve a major-scale victory—or, to put it more succinctly, that by taking enough small bites they could eat the elephant.

Just as the Americans and the Soviets refused to send clouds of ICBMs at each other during the Cold War, and instead fought “proxy wars” from the jungles of Vietnam to the mountains of Afghanistan, during the 1980s and 1990s both American liberals and conservatives declined to put their chief warships to sea, and instead held them in port. But right at this point the two storylines—the story of the navy, the story of American politics—begin to converge. That’s because the story of why warships are obsolete is also a story about why that story has no application to politics whatever.

“What does that tell you,” Brecher rhetorically asks, “about the distinguished gentlemen with all the ribbons on their chests who’ve been standing up on … bridges looking like they know what they’re doing for the past 50 years?” Since all naval vessels are simply holes in the water once the shooting really starts, those gentlemen must be, he says, “either stupid or so sleazy they’re willing to make a career commanding ships they goddamn well know are floating coffins for thousands.” Similarly, what does that tell you about an American liberal left that supposedly stands up for the majority of Americans, yet has stood by while, for instance, wages have remained—as innumerable reports confirm—essentially the same for forty years? For while it is all well and good for conservatives to agree to keep their Bismarcks and Nimitzes in port, that sort of agreement does not have the same payout for those on the liberal left—as ought to be obvious to anyone with an ounce of sense.

To see why requires seeing what the two major vessels of American politics are. Named most succinctly by William Jennings Bryan at Chicago in 1896, they concern what Bryan said were the only “two ideas of government”: the first being the idea that, “if you just legislate to make the well-to-do prosperous, that their prosperity will leak through on those below,” and the “Democratic idea,” the idea “that if you legislate to make the masses prosperous their prosperity will find its way up and through every class that rests upon it.” These are the two arguments that are effectively akin to the Enterprise: arguments at the very largest of scales, capable of surviving voyages to strange new worlds—because they apply as well to the twenty-third century of the Federation as they did to Bryan’s nineteenth. But that’s also what makes them different from any real battleship: unlike the Enterprise, they can’t be taken down no matter how many attack them.

There is, however, another way in which ideas can resemble warships: both molder in port. That is one reason why, to speak of naval battles, the French lost the Battle of Trafalgar in 1805: as Wikipedia reports, because the “main French ships-of-the-line had been kept in harbour for years by the British blockade,” the “French crews included few experienced sailors and, as most of the crew had to be taught the elements of seamanship on the few occasions when they got to sea, gunnery was neglected.” It is perfectly all right to stay in port, in other words, if you are merely protecting the status quo—wasting time on minor issues is safe enough if keeping things as they are is the goal. But that is just the danger from the other point of view: the more time in port, the less able in battle—and certainly the history of the past several generations shows that supposed liberal or left types have been increasingly unwilling to take what Bryan called the “Democratic idea” out for a spin.

Undoubtedly, in other words, American conservatives have relished observing left-wing graduate students in the humanities debate—to use some topics Eagleton suggests—“the insatiability of desire, the inescapability of the metaphysical … [and] the indeterminate effects of political action.” But what might actually effect political change in the United States, assuming anyone is still interested in the outcome and not in what it means for a career, is a plain, easily readable description of how that change might be accomplished. It is too bad that the mandarin admirals in charge of liberal politics these days appear to think that such a notion is a place where no one has gone before.

Miracles Alone

They say miracles are past; and we have our
philosophical persons, to make modern and familiar, things supernatural and causeless.
—All’s Well That Ends Well, Act II, scene 3.

“If academic writing is to become expansive again,” wrote Joshua Rothman in The New Yorker a year ago, in one of the more Marxist sentences to appear in a mainstream publication lately, “academia will probably have to expand first.” What Rothman was referring to was the minor controversy set off by a piece by Nicholas Kristof in the New York Times entitled “Professors, We Need You!”—a rant attacking the “unintelligibility” of contemporary academic writing, blah blah blah. Rothman’s take on the business—as a former graduate student himself—is that the increasing obscurity of the superstructure of academic writing is the result of an ever-smaller base: “the audience for academic work has been shrinking,” he says, and so building “a successful academic career” requires “serially impress[ing] very small groups of people,” like journal editors and hiring committees. So, to Rothman, turning academic writing around would mean an expanding university system: that is, one in which it wasn’t terribly difficult to get a job. To put it another way: in order to make academics visible to the people, it would probably help to allow the people to become academics.

To very many current academics, however, that route is precisely what is off the table, because their work involves questioning the assumption that powers Rothman’s whole proposal: that writing for large numbers of people requires writing that does not demand an enormous amount of training to be read. A lot of academics in today’s humanities departments would “historicize” that assumption by saying that it only came into being with the Protestant Reformation at the beginning of the modern era, which held that the Bible could be read, and understood, by anyone—not just by a carefully chosen set of acolytes capable of translating the holy mysteries to the laity, as in Roman Catholic practice. Academics of this sort might then make reference, as Benedict Anderson did in his Imagined Communities, to “print capitalism”—to how the growth of newspapers and other printed materials demonstrated that writing untethered from a clerical caste could generate huge profits. And so on.

The defenses of obscure and difficult writing offered by such academics as Judith Butler, however, do not always take that turn: very often, difficult writing is defended on the grounds that such esoteric efforts “can help point the way to a more socially just world,” because “language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.” That, one supposes, might be true—and it is certainly true that what is known as the “cultural left” has, as the philosopher Richard Rorty once remarked, made all of us more sensitive to the peculiar ways in which language can influence how people perceive one another. But it is also true that this kind of thinking fails to think through the full meaning of standing against intelligibility.

Most obviously, though this point is often obscured, it means standing against the doctrine known as “naturalism,” defined by the Stanford Encyclopedia of Philosophy as “asserting that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity.” At least since Mark Twain adapted naturalism to literature by insisting that “the personages of a tale shall confine themselves to possibilities and let miracles alone,” a baseline belief in naturalism has been what created the kind of widely literate public Kristof’s piece requires. Mysteries, that is, can only be understood by someone initiated into them: hence, to proceed without initiates requires outlawing mystery.

As should be obvious but apparently isn’t, it’s only absent a belief in mystery that anyone could, in Richard Rorty’s words, “think of American citizenship as an opportunity for action”—rather than be, as Rorty laments so much of this so-called “cultural left” has become, possessed by the “spirit of detached spectatorship.” Difficult writing, in other words, might be able to do something for small groups, but it cannot, by definition, help larger ones—which is to say it is probably no accident that Judith Butler left just what she meant by “socially just” undefined, because by the logic of her argument it almost certainly does not include the vast majority of America’s, or the world’s, people.

“In the early decades of” the twentieth century, Richard Rorty once wrote, “when an intellectual stepped back from his or her country’s history and looked at it through skeptical eyes, the chances were that he or she was about to propose a new political initiative.” That tradition is, it seems, nearly lost: today’s “academic Left,” Rorty wrote then, “has no projects to propose to America, no vision of a country to be achieved by building a consensus on the need for specific reforms.” For Rorty, however, that seems blamable on the intellectuals themselves—a kind of “blaming the victim,” or trahison des clercs, that is itself a betrayal of the insights of naturalism: by those lights, it is no more possible that large numbers of smart people should have inexplicably given up on their political efforts completely than that a flaming shrubbery could talk.

It’s that possibility that the British literary critic Terry Eagleton appears to have considered when, in his The Illusions of Postmodernism, he suggests that the gesture of denying that “there is any significant distinction between discourse and reality”—a denial specifically aimed at naturalism’s attempt to rule out the mysterious—may owe more to “the deadlocked political situation of a highly specific corner of the globe” than it does to the failures of the intellectuals. What I presume Eagleton is talking about is what Eric Alterman, writing in The Atlantic, called “the conundrum of a system that, as currently constructed, gives the minority party no strategic stake in sensible governance.” Very many of the features of today’s American government, that is, are designed not to produce good government, but rather to enable a minority to obstruct the doings of the majority—the famous “checks and balances.”

While American civic discourse often celebrates those supposed features, as I have written before, the work of historians like Manisha Sinha and Leonard Richards shows that in fact they are due not to the foresight of the Founding Fathers but to the need to protect the richest minority of the then-newborn republic: the slaveowners. It is no accident that, as Alterman says, it “has become easier and easier for a determined minority to throw sand in the gears of the legislative process”: the very structure of the Senate, for example, allows “the forty Republican senators … [who] represent barely a third of the US population” to block any legislation—even before one considers the more obscure senatorial tools, like the filibuster and the hold. These devices, as the work of historians shows, were originally developed in order to protect slavery; as Lawrence Goldstone put the point in the New Republic recently, during the Constitutional Convention of 1787, “slaveholders won a series of concessions,” among them “the makeup of the Senate” and the method of electing a president. These hangovers linger on, defending interests perhaps less obviously evil than the owners of slaves, but interests by and large not identical with those of the average citizen: today, those features are all check and no balance.

Such an explanation, I think, is more likely than Rorty’s stance of casting blame on people like Judith Butler, as odious as her beliefs really are. It might better explain how, for instance, as the writer Seymour Krim described in his essay “The American Novel Made Me,” intellectuals began “in the mid 50s [1950s] to regard the novel as a used-up medium,” so that the “same apocalyptic sense of possibility that we once felt in the U.S. novel now went into its examination”: what Krim calls “the game” of “literary criticism.” In that game, what matters is not the description of reality itself, but rather the methods of description by which “reality” is recorded: in line with Rorty’s idea of the intellectual turn against reality, not so much the photograph as the inner workings of the camera. Yet while that pursuit might appear to some ridiculous, even objectively harmful, blaming people, even smart people, for having become involved in such efforts when you have blocked their real path to advancement is like blaming butter for melting in the sun.

What all of this may show, in other words, is that for academic writing to become expansive again, as Joshua Rothman wishes, far more than academia may have to expand, though that is almost certainly part of it. What it will also require is a new band of writers and politicians, recommitted to the tenets of naturalism and determined, as Krim said of “the American realistic novel of the mid to late 1930s,” to be “‘truthful’ in recreating American life.” To Kristof or Rothman, that is a task unlikely even to be undertaken in our lifetimes, much less accomplished. Yet it ought to be acknowledged that Kristof’s and Rothman’s own efforts imply that a hunger exists that may not know its name—that a wanderer is abroad, holding aloft a lantern flickering not because of a rising darkness, but because of an onrushing dawn.

 

Get Lucky

All ends with beginnings
—Daft Punk, “Get Lucky,” Random Access Memories (2013)

No one in their right mind would have thought the shot was any good when it departed the man’s club; no one reading the man’s card, later, would have thought it anything less than majestic. Standing at the sixth tee on Streamsong’s Red Course, displaying a form that most professionals would have described as “slouchy,” the man searched after his pellet with worried eyes as it took off at an angle best referred to as “obtuse” in a direction usually noted in connection with the phrase “last seen.” The ball had not, in short, behaved in the manner the golfer had intended—even though the evidence of the scorecard might appear to differ.

The sixth on the Red is a short par three, with a pond to the right and a large bunker—so inviting to the pond’s resident alligators—intervening between the pond and the green. A dune to the left, slightly in front of the green, forms the base for the seventh hole’s tee box, and another dune lies farther on, creating a space of about twenty yards between the two that is hidden from the tee. The golfer’s ball had disappeared into this space, and since both dunes were covered with tall grass and brush, it seemed likely that we had seen that ball for the last time.

Somehow, however, as you have likely already guessed, the ball reappeared from behind the dune it had not buried itself in and sped, as if shot by an improbably goodhearted troll, towards the green’s flagstick, which it struck directly and then, guided inexorably by the laws of physics, buried itself underground like an especially amiable corpse. An “ace,” a hole-in-one: golf’s holy grail, with the kicker that it was not found (or created) by some wizened, ascetic practitioner. It was as if, instead of Don Quixote, Sancho Panza, seated on his ass, had charged the windmills. And won.

It was perhaps the most spectacular instantiation of the maddening phrase amateur golfers are so fond of repeating: “better to be lucky than good”—a phrase all too often invoked, not merely in golf, but in wider arenas also. Such as, for instance, the business of interpretation.

“If I say, ‘I promise to loan you five pounds,’ but as the words cross my lips have no intention of doing so, I have still promised,” writes the British literary critic Terry Eagleton. That’s because the “promising is built into the situation”—promising isn’t, Eagleton claims, “a ghostly impulse in my skull.” All that matters is whether I have said the words that make a promise, not whether I intended to promise—a view that is a kind of restatement of the golfer’s adage.

Think, for example, of the home run. “If a batter in a softball game hits a fair ball into the stands,” writes Walter Benn Michaels, “it is not evidence she hit a home run; it is a home run.” When it comes to home runs, the intention of the batter does not matter: as Michaels says, “[w]e do not care whether she was trying to hit a home run, or whether she even meant to swing.” Just as a promise is a promise, a home run is a home run; one reason, perhaps, why the foreign Marxist Eagleton could share a view of intention with a justice of the United States Supreme Court not known for his sympathies for the revolution: Antonin Scalia.

“What we are looking for when we construe a statute,” Scalia once wrote to describe his approach to interpreting the law, is not “what the legislature intended” but instead “what it said.” Scalia, like Eagleton, refuses to play the game of climbing inside another’s mind.

That’s why, in the words of one of Scalia’s readers (the literary scholar Walter Benn Michaels), what Scalia claims to be interested in is not “what the authors meant by the words … but in the meaning of the words themselves.” Scalia’s claim, in other words, is that words have a meaning that is independent of the uses a writer might put them towards.

It’s an approach that, like Eagleton’s description, has the virtue of appearing to wash its hands of the messy business of discovering the inside of an author’s mind and to focus instead on what might seem the only tangible evidence available: in this case, the words on the page. Interpreting a law ought to be as simple as recognizing an ace, Scalia wants to say. Intention shouldn’t matter.

Yet to erase intention from the act of construing meaning is, Michaels wants to say, as ridiculous as excluding water from Niagara Falls: without it, there’s nothing left. It’s a point Michaels (along with Stephen Knapp) made thirty years ago in an article entitled “Against Theory”: an article that contains its own knockdown anecdote. Instead of a sports analogy, however, Knapp and Michaels’ account is about a visit to the shore.

“Suppose that you’re walking along a beach,” this story goes, “and you come upon a curious sequence of squiggles in the sand.” On further examination, you find that the squiggles greatly resemble several lines of Wordsworth’s “A Slumber Did My Spirit Seal.” How, Michaels and Knapp ask, would we respond to such a discovery?

If we are curious, we might wonder what generated the squiggles—and while there are many possible candidates, all of them reduce to two categories. “You will either be ascribing these marks to some agent capable of intentions (the living sea, the haunting Wordsworth, etc.),” Knapp and Michaels say, “or you will count them as nonintentional effects of mechanical processes (erosion, percolation, etc.).” And so the point arrives: if it is demonstrated that the squiggles are produced by some natural cause, “will they still seem to be words?”

The answer clearly is no—the squiggles “will merely seem to resemble words.” As one who agrees with Knapp and Michaels’ view, Stanley Fish, put the point in a column for the New York Times: “The moment you decide that nature caused the effect,” whatever that effect is, “you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.” The sudden appearance of a seeming depiction of the True Cross on a water-stained wall, or of a human face on Mars, is only interesting insofar as we believe that some agent (whether God or aliens) caused the appearance; once we discover that it is only the residue of a mechanical failure in the pipes, or of an especially blurry photographic development, coupled with the human brain’s tendency to search for patterns, the phenomenon is no longer interesting. Messages are only meaningful inasmuch as they are produced by agents; anything else is not a message at all.

In that way, a home run (or an ace) can only be thought of as having a meaning insofar as it is a purposive act: only a home run hit by a god—that is, by a being who can hit (or not hit) home runs as he chooses—could possess meaning. We can know this because even the greatest of home run hitters cannot produce one at will (despite what is rumored about Babe Ruth and the 1932 World Series): hitting a home run requires the cooperation of sudden bursts of wind and other hidden forces beyond the control of any single person. In other words, hitting a home run, or a hole-in-one, might seem like the most intentional act possible—but it isn’t, as the “better to be lucky” adage ruefully communicates. Both lie somewhere between a face on Mars and a message, and probably closer to the former than the latter.

Antonin Scalia’s dream, in short, of a perfectly communicated law, one as easily interpreted as a hole-in-one, is an impossible one: anything so easily understood would not be worth the (minimal) effort it would take to understand. As Fish says, intention “is not something added to language; it is what must already be assumed if what are otherwise mere physical phenomena (rocks or scratch marks) are to be experienced as language.” Which, one supposes, is why holes-in-one are so fascinating to golfers: they are a moment in which the physical, non-human world appears to take an interest in our affairs, a moment where the divine appears, just briefly, to intervene. They can appear so because of their strange mixture of intention and random chance, which blurs a line otherwise so definitively drawn.

Perhaps that is the reason for the adage: it may be that human beings long for release from the consequences of their own actions—which is to say, release from a world so divided between human actions and natural events. If there is a link between that longing and the world we now have—one in which, for example, torture is acceptable behavior, but the connection between productivity and wages has been effectively severed—it is probably too much to say that such is the shared intention of the foreign Marxist and the Supreme Court justice. But I may be a poor kind of reader for the purposes of these gentlemen: unlike them, I would rather be good than lucky.