A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—John Milton. Paradise Lost, Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does offer quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, clearly Eagleton is better read in theology. Yet although it may be that Dawkins the real person is ignorant of the subtleties of the study of God, the rules of logic suggest that it is entirely possible for someone to be just as educated as Eagleton in theology—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed, but Eagleton wrote a review of someone else’s biography of him. His name was Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult, then, to think of Aquinas, the “Angelic Doctor,” as Catholics sometimes call him, as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by examining old books and telling people about what they found, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton says, for example, about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton properly says, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” What’s strange about Eagleton’s citation of Aquinas’ thought, however, is that it is usually counted as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief. Isaac Newton invoked the axiom—a version of which is sometimes called “Occam’s Razor”—in the Principia Mathematica, when the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton’s successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs, more or less, Aquinas’ thought that there is merely a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists seem to have held, however, a position closer to the view supposed to have been expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both camps, in other words, hold that there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is different: instead of accepting the single-world hypothesis and disputing only whether God belongs in that world’s description, Eagleton’s contention is with the “principle of parsimony” itself—with the claim that there can be merely a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befitting a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask this is because the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such a thought has not been limited to the eighteenth century, nor to the French side of the English Channel.

Indeed, there have long been similarly minded people across the Channel—there is reason, in fact, to think that the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as that of the French.

Support for the idea of a single legislative house, called unicameralism, is however not limited to European sources. The Marquis de Condorcet, for example, only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from Pennsylvania, the American state best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed towards water—and hence dies of dehydration. Franklin’s concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the Parliament of the United Kingdom passed the House of Lords Act, which ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker, claiming—much as Norris had claimed—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral legislatures and two more bicameral ones, according to a 2015 report by Betty Drexage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge depicting the sciences as opposed to something called “the humanities” is merely contingent, not a necessary part of reality. Terry Eagleton and other scholars in those disciplines may therefore advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.


Double Down

There is a large difference between our view of the US as a net creditor with assets of about 600 billion US dollars and BEA’s view of the US as a net debtor with total net debt of 2.5 trillion. We call the difference between these two equally arbitrary estimates dark matter, because it corresponds to assets that we know exist, since they generate revenue but cannot be seen (or, better said, cannot be properly measured). The name is taken from a term used in physics to account for the fact that the world is more stable than you would think if it were held together only by the gravity emanating from visible matter. In our measure the US owns about 3.1 trillion of unaccounted net foreign assets. [Emp. added]
—Ricardo Hausmann and Federico Sturzenegger.
“U.S. and Global Imbalances: Can Dark Matter Prevent a Big Bang?”
13 November 2005.

 

Last month WikiLeaks, the quasi-journalistic platform, released a series of emails that included (according to the editorial board of The Washington Post) “purloined emailed excerpts” of Hillary Clinton’s “paid speeches to corporate audiences” from 2013 to 2015—the years in which Clinton withdrew from public life while building a war-chest for her presidential campaign. In one of those speeches, she expressed what the board of the Post calls “her much-maligned view that ‘you need both a public and a private position’”—a position that, the Post harrumphs, “is playing as a confession of two-facedness but is actually a clumsy formulation of obvious truth”: namely, that politics cannot operate “unless legislators can deliberate and negotiate candidly, outside the glare of publicity.” To the Post, in other words, thinking that people ought to believe the same things privately as they loudly assert publicly is the sure sign of a naïveté verging on imbecility; almost certainly, the Post’s comments draw a dividing line in American life between those who “get” that distinction and those who don’t. Yet, while the Post sees fit to present Clinton’s comments as a sign of her status as “a knowledgeable, balanced political veteran with sound policy instincts and a mature sense of how to sustain a decent, stable democracy,” in point of fact they demonstrate—far more than Donald Trump’s ridiculous campaign—just how far from a “decent, stable democracy” the United States has become: because as those who, nearly a thousand years ago, first set in motion the conceptual revolution that resulted in democracy understood, there is no thought or doctrine more destructive of democracy than the idea that there is a “public” and a “private” truth.

That’s a notion that is likely difficult for the Post’s audience to grasp. Presumably educated at the nation’s finest schools, the Post’s readers can see no issue with Clinton’s position because the way towards it has been prepared for decades: it is, in fact, one of the foundational doctrines of current American higher education. Anyone who has attended an American institution of higher learning over the past several decades, in other words, is going to learn a version of Clinton’s belief that truth can come in two (or more) varieties, because that is what intellectuals of both the political left and the political right have asserted for more than half a century.

The African-American novelist James Baldwin asserted, for example, in 1949 that “literature and sociology are not the same,” while in 1958 the conservative political scientist Leo Strauss dismissed “the ‘scientific’ approach to society” as ignoring “the moral distinctions by which we take our bearings as citizens and”—in a now-regrettable choice of words—“as men.” It’s become so unconscious a belief among the educated, in fact, that even some scientists themselves have adopted this view: the biologist Stephen Jay Gould, for instance, towards the end of his life argued that science and religion constituted what he called “non-overlapping magisteria,” while John Carmody, a physician turned writer for The Australian, more prosaically—and seemingly modestly—asserted not long ago that “science and religion, as we understand them, are different.” The motives of those arguing for such a separation are usually thought to be inherently positive: agreeing to such a distinction, in fact, is nearly a requirement for admittance to polite society these days—which is probably why the Post can assert that Clinton’s admissions are a sign of her fitness for the presidency, instead of being disqualifying.

To the Post’s readers, in short, Hillary Clinton’s doubleness is a sign of her “sophistication” and “responsibility.” It’s a sign that she’s “one of us”—she, presumably unlike the trailer trash interested in Donald Trump’s candidacy, understands the point of Rashomon! (Though Kurosawa’s film does not—because logically it cannot—necessarily imply the view of ambiguity it’s often suggested it does: if Rashomon makes the claim that reality is ultimately unknowable, how can we know that?) But those who think thusly betray their own lack of sophistication—because, in the long history of humanity, this isn’t the first time that someone has tried to sell a similar doctrine.

Toward the height of the Middle Ages, the works of Aristotle were rediscovered in Europe, in part through contacts with Muslim thinkers like the twelfth-century Andalusian Ibn Rushd—better known in Europe as “Averroes.” Aristotle’s works were extremely exciting to students used to a steady diet of Plato and the Church Fathers—precisely because at points they contradicted, or at least appeared to contradict, those same Church Fathers. (Which was also, as it happened, what interested Ibn Rushd about Aristotle—though in his case, the Greek philosopher appeared to contradict Muslim, instead of Christian, sources.) That, however, left Aristotle enthusiasts with a problem: if they continued to read the Philosopher (Aristotle) and his Commentator (Averroes), they would embark on a collision course with the religious authorities.

In The Harmony of Religion and Philosophy, it seems, Averroes taught that “philosophy and revelation do not contradict each other, and are essentially different means of reaching the same truth”—a doctrine that his later Christian followers turned into what became known as the doctrine of “double truth.” According to a lecturer at the University of Paris in the thirteenth century named Siger of Brabant, for instance, “there existed a ‘double truth’: a factual or ‘hard’ truth that is reached through science and philosophy, and a ‘religious’ truth that is reached through religion.” To Brabant and his crowd, according to Encyclopedia Britannica, “religion and philosophy, as separate sources of knowledge, might arrive at contradictory truths without detriment to either.” (Which was not the same as Averroes’ point, however: the Andalusian scholar “taught that there is only one truth, but reached in two different ways, not two truths.”) Siger of Brabant, in other words, would have been quite familiar with Hillary Clinton’s distinction between the “public” and the “private.”

To some today, of course, that would merely point to how contemporary Siger of Brabant was, and how fuddy-duddy were his opponents—like Stephen Tempier, the bishop of Paris. As if he were some 1950s backwoods Baptist preacher denouncing Elvis or the Beatles, in 1277 Tempier denounced those who “hold that something is true according to philosophy but not according to the Catholic faith, as if there are two contrary truths.” Yet, while some might want to portray Brabant, thusly, as a forerunner to today’s tolerant societies, in reality it was Tempier’s insistence that truth comes in mono, not stereo, that (seemingly paradoxically) led to the relatively open society we at present enjoy.

People who today would make that identification, that is, might be uneasy if they knew that part of the reason Brabant believed his doctrine was his belief in “the superiority of philosophers to the common people,” or that Averroes himself warned “against teaching philosophical methods to the general populace.” Two truths, in other words, easily translated into two different kinds of people—and make no mistake, these doctrines did not imply that these two differing types were “separate but equal.” Instead, they were a means of asserting the superiority of the one type over the other. The doctrine of “double truth,” in other words, was not a forerunner to today’s easygoing societies.

To George Orwell, in fact, it was a prerequisite for totalitarianism: Brabant’s theory of “double truth,” in other words, may be the origin of the concept of “doublethink” as used in Orwell’s 1984. In that novel, published in 1949, “doublethink” is defined as

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

It was a point Orwell had been thinking about for some time: in a 1946 essay entitled “Politics and the English Language,” he had denounced “unscrupulous politicians, advertisers, religionists, and other doublespeakers of whatever stripe [who] continue to abuse language for manipulative purposes.” To Orwell, the doctrine of the “double truth” was just a means of sloughing off feelings of guilt or shame naturally produced by human beings engaged in such manipulations—a technique vital to totalitarian regimes.

Many in today’s universities, to be sure, have a deep distrust for Orwell: Louis Menand—who not only teaches at Harvard and writes for The New Yorker, but grew up in a Hudson Valley town named for his own great-grandfather—perhaps summed up the currently fashionable opinion of the English writer when he noted, in a drive-by slur, that Orwell was “a man who believed that to write honestly he needed to publish under a false name.” The British novelist Will Self, in turn, has attacked Orwell as the “Supreme Mediocrity”—and in particular takes issue with Orwell’s stand, in “Politics and the English Language,” in favor of the idea “that anything worth saying in English can be set down with perfect clarity such that it’s comprehensible to all averagely intelligent English readers.” It’s exactly that part of Orwell’s position that most threatens those of Self’s view.

Orwell’s assertion, Self says flatly, is simply “not true”—an assertion that Self explicitly ties to issues of minority representation. “Only homogeneous groups of people all speak and write identically,” Self writes against Orwell; in reality, Self says, “[p]eople from different heritages, ethnicities, classes and regions speak the same language differently, duh!” Orwell’s big argument against “doublethink”—and thusly, totalitarianism—is in other words just “talented dog-whistling calling [us] to chow down on a big bowl of conformity.” Thusly, “underlying” Orwell’s argument “are good old-fashioned prejudices against difference itself.” Orwell, in short, is a racist.

Maybe that’s true—but it may also be worth noting that the sort of “tolerance” advocated by people like Self can also be interpreted, and has been for centuries, as in the first place a direct assault on the principle of rationality, and in the second place an abandonment of millions of people. Such, at least, is how Thomas Aquinas would have received Self’s point. The Angelic Doctor, as the Church calls him, asserted that Averroeists like Brabant could be refuted on their own terms: the Averroeists said they believed, Aquinas remarked, that philosophy taught them that truth must be one, but faith taught them the opposite—a position that would lead those who held it to think “that faith avows what is false and impossible.” According to Aquinas, the doctrine of the “double truth” would imply that belief in religion was as much as admitting that religion was foolish—at which point you have admitted that there is only a single truth, and it isn’t a religious one. Hence, Aquinas’ point was that, despite what Orwell feared in 1984, it simply is not psychologically possible to hold two opposed beliefs in one’s head simultaneously: whenever someone is faced with a choice like that, that person will inevitably choose one side or the other.

In this, Aquinas was merely following his predecessors. To the ancients, this was known as the “law of non-contradiction”—one of the ancient world’s three fundamental laws of thought. “No one can believe that the same thing can (at the same time) be and not be,” as Aristotle himself put that law in the Metaphysics; nobody can (sincerely) believe one thing and its opposite at the same time. As the Persian Avicenna—demonstrating that this law was hardly limited to Europeans—put it centuries later: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” Or finally, as Arthur Schopenhauer wrote centuries after that in The World as Will and Representation (using the heavy-handed vocabulary of German philosophers), “every two concept-spheres must be thought of as either united or as separated, but never as both at once; and therefore, although words are joined together which express the latter, these words assert a process of thought which cannot be carried out” (emp. added). If anyone says the contrary, these philosophers implied, somebody’s selling something.

The point that Aristotle, Aquinas, Avicenna, and Orwell were making, in other words, is that the law of non-contradiction is essentially identical to rationality itself: a nearly foolproof method of performing the most basic of intellectual tasks—above all, telling honest and rational people from dishonest and duplicitous ones. And that, in turn, leads to a second refutation of Self’s argument: by abandoning the law of non-contradiction, people like Brabant (or Self) were also effectively setting themselves above ordinary people. As one commenter on Aquinas writes, the Good Doctor insisted that if something is true, then “it must make sense and it must make sense in terms which are related to the ordinary, untheological ways in which human beings try to make sense of things”—as Orwell saw, that position is related to the law of non-contradiction, and both are related to the notion of democratic government, because telling which candidate is the better one is the very foundation of that form of government. When Will Self attacks George Orwell for being in favor of comprehensibility, in other words, he isn’t attacking Orwell alone: he’s actually attacking Thomas Aquinas—and ultimately the very possibility of self-governance.

While the supporters of Hillary Clinton like to describe her opponent as a threat to democratic government, in other words, Donald Trump’s minor campaign arguably poses far less threat to American freedoms than hers does: from one point of view, Clinton’s accession to power actually threatens the basic conceptual apparatus without which there can be no democracy. Of course, given that during this presidential campaign virtually no attention has been paid, say, to the findings of social scientists (like Ricardo Hausmann and Federico Sturzenegger) and journalists (like those who reported on the Panama Papers) that—while many conservatives bemoan such gaps as the U.S. budget deficit or the trade imbalance—there is good reason to suspect those gaps are actually the result of billions (or trillions) of dollars being hidden by wealthy Americans and corporations beyond the reach of the Internal Revenue Service (an agency whose budget has been gutted in recent decades by conservatives)—well, let’s just say that there’s good reason to suspect that Hillary Clinton’s campaign may not be what it appears to be.

After all—she said so.

Double Vision

Ill deeds are doubled with an evil word.
—William Shakespeare. The Comedy of Errors, III.ii.

The century just past had been one of the most violent ever recorded—and also, perhaps, the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, putting in question truths that had long been established. Under these circumstances a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making, or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that thusly allowed new pieces of writing and new ideas to be imported and to flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of writers who probably would not otherwise have been studied—along with a lot of people who might not otherwise have done the studying—entered the curriculum; had the change of mind Fish’s remark signified not become standard in American classrooms, neither group might have.

I put things in the somewhat cumbersome way I do in the last sentence because of course Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist, James Baldwin, similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, sometime between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion then that there are two distinct “realms” of truth is a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as in Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in the rest of the curriculum—an experience that, as the case of Averroes before him had demonstrated, he was not alone in having. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” More than that: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth”—in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold to such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commentator, Vincent P. Benitez, has put it, either way Siger’s work highlighted just how the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not for F. Scott Fitzgerald later—posed a problem for the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it’s the one that Aristotle himself said was the most essential to any rational thought whatever. That rule is usually known as the Law of Non-Contradiction, traditionally placed as the second of the three classical rules of logic in the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—either A is A or it is not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commentators, Avicenna (Ibn Sina), put it in one of the rule’s most famous formulations: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not—aside from members of the Bush Administration legal team (and, for that matter, nearly every regime known to history) and viewers of the television show 24—against torture? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially if one is unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing it on the sly: each car’s software could detect when the car’s engine was being tested by government monitors, and if so could reduce the pollutants that engine was emitting. A little more than six months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me about this story, in relation to this talk of academics and monks, is a curious article put out by The New Yorker in October of 2015, entitled “An Engineering Theory of the Volkswagen Scandal.” In it, Paul Kedrosky—perhaps significantly, “a venture investor and a former equity analyst”—explains these events as perhaps not the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s idea of shades of gray.

Kedrosky takes his theory from a book by the sociologist Diane Vaughan about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched Challenger under colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the shuttle blew into thousands of pieces moments after liftoff. Kedrosky’s attempt at an analogy is that maybe the Volkswagen scandal developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s argument is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story: this one a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara, it’s plausible to think of the entire financial meltdown of 2007-8, which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) and two arms of the United States government (Fannie Mae and Freddie Mac), and is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments, as essentially analogous to Diane Vaughan’s view of the Challenger disaster—or Kedrosky’s view of Volkswagen’s cavalier thoughts about environmental regulation. To put it in another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions or Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes these men modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—like the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck, or the “Schroedinger’s Cat” thought experiment, whereby the cat is neither dead nor alive until the box is opened, or the fact that light is both a wave and a particle—designed to show how out-of-date the Law of Non-Contradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence in the same article of Arthur Little’s I previously cited. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remarks about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of this article of Little’s, written thirty years before his own.

But then again, if I have no obligation to be right, what does it matter?

Left Behind

Banks and credit companies are, strictly speaking, the direct source of their illusory “income.” But considered more abstractly, it is their bosses who are lending them money. Most households are net debtors, while only the very richest are net creditors. In an overall sense, in other words, the working classes are forever borrowing from their employers. Lending replaces decent wages, masking income disparities even while aggravating them through staggering interest rates.
—Kim Phillips-Fein. “Chapters of Eleven.” The Baffler No. 11, 1998.


Note: Since I began this blog by writing about golf, I originally wrote a short paragraph tying what follows to the FIFA scandal, on the perhaps-tenuous connection that the Clinton Foundation had accepted money from FIFA and Bill had been the chairman of the U.S. bid for the 2022 World Cup. But I think the piece works better without it.

“Why is it that women still get paid less than men for doing the same work?” presidential candidate Hillary Clinton asked recently in, of all places, Michigan. But the more natural question in the Wolverine State might seem to be the question a lot of economists are asking these days: “Why is everyone getting paid less?” Economists like Emmanuel Saez of the University of California, who says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928.” Or Nobel Prize winner Paul Krugman, who says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” But while it’s not difficult to imagine that Clinton asks the question she asks in a cynical fashion—in other words, to think that she is a kind of Manchurian candidate for Wall Street—it’s at least possible to think she asks it innocently. All Americans, says scholar Walter Benn Michaels, have been the victims of a “trick” over the last generation: the trick of responding to “economic inequality by insisting on the importance of … identity.” But how was the trick done?

The dominant pedagogy of the American university suggests one way: if it’s true that, as the professors say, reality is a function of the conceptual tools available, then maybe Hillary Clinton cannot see reality because she doesn’t have the necessary tools. As well she might not: in Clinton’s case, one might as well ask why a goldfish can’t see water. Raised in a wealthy Chicago suburb, on to Ivy League colleges; then the governor’s mansion in Little Rock, Arkansas and the White House; followed by Westchester County, then back to D.C. It’s true of course that Clinton did write a college thesis about Saul Alinsky’s community organizing tactics, so she cannot possibly be unfamiliar with the question of economic inequality. But it’s also easy to see how economics gets obscured in such places.

What’s perhaps stranger though is that economics, as a subject, should have become more obscure, not less, since Clinton left New Haven—and even if Clinton had been wholly ignorant of the subject, that doesn’t explain how she could then become her party’s candidate for president. Yet at about the same time that Clinton was at Yale, another young woman with bright academic credentials was living practically just down the road in Hartford, Connecticut—and the work she did has helped to ensure that, as Michaels says, “for the last 30 years, while the gap between the rich and the poor has grown larger, we’ve been urged to respect people’s identities.” That doesn’t mean of course that the story I am going to tell explains everything about why Hillary asked the question she asked in Michigan, instead of the one she should have asked, but it is, I think, illustrative—by telling this one story in depth, it becomes possible to understand how what Michaels calls the “trick” was pulled.

“In 1969,” Jane Tompkins tells us in “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History,” she “lived in the basement of a house on Forest Street in Hartford, Connecticut, which had belonged to Isabella Beecher Hooker—Harriet Beecher Stowe’s half-sister.” Living where she did sent Tompkins off on an intellectual journey that eventually led to the essay “Sentimental Power”—an essay that took up the question of why, as Randall Fuller observed not long ago in the magazine Humanities, “Uncle Tom’s Cabin was seen by most literary professionals as a cultural embarrassment.” Her conclusion was that Uncle Tom’s Cabin was squelched by a “male-dominated scholarly tradition that controls both the canon of American literature … and the critical perspective that interprets the canon for society.” To Tompkins, Uncle Tom’s Cabin was “repressed” on the basis of “identity”: Stowe’s work was called “trash”—as the Times of London did at the time it was published—because it was written by a woman.

To make her argument, however, required Tompkins to make several moves that go some way towards explaining why Hillary Clinton asks the question she asks, rather than the one she should ask. Most significant is Tompkins’ argument against the view she ascribes to her opponents: that “sentimental novels written by women in the nineteenth century”—like Uncle Tom’s Cabin—“were responsible for a series of cultural evils whose effects still plague us,” among them the “rationalization of an unjust economic order.” Already, Tompkins is telling her readers that she is going to argue against those critics who used Uncle Tom’s Cabin to discuss the economy; already, we are not far from Hillary Clinton’s question.

Next, Tompkins takes her critical predecessors to task for ignoring the novel’s “enormous popular success”: it was, as Tompkins points out, the first novel to sell “over a million copies.” So part of her argument concerns not only the bigotry but also the snobbishness of her opponents—an argument familiar enough to anyone who listens to right-wing talk radio. The distance from Tompkins’ argument to those who “argue” that quality is guaranteed by popularity, and vice versa—the old “if you’re so smart, why ain’t you rich” line—is about as far as from the last letter in this sentence to its period. So Tompkins deprecates the idea that value can be independent of “success”—the idea that there can be slippage between an economic system and reality.

Yet perhaps the largest step Tompkins takes on the road to Hillary’s question simply concerns how she ascribes criticisms of Uncle Tom’s Cabin to sexism—to Stowe’s status as a woman—despite the fact that perhaps the best-known critical text on the novel, James Baldwin’s 1949 essay “Everybody’s Protest Novel,” was not only written by a gay black man, but based its criticism of Stowe’s novel on rules originally applied to a white male author: James Fenimore Cooper, the object of Mark Twain’s scathing 1895 essay, “Fenimore Cooper’s Literary Offenses.” That essay, with which Twain sought to bury Cooper, furnished the critical precepts Baldwin uses to attempt to bury Stowe.

Stowe’s work, Baldwin says, is “a very bad novel” for two reasons: first, it is full of “excessive and spurious emotion.” Secondly, the novel “is activated by what might be called a theological terror,” so that “the spirit that breathes in this book … is not different from that spirit of medieval times which sought to exorcise evil by burning witches.” Both of these reasons derive from principles propounded by Twain in “Fenimore Cooper’s Literary Offenses.”

“Eschew surplusage” is number fourteen of Twain’s rules, so when Baldwin says Stowe’s writing is “excessive,” he is implicitly accusing Stowe of breaking this rule. Even Tompkins admits that Uncle Tom’s Cabin breaks this rule when she says that Stowe’s novel possesses “a needless proliferation of incident.” Then, number nine on Twain’s list is “that the personages of a tale shall confine themselves to possibilities and let miracles alone”—the rule that Baldwin invokes when he criticizes Uncle Tom’s Cabin for its “theological terror.” When burning witches, after all, it is necessary to have a belief in miracles—i.e., the supernatural—and certainly Stowe, who not only famously claimed that “God wrote” her novel but also suffused her novel with supernatural events, believed in the supernatural. So, if Baldwin—who, remember, was both black and homosexual—is condemning Stowe on the basis of rules originally used against a white male writer, it’s difficult to see how Stowe is being unfairly singled out on the basis of her sex. But that is what Tompkins says.

I take such time on these points because ultimately Twain’s rules go back much further than Twain himself—and it’s ultimately these roots that are both Tompkins’ object and, I suspect, the reason why Hillary asks the question she asks instead of the one she should. Twain’s ninth rule, concerning miracles, is more or less a restatement of what philosophers call naturalism: the belief “that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity,” according to the Stanford Encyclopedia of Philosophy. And the roots of that idea trace back to the original version of Twain’s fourteenth rule (“Eschew surplusage.”): Thomas Aquinas, in his Summa Theologica, gave one example of it when he wrote that if “a thing can be done adequately by means of one, it is superfluous to do it by several.” (In a marvelous economy, in other words, Twain reduced Aquinas’ rule—a version of which is sometimes known as “Occam’s Razor”—to two words.) So it’s possible to say that Baldwin’s criticisms of Stowe are actually the same criticism: that “excessive” writing leads to, or perhaps more worrisomely just is, a belief in the supernatural.

It’s this point that Tompkins ultimately wants to address—she calls Uncle Tom’s Cabin “the Summa Theologica of nineteenth-century America’s religion of domesticity,” after all. Also, Tompkins doesn’t try to defend Stowe against Baldwin on the same grounds that two other critics tried to defend Cooper against Twain. In an essay named “Fenimore Cooper’s Literary Defenses,” Lance Schachterle and Kent Ljungquist argue that Twain doesn’t do justice to Cooper because he doesn’t take into account the different literary climate of Cooper’s time. While “Twain valued economy of style,” they write, “such concision simply was not a characteristic of many early nineteenth-century novelists’ work.” They’re willing to allow, in other words, the merits of Twain’s rules—they’re just arguing that it isn’t fair to apply those rules to writers who could not have been aware of them. Tompkins however takes a different tack: she says that in Uncle Tom’s Cabin, “it is the spirit alone that is finally real.” According to Tompkins, the novel is not just unaware of naturalism: Uncle Tom’s Cabin actively rejects naturalism.

To Tompkins, Stowe’s anti-naturalism is somehow a virtue. Stowe’s rejection of naturalism leads her to recommend, Tompkins says, “not specific alterations in the current political and economic arrangements but rather a change of heart … as the necessary precondition for sweeping social change.” To Stowe, attempts to “alter the anti-abolitionist majority in the Senate,” for instance, are absurdities: “Reality, in Stowe’s view, cannot be changed by manipulating the physical environment.” Apparently, this is a point in Stowe’s favor.

Without naturalism and its corollaries—basic intellectual tools—it’s difficult to think a number of things: that all people are people, first of all. That is, members of a species that has had, more or less, the same cognitive abilities for at least the last 100,000 years or so—which implies that most people’s cognitive abilities aren’t much different than anyone else’s, nor much different from those of anyone in history. Which, one might say, is a prerequisite to running a democratic state—as opposed to, say, a monarchy or aristocracy, in which one person is better than another by blood right. But if naturalism is dead, then the growth of “identity” politics is perhaps easy to understand: without the conceptual category of “human being” available, other categories have to be substituted.

Without grouping votes on some basis, how could they be gathered into large enough clumps to make a difference? Hillary Clinton must ask for votes on the basis of some commonality between voters large enough to ensure her election. Assuming that she does, in fact, wish to be elected, it’s enlightening to observe that Clinton is appealing for votes on the basis of the next largest category after “human being”—“woman,” the category of 51 percent of the population according to most figures. That alone might explain why Hillary Clinton should ask “Why are women paid less?” rather than “Why is everyone paid less?”

Yet the effects of Tompkins’ argument, as I suspect will be drearily apparent to the reader by now, are readily observable in many more places in today’s world than Hillary Clinton’s campaign. Think of it this way: what else are contemporary phenomena like unpaid internships, “doing it for the exposure,” or just trying to live on a minimum wage or public assistance, but attempts to live without material substance—that is, attempts to live as a “spirit”? Or for that matter, what is credit card debt, which Kim Phillips-Fein was explaining in The Baffler as long ago as 1998 as what happened when “people began to borrow to make up for stagnant wages”? These are all matters in which what matters isn’t matter—i.e., the material—but the “spirit.”

In the same way, what else was the “long-time” Occupy Wall Street camper named “Ketchup” doing when she said, to Josh Harkinson at Mother Jones, that the “‘whole big desire for demands is something people want to use to co-opt us’” but, as Tompkins would put it, refusing to delineate “specific alterations in the current political and economic arrangements?” That’s why Occupy, as Thomas Frank memorably wrote in his essay, “To the Precinct Station,” “seems to have had no intention of doing anything except building ‘communities’ in public spaces and inspiring mankind with its noble refusal to have leaders.” The values described by Tompkins’ essay are, specifically, anti-naturalist: Occupy Wall Street, and its many, many sympathizers, was an anti-naturalist—a religious—movement.

It may, to be sure, be little wonder that feminists like Tompkins should look to intellectual traditions explicitly opposed to the intellectual project of naturalism—most texts written by women have been written by religious women. So have most texts written by most people everywhere—to study a “minority” group virtually requires studying texts written by people who believed in a supernatural being. It’s wholly understandable, then, that anti-naturalism should have become the default mode of people who claim to be on the “left.” But while it’s understandable, it’s no way to, say, raise wages. Whatever Jane Tompkins says about her male literary opponents, Harriet Beecher Stowe didn’t free anybody. Abraham Lincoln—by all accounts an atheist—did.

Which is Hillary Clinton’s model?