A Fable of a Snake

 

… Thus the orb he roamed
With narrow search; and with inspection deep
Considered every creature, which of all
Most opportune might serve his wiles; and found
The Serpent subtlest beast of all the field.
—John Milton. Paradise Lost. Book IX.
The Commons of England assembled in Parliament, [find] by too long experience, that
the House of Lords is useless and dangerous to the people of England …
—Parliament of England. “An Act for the Abolishing of the House of Peers.” 19 March 1649.

 

“Imagine,” wrote the literary critic Terry Eagleton some years ago in the first line of his review of the biologist Richard Dawkins’ book, The God Delusion, “someone holding forth on biology whose only knowledge of the subject is the Book of British Birds, and you have a rough idea of what it feels like to read Richard Dawkins on theology.” Eagleton could quite easily have left things there—the rest of the review contains not much more information, though if you have a taste for that kind of thing it does have quite a few more mildly entertaining slurs. Like a capable prosecutor, Eagleton arraigns Dawkins for exceeding his brief as a biologist: that is, for committing the scholarly heresy of speaking from ignorance. Worse, Eagleton appears to be right: of the two, clearly Eagleton is better read in theology. Yet although it may be that Dawkins the real person is ignorant of the subtleties of the study of God, the rules of logic suggest that it’s entirely possible that someone could be just as educated as Eagleton in theology—and yet hold views arguably closer to Dawkins’ than to Eagleton’s. As it happens, such a person not only once existed, but Eagleton wrote a review of someone else’s biography of him. His name is Thomas Aquinas.

Thomas Aquinas is, of course, the Roman Catholic saint whose writings stand, even today, as the basis of Church doctrine: according to Aeterni Patris, an encyclical delivered by Pope Leo XIII in 1879, Aquinas stands as “the chief and master of all” the scholastic Doctors of the Church. Just as, in other words, the scholar Richard Hofstadter called American Senator John Calhoun of South Carolina “the Marx of the master class,” so too could Aquinas be called the Marx of the Catholic Church: when a good Roman Catholic searches for the answer to a difficult question, Aquinas is usually the first place to look. It might be difficult, then, to think of Aquinas—the “Angelic Doctor,” as Catholics sometimes call him—as being on Dawkins’ side in this dispute: both Aquinas and Eagleton lived by examining old books and telling people what they found there, whereas Dawkins is, by training at any rate, a zoologist.

Yet, while in that sense it could be argued that the Good Doctor (as another of his Catholic nicknames puts it) is therefore more like Eagleton (who was educated in Catholic schools) than he is like Dawkins, I think it could equally well be argued that it is Dawkins who makes better use of the tools Aquinas made available. Not merely that, however: it’s something that can be demonstrated simply by reference to Eagleton’s own work on Aquinas.

“Whatever other errors believers may commit,” Eagleton says, for example, about Aquinas’ theology, “not being able to count is not one of them”: in other words, as Eagleton rightly notes, one of the aims of Aquinas’ work was to assert that “God and the universe do not make two.” That’s a reference to Aquinas’ famous remark, sometimes called the “principle of parsimony,” in his magisterial Summa Contra Gentiles: “If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.” But what’s strange about Eagleton’s citation of Aquinas’ thought is that it is usually thought of as a standard argument on Richard Dawkins’ side of the ledger.

Aquinas’ statement is, after all, sometimes held to be one of the foundations of scientific belief. Sometimes called “Occam’s Razor,” the axiom was invoked by Isaac Newton in the Principia Mathematica, where the great Englishman held that his work would “admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” Later still, in a lecture Albert Einstein gave at Oxford University in 1933, Newton’s successor affirmed that “the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” Through these lines of argument runs, more or less, Aquinas’ thought that there is merely a single world—it’s just that the scientists had a rather different idea of what that world is than Aquinas did.

“God for Aquinas is not a thing in or outside the world,” according to Eagleton, “but the ground of possibility of anything whatever”: that is, the world according to Aquinas is a God-infused one. The two great scientists seem to have held, however, a position closer to the view supposed to have been expressed to Napoleon by the eighteenth-century mathematician Pierre-Simon Laplace: that there is “no need of that hypothesis.” Both, in other words, think there is a single world; the distinction to be made is simply whether the question of God is important to that world’s description—or not.

One way to understand the point is to say that the scientists have preserved Aquinas’ way of thinking—the axiom sometimes known as the “principle of parsimony”—while discarding (as per the principle itself) that which was unnecessary: that is, God. Viewed in that way, the scientists might be said to be more like Aquinas than Aquinas—or, at least, than Terry Eagleton is like Aquinas. For Eagleton’s disagreement with Aquinas is different: instead of accepting the single-world hypothesis and merely disputing whether that world requires God, Eagleton’s quarrel is with the “principle of parsimony” itself—with the claim that there can be only a single explanation for the world.

Now, getting into that whole subject is worth a library, so we’ll leave it aside here; let me simply ask you to stipulate that there is a lot of discussion about Occam’s Razor and its relation to the sciences, and that Terry Eagleton (a—former?—Marxist) is both aware of it and bases his objection to Aquinas upon it. The real question to my mind is this one: although Eagleton—as befitting a political radical—does what he does on political grounds, is the argumentative move he makes here as legitimate and as righteous as he makes it out to be? The reason I ask is that the “principle of parsimony” is an essential part of a political case that’s been made for over two centuries—which is to say that, by abandoning Thomas Aquinas’ principle, people adopting Eagleton’s anti-scientific view are essentially conceding that political goal.

That political application concerns the design of legislatures: just as Eagleton and Dawkins argue over whether there is a single world or two, in politics the question of whether legislatures ought to have one house or two has occupied people for centuries. (Leaving aside such cases as Sweden, which once had—in a lovely display of the “diversity” so praised by many of Eagleton’s compatriots—four legislative houses.) The French revolutionary leader the Abbé Sieyès—author of the manifesto of the French Revolution, What Is the Third Estate?—likely put the case for a single house most elegantly: the abbé once wrote that legislatures ought to have one house instead of two on the grounds that “if the second chamber agrees with the first, it is useless; if it disagrees it is dangerous.” Many other French revolutionary leaders had similar thoughts: Mirabeau, for example, wrote that what are usually termed “second chambers,” like the British House of Lords or the American Senate, are often “the constitutional refuge of the aristocracy and the preservation of the feudal system.” The Marquis de Condorcet thought much the same. But such a thought has not been limited to the eighteenth century, nor to the French side of the English Channel.

Indeed, there have long been like-minded people across the Channel—there’s reason, in fact, to think that the French got the idea from the English in the first place, given that Oliver Cromwell’s “Roundhead” regime had abolished the House of Lords in 1649. (Though it was brought back after the return of Charles II.) In 1867’s The English Constitution, the writer and editor-in-chief of The Economist, Walter Bagehot, asserted that the “evil of two co-equal Houses of distinct natures is obvious.” George Orwell, the English novelist and essayist, thought much the same: in the early part of World War II he fully expected that the need for efficiency produced by the war would result in a government that would “abolish the House of Lords”—and in reality, when the war ended and Clement Attlee’s Labour government took power, one of Orwell’s complaints about it was that it had not made a move “against the House of Lords.” Suffice it to say, in other words, that the British tradition regarding the idea of a single legislative body is at least as strong as that of the French.

Support for the idea of a single legislative house, called unicameralism, is not, however, limited to European sources. For example, the French revolutionary leader the Marquis de Condorcet only began expressing support for the concept after meeting Benjamin Franklin in 1776—the Philadelphian having recently arrived in Paris from an American state, Pennsylvania, best known for its single-house legislature. (A result of 1701’s Charter of Privileges.) Franklin himself contributed to the literature surrounding this debate by introducing what he called “the famous political Fable of the Snake, with two Heads and one Body,” in which the said thirsty Snake, like Buridan’s Ass, cannot decide which way to proceed towards water—and hence dies of dehydration. Franklin’s concerns were taken up a century and a half later by the Nebraskan George Norris—ironically, a member of the U.S. Senate—who criss-crossed his state in the summer of 1934 (famously wearing out two sets of tires in the process) campaigning for the cause of unicameralism. Norris’ side won, and today Nebraska’s laws are passed by a single legislative house.

Lately, however, the action has swung back across the Atlantic: both Britain and Italy have sought to reform, if not abolish, their upper houses. In 1999, the Parliament of the United Kingdom passed the House of Lords Act, which ended a tradition that had lasted nearly a thousand years: the hereditary right of the aristocracy to sit in that house. More recently, Italian prime minister Matteo Renzi called “for eliminating the Italian Senate,” as Alexander Stille put it in The New Yorker, claiming—much as Norris had claimed—that doing so would “reduc[e] the cost of the political class and mak[e] its system more functional.” That proved, it seems, a bridge too far for many Italians, who forced Renzi out of office in 2016; similarly, despite the withering scorn of Orwell (who could be quite withering), the House of Lords has not been altogether abolished.

Nevertheless, the American professor of political science James Garner observed as early as 1910, citing the example of Canadian provincial legislatures, that among “English speaking people the tendency has been away from two chambers of equal rank for nearly two hundred years”—and the latest information indicates the same tendency at work worldwide. According to the Inter-Parliamentary Union—a kind of trade organization for legislatures—there are, for instance, currently 116 unicameral legislatures in the world, compared with 77 bicameral ones. That represents a change even from 2014, when there were three fewer unicameral ones and two more bicameral ones, according to a 2015 report by Betty Drexage for the Dutch government. Globally, in other words, bicameralism appears to be on the defensive and unicameralism on the rise—for reasons, I would suggest, that have much to do with widespread adoption of a perspective closer to Dawkins’ than to Eagleton’s.

Within the English-speaking world, however—and in particular within the United States—it is in fact Eagleton’s position that appears ascendant. Eagleton’s dualism is, after all, institutionally a far more useful doctrine for the disciplines known, in the United States, as “the humanities”: as advertisers know, product differentiation is a requirement for success in any market. Yet as the former director of the American National Humanities Center, Geoffrey Galt Harpham, has remarked, the humanities are “truly native only to the United States”—which implies that the dualist conception of knowledge that depicts the sciences as opposed to something called “the humanities” is merely contingent, not a necessary part of reality. Terry Eagleton and other scholars in those disciplines may therefore advertise themselves as on the side of “the people,” but the real history of the world may differ—which is to say, I suppose, that somebody’s delusional, all right.

It just may not be Richard Dawkins.

Double Down

There is a large difference between our view of the US as a net creditor with assets of about 600 billion US dollars and BEA’s view of the US as a net debtor with total net debt of 2.5 trillion. We call the difference between these two equally arbitrary estimates dark matter, because it corresponds to assets that we know exist, since they generate revenue but cannot be seen (or, better said, cannot be properly measured). The name is taken from a term used in physics to account for the fact that the world is more stable than you would think if it were held together only by the gravity emanating from visible matter. In our measure the US owns about 3.1 trillion of unaccounted net foreign assets. [Emp. added]
—Ricardo Hausmann and Federico Sturzenegger.
“U.S. and Global Imbalances: Can Dark Matter Prevent a Big Bang?”
13 November 2005.

 

Last month WikiLeaks, the quasi-journalistic platform, released a series of emails that included (according to the editorial board of The Washington Post) “purloined emailed excerpts” of Hillary Clinton’s “paid speeches to corporate audiences” from 2013 to 2015—the years in which Clinton withdrew from public life while building a war chest for her presidential campaign. In one of those speeches, she expressed what the board of the Post calls “her much-maligned view that ‘you need both a public and a private position’”—a position that, the Post harrumphs, “is playing as a confession of two-facedness but is actually a clumsy formulation of obvious truth”: namely, that politics cannot operate “unless legislators can deliberate and negotiate candidly, outside the glare of publicity.” To the Post, in other words, thinking that people ought to believe the same things privately as they loudly assert publicly is the sure sign of a naïveté verging on imbecility; almost certainly, the Post’s comments draw a dividing line in American life between those who “get” that distinction and those who don’t. Yet, while the Post sees fit to present Clinton’s comments as a sign of her status as “a knowledgeable, balanced political veteran with sound policy instincts and a mature sense of how to sustain a decent, stable democracy,” in point of fact they demonstrate—far more than Donald Trump’s ridiculous campaign—just how far from a “decent, stable democracy” the United States has become: because as those who, nearly a thousand years ago, first set in motion the conceptual revolution that resulted in democracy understood, there is no thought or doctrine more destructive of democracy than the idea that there is a “public” and a “private” truth.

That’s a notion that, likely, is difficult for the Post’s audience to encompass. Presumably educated at the nation’s finest schools, the Post’s audience can see no issue with Clinton’s position because the way towards it has been prepared for decades: it is, in fact, one of the foundational doctrines of current American higher education. Anyone who has attended an American institution of higher learning over the past several decades, in other words, is going to learn a version of Clinton’s belief that truth can come in two (or more) varieties, because that is what intellectuals of both the political left and the political right have asserted for more than half a century.

The African-American novelist James Baldwin asserted, for example, in 1949 that “literature and sociology are not the same,” while in 1958 the conservative political scientist Leo Strauss dismissed “the ‘scientific’ approach to society” as ignoring “the moral distinctions by which we take our bearings as citizens and”—in a now-regrettable choice of words—“as men.” It’s become so unconscious a belief among the educated, in fact, that even some scientists themselves have adopted this view: the biologist Stephen Jay Gould, for instance, towards the end of his life argued that science and religion constituted what he called “non-overlapping magisteria,” while John Carmody, a physician turned writer for The Australian, more prosaically—and seemingly modestly—asserted not long ago that “science and religion, as we understand them, are different.” The motives of those arguing for such a separation are usually thought to be inherently positive: agreeing to such a distinction, in fact, is nearly a requirement for admittance to polite society these days—which is probably why the Post can assert that Clinton’s admissions are a sign of her fitness for the presidency, instead of being disqualifying.

To the Post’s readers, in short, Hillary Clinton’s doubleness is a sign of her “sophistication” and “responsibility.” It’s a sign that she’s “one of us”—she, presumably unlike the trailer trash interested in Donald Trump’s candidacy, understands the point of Rashomon! (Though Kurosawa’s film does not—because logically it cannot—necessarily imply the view of ambiguity it’s often suggested it does: if Rashomon makes the claim that reality is ultimately unknowable, how can we know that?) But those who think thusly betray their own lack of sophistication—because, in the long history of humanity, this isn’t the first time that someone has tried to sell a similar doctrine.

Toward the height of the Middle Ages the works of Aristotle were rediscovered in Europe, in part through contact with Muslim thinkers like the twelfth-century Andalusian Ibn Rushd—better known in Europe as “Averroes.” Aristotle’s works were extremely exciting to students used to a steady diet of Plato and the Church Fathers—precisely because at points they contradicted, or at least appeared to contradict, those same Church Fathers. (Which was also, as it happened, what interested Ibn Rushd about Aristotle—though in his case, the Greek philosopher appeared to contradict Muslim, instead of Christian, sources.) That, however, left Aristotle enthusiasts with a problem: if they continued to read the Philosopher (Aristotle) and his Commentator (Averroes), they would embark on a collision course with the religious authorities.

In The Harmony of Religion and Philosophy, it seems, Averroes taught that “philosophy and revelation do not contradict each other, and are essentially different means of reaching the same truth”—a doctrine that his later Christian followers turned into what became known as the doctrine of “double truth.” According to a lecturer at the University of Paris in the thirteenth century named Siger of Brabant, for instance, “there existed a ‘double truth’: a factual or ‘hard’ truth that is reached through science and philosophy, and a ‘religious’ truth that is reached through religion.” To Brabant and his crowd, according to Encyclopedia Britannica, “religion and philosophy, as separate sources of knowledge, might arrive at contradictory truths without detriment to either.” (Which was not the same as Averroes’ point, however: the Andalusian scholar “taught that there is only one truth, but reached in two different ways, not two truths.”) Siger of Brabant, in other words, would have been quite familiar with Hillary Clinton’s distinction between the “public” and the “private.”

To some today, of course, that would merely point to how contemporary Siger of Brabant was, and how fuddy-duddy were his opponents—like Stephen Tempier, the bishop of Paris. As if he were some 1950s backwoods Baptist preacher denouncing Elvis or the Beatles, in 1277 Tempier denounced those who “hold that something is true according to philosophy but not according to the Catholic faith, as if there are two contrary truths.” Yet, while some might want to portray Brabant, thusly, as a forerunner to today’s tolerant societies, in reality it was Tempier’s insistence that truth comes in mono, not stereo, that (seemingly paradoxically) led to the relatively open society we at present enjoy.

People who today would make that identification, that is, might be uneasy if they knew that part of the reason Brabant believed his doctrine was his belief in “the superiority of philosophers to the common people,” or that Averroes himself warned “against teaching philosophical methods to the general populace.” Two truths, in other words, easily translated into two different kinds of people—and make no mistake, these doctrines did not imply that these two differing types were “separate but equal.” Instead, they were a means of asserting the superiority of the one type over the other. The doctrine of “double truth,” in other words, was not a forerunner to today’s easygoing societies.

To George Orwell, in fact, it was a prerequisite for totalitarianism: Brabant’s theory of “double truth,” in other words, may be the origin of the concept of “doublethink” as used in Orwell’s 1984. In that novel, published in 1949, “doublethink” is defined as

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

It was a point Orwell had been thinking about for some time: in a 1946 essay entitled “Politics and the English Language,” he had denounced “unscrupulous politicians, advertisers, religionists, and other doublespeakers of whatever stripe [who] continue to abuse language for manipulative purposes.” To Orwell, the doctrine of the “double truth” was just a means of sloughing off feelings of guilt or shame naturally produced by human beings engaged in such manipulations—a technique vital to totalitarian regimes.

Many in today’s universities, to be sure, have a deep distrust for Orwell: Louis Menand—who not only teaches at Harvard and writes for The New Yorker, but grew up in a Hudson Valley town named for his own great-grandfather—perhaps summed up the currently fashionable opinion of the English writer when he noted, in a drive-by slur, that Orwell was “a man who believed that to write honestly he needed to publish under a false name.” The British novelist Will Self, in turn, has attacked Orwell as the “Supreme Mediocrity”—and in particular takes issue with Orwell’s stand, in “Politics and the English Language,” in favor of the idea “that anything worth saying in English can be set down with perfect clarity such that it’s comprehensible to all averagely intelligent English readers.” It’s exactly that part of Orwell’s position that most threatens those of Self’s view.

Orwell’s assertion, Self says flatly, is simply “not true”—an assertion that Self explicitly ties to issues of minority representation. “Only homogeneous groups of people all speak and write identically,” Self writes against Orwell; in reality, Self says, “[p]eople from different heritages, ethnicities, classes and regions speak the same language differently, duh!” Orwell’s big argument against “doublethink”—and thusly, totalitarianism—is in other words just “talented dog-whistling calling [us] to chow down on a big bowl of conformity.” Thusly, “underlying” Orwell’s argument “are good old-fashioned prejudices against difference itself.” Orwell, in short, is a racist.

Maybe that’s true—but it may also be worth noting that the sort of “tolerance” advocated by people like Self can also be interpreted, and has been for centuries, as in the first place a direct assault on the principle of rationality, and in the second place an abandonment of millions of people. Such, at least, is how Thomas Aquinas would have received Self’s point. The Angelic Doctor, as the Church calls him, asserted that Averroists like Brabant could be refuted on their own terms: the Averroists said they believed, Aquinas remarked, that philosophy taught them that truth must be one, but faith taught them the opposite—a position that would lead those who held it to think “that faith avows what is false and impossible.” According to Aquinas, the doctrine of the “double truth” would imply that belief in religion amounted to admitting that religion was foolish—at which point you have admitted that there is only a single truth, and it isn’t a religious one. Hence, Aquinas’ point was that, despite what Orwell feared in 1984, it simply is not psychologically possible to hold two opposed beliefs in one’s head simultaneously: whenever someone is faced with a choice like that, that person will inevitably choose one side or the other.

In this, Aquinas was merely following his predecessors. To the ancients, this was known as the “law of non-contradiction”—one of the ancient world’s three fundamental laws of thought. “No one can believe that the same thing can (at the same time) be and not be,” as Aristotle himself put that law in the Metaphysics; nobody can (sincerely) believe one thing and its opposite at the same time. As the Persian Avicenna—demonstrating that this law was hardly limited to Europeans—put it centuries later: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” Or finally, as Arthur Schopenhauer wrote centuries after that in The World as Will and Representation (using the heavy-handed vocabulary of German philosophers), “every two concept-spheres must be thought of as either united or as separated, but never as both at once; and therefore, although words are joined together which express the latter, these words assert a process of thought which cannot be carried out” (emp. added). If anyone says the contrary, these philosophers implied, somebody’s selling something.
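
The law these writers keep invoking is, for what it’s worth, compact enough to state and prove formally; here is one way to render it in the Lean proof assistant—a sketch for illustration only, and obviously not how Aristotle, Avicenna, or Schopenhauer would have put it:

```lean
-- The law of non-contradiction: no proposition holds together with its negation.
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1   -- given P ∧ ¬P, its second half refutes its first
```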

The point that Aristotle, Aquinas, Avicenna, and Orwell were making, in other words, is that the law of non-contradiction is essentially identical to rationality itself: a nearly foolproof method of performing the most basic of intellectual tasks—above all, telling honest and rational people from dishonest and duplicitous ones. And that, in turn, would lead to their second refutation of Self’s argument: by abandoning the law of non-contradiction, people like Brabant (or Self) were also effectively setting themselves above ordinary people. As one commenter on Aquinas writes, the Good Doctor insisted that if something is true, then “it must make sense and it must make sense in terms which are related to the ordinary, untheological ways in which human beings try to make sense of things”—as Orwell saw, that position is related to the law of non-contradiction, and both are related to the notion of democratic government, because telling which candidate is the better one is the very foundation of that form of government. When Will Self attacks George Orwell for being in favor of comprehensibility, in other words, he isn’t attacking Orwell alone: he’s actually attacking Thomas Aquinas—and ultimately the very possibility of self-governance.

While the supporters of Hillary Clinton like to describe her opponent as a threat to democratic government, in other words, Donald Trump’s minor campaign arguably poses far less threat to American freedoms than hers does: from one point of view, Clinton’s accession to power actually threatens the basic conceptual apparatus without which there can be no democracy. Of course, given that during this presidential campaign virtually no attention has been paid, say, to the findings of social scientists (like Ricardo Hausmann and Federico Sturzenegger) and journalists (like those who reported on The Panama Papers) that while many conservatives bemoan such deficits as the U.S. budget or trade imbalances, in fact there is good reason to suspect that such gaps are actually the result of billions (or trillions) of dollars being hidden by wealthy Americans and corporations beyond the reach of the Internal Revenue Service (an agency whose budget has been gutted in recent decades by conservatives)—well, let’s just say that there’s good reason to suspect that Hillary Clinton’s campaign may not be what it appears to be.

After all—she said so.

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter Is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not necessarily so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel Prize laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
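
To make the “no memory” claim concrete, here is a minimal simulation sketch—not Fama’s actual 1965 test, and with invented parameters: it builds a price series out of cumulated random shocks and then checks whether one day’s change says anything about the next.

```python
# A minimal sketch of the "random walk" idea: a price series built from
# cumulated random shocks, where past changes carry no information about
# future ones. Parameters are illustrative, not drawn from Fama's data.
import numpy as np

rng = np.random.default_rng(0)
daily_changes = rng.normal(loc=0.0, scale=1.0, size=10_000)  # random shocks
prices = 100 + np.cumsum(daily_changes)                      # the cumulated walk

# Does yesterday's change predict today's? Correlate each change with the next.
lag_correlation = np.corrcoef(daily_changes[:-1], daily_changes[1:])[0, 1]
print(f"correlation between successive changes: {lag_correlation:+.4f}")
# Prints a value near zero: the series of price changes "has no memory."
```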

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether the managers of mutual funds can actually do what they claim they can do, and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s fairly close to the idea explored, for instance, by psychologist Amos Tversky and his co-authors in the paper “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
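
Here is a small sketch of that argument—with invented numbers, not data from either paper discussed above: give every “fund” (or shooter) an identical, fixed chance of success, and some of them will still post records that look like skill.

```python
# Sketch: identical skill, divergent records. Every "fund manager" (or
# shooter) succeeds with the same probability each period; with enough of
# them, a few will look brilliant anyway. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_players, n_trials, p = 2_000, 20, 0.5    # 2,000 coin-flippers, 20 tries each

results = rng.random((n_players, n_trials)) < p   # True = a "hit" / an "up year"
totals = results.sum(axis=1)

print(f"best record: {totals.max()} of {n_trials}")
print(f"players with 15 or more successes: {(totals >= 15).sum()}")
# A handful of "hot hands" and "star funds" appear through luck alone.
```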

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools, and that of their seemingly liberal opponents in departments of the humanities, begins to converge.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here is itself something like the claim that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County vs. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less-often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of the United Kingdom decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly,” wrote Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter if anyone votes or not—which is to say that to very many smart, and supposedly “liberal” or “leftist,” people, the very notion that voting has any kind of “intrinsic value” to it at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

The Smell of Victory

To see what is in front of one’s nose needs a constant struggle.
George Orwell. “In Front of Your Nose”
    Tribune, 22 March 1946

 

Who says country clubs are irony-free? When I walked into Medinah Country Club’s caddie shack on the first day of the big member-guest tournament, the Medinah Classic, Caddyshack, that vicious class-based satire of country club stupidity, was on the television. These days, far from being patterned after Caddyshack’s Judge Smails (a pompous blowhard), most country club members are capable of reciting the lines of the movie nearly verbatim. Not only that—they’ve internalized the central message of the film, the one indicated by the “snobs against the slobs” tagline on the movie poster: the moral that, as another 1970s cinematic feat put it, the way to proceed through life is to “trust your feelings.” Like a lot of films of the 1970s—Animal House, written by the same team, is another example—Caddyshack’s basic idea is don’t trust rationality: i.e., “the Man.” Yet, as the phenomenon of country club members who’ve memorized Caddyshack demonstrates, that signification has now become so utterly conventional that even the Man doesn’t trust the Man’s methods—which is how, just like O.J. Simpson’s jury, the contestants in this year’s Medinah Classic were prepared to ignore probabilistic evidence that somebody was getting away with murder.

That’s a pretty abrupt jump-cut in style, to be sure, particularly in regards to a sensitive subject like spousal abuse and murder. Yet, to get caught up in the (admittedly horrific) details of the Simpson case is to miss the forest for the trees—at least according to a short 2010 piece in the New York Times entitled “Chances Are,” by the Schurman Professor of Applied Mathematics at Cornell University, Steven Strogatz.

The professor begins by observing that the prosecution spent the first ten days of the six-month-long trial establishing that O.J. Simpson abused his wife, Nicole. From there, as Strogatz says, prosecutors like Marcia Clark and Christopher Darden introduced statistical evidence that showed that abused women who are murdered are usually killed by their abusers. Thus, as Strogatz says, the “prosecution’s argument was that a pattern of spousal abuse reflected a motive to kill.” Unfortunately, however, the prosecution did not highlight a crucial point about their case: Nicole Brown Simpson was dead.

That, you might think, ought to be obvious in a murder trial, but because the prosecution did not underline the fact that Nicole was dead, the defense, led on this issue by famed trial lawyer Alan Dershowitz, could (and did) argue that “even if the allegations of domestic violence were true, they were irrelevant.” As Dershowitz would later write, the defense claimed that “‘an infinitesimal percentage—certainly fewer than 1 of 2,500—of men who slap or beat their domestic partners go on to murder them.’” Ergo, even if battered women do tend to be murdered by their batterers, that didn’t mean that this battered woman (Nicole Brown Simpson) was murdered by her batterer, O.J. Simpson.

In a narrow sense, of course, Dershowitz’s claim is true: most abused women, like most women generally, are not murdered. So it is absolutely true that very, very few abusers are also murderers. But as Strogatz says, the defense’s argument was a very slippery one.

It’s true in other words that, as Strogatz says, “both sides were asking the jury to consider the probability that a man murdered his ex-wife, given that he previously battered her.” But to a mathematician like Strogatz, or his statistician colleague I.J. Good—who first tackled this point publicly—this is the wrong question to ask.

“The real question,” Strogatz writes, is: “What’s the probability that a man murdered his ex-wife, given that he previously battered her and she was murdered?” That’s the question that applied in the Simpson case: Nicole Simpson had been murdered. If the prosecution had asked the right question in turn, the answer to it—that is, the real question, not the poorly-asked or outright fraudulent questions put by both sides at Simpson’s trial—would have been revealed to be about 90 percent.

To run through the math used by Strogatz quickly (but still capture the basic points): of a sample of 100,000 battered American women, we could expect about 5 of them to be murdered by random strangers any given year, while we could also expect about 40 of them to be murdered by their batterers. So of the 45 battered women murdered each year per 100,000 battered women, about 90 percent of them are murdered by their batterers.
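
Rendered as code, the same arithmetic looks like this (the 5 and 40 per 100,000 figures are simply the ones cited above, used for illustration):

```python
# Strogatz's point as arithmetic: condition on the fact that the battered
# woman *was* murdered. Figures are per 100,000 battered women per year,
# as cited in the paragraph above; illustrative only.
murdered_by_batterer = 40   # battered women killed by their batterers
murdered_by_stranger = 5    # battered women killed by someone else

murdered_total = murdered_by_batterer + murdered_by_stranger

# P(killed by her batterer | she was battered AND she was murdered)
p_batterer_given_murdered = murdered_by_batterer / murdered_total
print(f"P(batterer did it | battered and murdered) = "
      f"{p_batterer_given_murdered:.0%}")   # about 89%, i.e. roughly 90 percent
```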

In a very real sense then, the prosecution lost its case against O.J. because it did not present its probabilistic evidence correctly. Interviewed years later for the PBS program, Frontline, Robert Ball, a lawyer for one of the jurors on the Simpson case, Brenda Moran, said that according to his client, the jury thought that for the prosecution “to place so much stock in the notion that because [O.J.] engaged in domestic violence that he must have killed her, created such a chasm in the logic [that] it cast doubt on the credibility of their case.” Or as one of the prosecutors, William Hodgman, said after the trial, the jury “didn’t understand why the prosecution spent all that time proving up the history of domestic violence,” because they “felt it had nothing to do with the murder case.” In that sense, Hodgman admitted, the prosecution failed because they failed to close the loop in the jury’s understanding—they didn’t make the point that Strogatz, and Good before him, say is crucial to understanding the probabilities here: the fact that Nicole Brown Simpson had been murdered.

I don’t know, of course, what role distrust of scientific or rational thought played in the jury’s ultimate decision—certainly, as has been discovered in recent years, it is the case that crime laboratories have often been accused of “massaging” the evidence, particularly when it comes to African-American defendants. As Spencer Hsu reported in the Washington Post, for instance, just in April of this year the “Justice Department and FBI … formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence.” Yet, while it’s obviously true that bad scientific thought—i.e., “thought” that isn’t scientific at all—ought to be quashed, it’s also, I think, true that there is a pattern of distrust of that kind of thinking that is not limited to jurors in Los Angeles County, as I discovered this weekend at the Medinah Classic.

The Classic is a member-guest tournament, and member-guests are golf tournaments consisting of two-man teams made up of a country club member and his guest. They are held by country clubs around the world, played according to differing formats but usually dependent upon each golfer’s handicap index: the number assigned by the United States Golf Association after the golfer pays a fee and enters his scores into the USGA’s computer system. (It’s similar to the way that carrying weights allows horses of different sizes to race each other, or the way different weight classes allow boxing or wrestling matches to be fair.) Medinah’s member-guest tournament is, nationally, one of the biggest because of the number of participants: around 300 golfers every year, divided into three flights according to handicap index (i.e., ability). Since Medinah has three golf courses, it can easily accommodate so many players—but what it can’t do is adequately police the tournament’s entrants, as the golfers I caddied for discovered.

Our tournament began with the member shooting an amazing 30, after handicap adjustment, on the front nine of Medinah’s Course Three, the site of three U.S. Opens, two PGA Championships, numerous Western Opens (back when they were called Western Opens) and a Ryder Cup. A score of 30 for nine holes, on any golf course, is pretty strong—but how much more so on a brute like that course, and how much more so again in the worst of the Classic’s three flights? I thought so, and said so to the golfers I was caddying for after our opening round. They were kind of down about the day’s ending—especially the guest, who had scored an eight on our last hole of the day. Despite that I told my guys that, on the strength of the member’s opening 30, if we weren’t just outright winning the thing we were top three. As it turned out, I was correct—but despite the amazing showing we had on the tournament’s first day, we would soon discover that there was no way we could catch the leading team.

In a handicapped tournament like the Classic, what matters isn’t so much what any golfer scores, but what he scores in relation to the handicap index. Thus, the member half of our member-guest team hadn’t actually shot a 30 on the front side of Medinah’s Course 3—which certainly would have been a record for an amateur tournament, and I think a record for any tournament at Medinah ever—but instead had shot a 30 considering the shots his handicap allowed. His score, to use the parlance, wasn’t gross but rather net: my golfer had shot an effective six under par according to the tournament rules.
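
For readers who don’t play golf, here is the gross-versus-net arithmetic in miniature—a stripped-down sketch with hypothetical stroke counts, ignoring the slope and course-rating adjustments a real course handicap involves:

```python
# Gross vs. net, in miniature. In a handicapped event a player subtracts the
# strokes his handicap allows from what he actually shot; the tournament is
# scored on the remainder. The numbers below are hypothetical, not the member's.
def net_score(gross: int, strokes_received: int) -> int:
    """Net score = strokes actually taken minus handicap strokes received."""
    return gross - strokes_received

front_nine_par = 36
gross = 44              # a mid-40s nine, roughly as described above (hypothetical)
strokes_received = 14   # handicap strokes allotted on that nine (hypothetical)

net = net_score(gross, strokes_received)
print(f"gross {gross}, net {net}, {front_nine_par - net} under par net")
# gross 44, net 30, 6 under par net -- an ordinary-looking nine becomes a 30.
```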

Naturally, such an amazing score might raise questions: particularly when it’s shot as part of the flight reserved for the worst players. Yet my player has a ready explanation for why he was able to shoot a low number (in the mid 40s) and yet still have a legitimate handicap: he has a legitimate handicap—a congenital deformity in one of his ankles. The deformation is not enough to prevent him from playing, but as he plays—and his pain medications wear off—he usually tires, which is to say that he can very often shoot respectable scores on the first nine holes and horrific scores on the second nine. His actual handicap, in other words, causes his golf handicap index to be slightly askew from reality.

Thus, he is like the legendary Sir Gawain, who according to Arthurian legend tripled his strength at noon but faded as the sun set—a situation that the handicap system is ill-designed to handle. Handicap indexes presume roughly the same ability at the beginning of a round as at the end, so in this Medinah member’s case his index understates his ability at the beginning of his round while wildly overstating it at the end. In a sense then it could perhaps be complained that this member benefits from the handicap system unfairly—unless you happen to consider that the man walks in nearly constant pain every day of his life. If that’s “gaming the system” it’s a hell of a way to do it: getting a literal handicap to pad your golf handicap would obviously be absurd.

Still, the very question suggests the great danger of handicapping systems, which is one reason people have gone to the trouble of working out ways to tell whether someone is taking advantage of the handicap system—without using telepathy or some other kind of magic to divine the golfer’s real intent. The most important of the people who have investigated the question is Dean L. Knuth—the former Senior Director of Handicapping for the United States Golf Association, a man whose nickname is the “Pope of Slope.” In that capacity Mr. Knuth developed the modern handicapping system—and a way to calculate the odds of a person of a given handicap shooting a particular score.

In this case, my information is that the team that ended up winning our flight—and won the first round—had a guest player who represented himself as possessing a handicap index of 23 when the tournament began. For those who aren’t aware, a 23 is a player who does not expect to play better than a score of ninety during a round of golf, when the usual par for most courses is 72. (In other words, a 23 isn’t a very good player.) Yet this same golfer shot a gross 79 during his second round for what would have been a net 56: a ridiculous number.

Knuth’s calculations reflect that: they put the odds of someone shooting a score so far below his handicap on the order of several tens of thousands to one, especially in tournament conditions. In other words, while my player’s handicap wasn’t a straightforward depiction of his real ability, it did adequately capture his total worth as a golfer. This other player’s handicap, though, sure appeared to many—including one of the assistant professionals who went out to watch him play—to be highly suspect.
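
I don’t have Knuth’s tables, but a crude back-of-the-envelope sketch conveys the shape of the argument. Assume—and these are loud assumptions for illustration, not USGA data or Knuth’s actual model—that a true 23-handicap’s gross scores cluster around the mid-90s with a standard deviation of about four strokes; then ask how often such a player cards a gross 79 or better:

```python
# Back-of-the-envelope sketch of why a gross 79 from a purported 23-handicap
# raises eyebrows. The scoring model below is assumed for illustration; it is
# NOT Dean Knuth's actual model or USGA data.
from math import erf, sqrt

def normal_cdf(x: float, mean: float, stdev: float) -> float:
    """P(X <= x) for a normal distribution with the given mean and stdev."""
    return 0.5 * (1 + erf((x - mean) / (stdev * sqrt(2))))

# Assumption: a true 23-handicap averages in the mid-90s from round to round,
# with a standard deviation of about four strokes.
mean_gross, stdev = 95.0, 4.0

p = normal_cdf(79, mean_gross, stdev)   # chance of a 79 or better in one round
print(f"chance of shooting 79 or better: {p:.6%} (about 1 in {1/p:,.0f})")
# Under these assumptions, on the order of one chance in tens of thousands --
# the same ballpark as the tens-of-thousands-to-one odds mentioned above.
```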

That assistant professional, who is a five handicap himself, said that after watching this guest play he would hesitate to play him straight up, much less give the fellow ten or more shots: the man not only struck his shots crisply, but also took on shots that even professionals fear, like trying to get a ball to stop on a downslope. So for the gentleman to claim to be a 23 handicap seemed, to this assistant professional, incredibly, monumentally improbable. Observation then seems to confirm what Dean Knuth’s probability tables would suggest: the man was playing with an improper handicap.

What happened as the tournament went along also appears to indicate that at least Medinah’s head professional was aware that the man’s reported handicap index wasn’t legitimate: after the first round, in which that player shot a score as suspect as his second-round 79 (I couldn’t discover what it was precisely), his handicap was adjusted downwards, and after that second-round 79 more shots were knocked off his initial index. Yet although there was a lot of complaining on the part of fellow competitors, no one was willing to take any kind of serious action.

Presumably, this inaction rested on a theory similar to the legal system’s presumption of innocence: maybe the man just really had “found his swing” or “practiced really hard” or gotten a particularly good lesson just before arriving at Medinah’s gates. But to my mind, such a presumption ignores, as the O.J. jury did, the really salient issue: in the Simpson case, that Nicole was dead; in the Classic, that this team was leading the tournament. That was the crucial piece of data: it wasn’t just that this team could be leading the tournament, it was that they were leading the tournament—in just the same way that, while you couldn’t use statistics to predict whether O.J. Simpson would murder his ex-wife Nicole, you certainly could use statistics to say that O.J. probably murdered Nicole once Nicole was murdered.

The fact, in other words, that this team of golfers was winning the tournament was itself evidence that they were cheating—why would anyone cheat if they weren’t going to win as a result? That doesn’t mean, to be sure, that winning constitutes conclusive evidence of fraud—just as probabilistic evidence doesn’t mean that O.J. must have killed Nicole—but it does indicate the need for further investigation, and suggests what presumption an investigation ought to pursue. Particularly given the size of the lead: by the end of the second day, that team was leading the next competitors by more than twenty shots.

Somehow, however, it seems that Americans have lost the ability to see the obvious. Perhaps that’s through the influence of films like Star Wars (1977) or Caddyshack (1980): both, interestingly, feature scenes where one of the good guys puts on a blindfold in order to “get in touch” with some cosmic quality that lies far outside the visible spectrum. (The original Caddyshack script actually cites the Star Wars scene.) But it is not necessary to blame just those films themselves: as Thomas Frank says in his book The Conquest of Cool, one of America’s outstanding myths represents the world as a conflict between all that is “tepid, mechanical, and uniform” and the possibility of a “joyous and even a glorious cultural flowering.” In the story told by cultural products like Caddyshack, it’s by casting aside rational methods—like Luke Skywalker casting aside his targeting computer in the trench of the Death Star—that we are all going to be saved. (Or, as Rodney Dangerfield’s character puts it at the end of Caddyshack, “We’re all going to get laid!”) That, I suppose, might be true—but perhaps not for the reasons advertised.

After all, once we’ve put on the blindfold, how can we be expected to see?

A Momentary Lapse

 

The sweets we wish for turn to loathed sours
Even in the moment that we call them ours.
—William Shakespeare, The Rape of Lucrece

“I think caddies are important to performance,” wrote ESPN’s Jason Sobel late Friday night. “But Reed/DJ each put a family member on bag last year with no experience. Didn’t skip a beat.” To me, Sobel’s tweet appeared to question the value of caddies, and so I wrote to Mr. Sobel and put it to him that sure, F. Scott Fitzgerald could write before he met Maxwell Perkins—but without Perkins on Fitzgerald’s bag, no Gatsby. Still, I don’t mention the point simply to crow about what happened: how Dustin Johnson missed a putt to tie Jordan Spieth in regulation, a putt that arguably a professional caddie would have kept Johnson from hitting so quickly. What’s important about Spieth’s victory is that it might finally have killed the idea of “staying in the moment”: an un-American idea that has been far too prevalent for the past two decades or more, not only in golf but in American life.

Anyway, the idea has been around a while. “Staying in the moment,” as so much in golf does, likely traces at least as far back as Tiger Woods’ victory at Augusta National in 1997. Sportswriters then liked to make a big deal out of Tiger’s Thai heritage: supposedly, his mother’s people, with their Buddhist religion, helped Tiger to focus. It was a thesis that to my mind was more than a little racially suspect—it seemed to me that Woods won a lot of tournaments because he hit the ball further than anyone else at the time, and matched that length with an amazing short game. That was the story that got retailed at the time, however.

Back in 2000, for instance, Robert Wright of the online magazine Slate was peddling what he called the “New Age Theory of Golf.” “To be a great golfer,” Wright said, “you have to do what some Eastern religions stress—live in the present and free yourself of aspiration and anxiety.” “You can’t be angry over a previous error or worried about repeating it,” Wright went on to say. You are just supposed to “move forward”—and, you know, forget about the past. Or to put it another way, success is determined by how much you can ignore reality.

Now, some might say that it was precisely this attitude that won the U.S. Open for Team Jordan Spieth. “I always try to stay in the present,” Jordan Spieth’s caddie Michael Greller told The Des Moines Register in 2014, when Greller and Spieth returned to Iowa to defend the title the duo had won in 2013. But a close examination of their behavior on the course, by Shane Ryan of Golf Digest, questions that interpretation.

Spieth, Ryan writes, “kept up a neurotic monologue with Michael Greller all day, constantly seeking and receiving reassurance about the wind, the terrain, the distance, the break, and god knows what else.” To my mind, this hardly counts as the usual view of “staying in the present.” The usual view, I think, was better exemplified by their opponents.

During the course of his round, Ryan reports, Johnson “rarely spoke with his brother and caddie Austin.” Johnson’s relative silence appears to me to be much closer to Wright’s passive, “New Age,” reality-ignoring ideal. Far closer, anyway, than the constant squawking going on in Spieth’s camp.

It’s a difference, I realize, that is easy to underestimate—but a crucial one nonetheless. Just how significant that difference is might best be revealed by an anecdote the writer Gary Brecher tells about the aftermath of the second Iraq War: about being in the office with a higher-ranking woman who declared her support for George Bush’s war. When Brecher said to her that perhaps these rumors of Saddam’s weapons could be exaggerated—well, let’s read Brecher’s description:

She just stared at me a second—I’ve seen this a lot from Americans who outrank me; they never argue with you, they don’t do arguments, they just wait for you to finish and then repeat what they said in the beginning—she said, “I believe there are WMDs.”

It’s a stunning description. Not only does it sum up what the Bush Administration did in the run-up to the Iraq War, but it also describes something of a fact of life around workplaces and virtually everywhere else in the United States these days: two Americans, especially ones of differing classes, rarely talk to one another anymore. But they sure are pretty passive.

Americans, however, aren’t supposed to think of themselves as passive—at least, they didn’t use to think of themselves that way. The English writer George Orwell described the American attitude in an essay about the quintessentially American author, Mark Twain: a man who “had his youth and early manhood in the golden age of America … when wealth and opportunity seemed limitless, and human beings felt free, indeed were free, as they had never been before and may not be again for centuries.” In those days, Orwell says, “at least it was NOT the case that a man’s destiny was settled from his birth,” and if “you disliked your job you simply hit the boss in the eye and moved further west.” Those older Americans did not simply accept what happened to them, the way the doctrine of “staying in the present” teaches.

If so, then perhaps Spieth and Greller, despite what they say, are bringing back an old American custom by killing an alien one. In a nation where 400 Americans are worth more than the poorest 150 million Americans, as I learned Sunday night after the Open by watching Robert Reich’s film, Inequality for All, it may not be a moment too soon.

Fine Points

 

Whenever asked a question, [John Lewis] ignored the fine points of whatever theory was being put forward and said simply, “We’re gonna march tonight.”
—Taylor Branch, Parting the Waters: America in the King Years, Vol. 1

 

 

“Is this how you build a mass movement?” asked social critic Thomas Frank in response to the Occupy Wall Street movement: “By persistently choosing the opposite of plain speech?” To many in the American academy, the debate is over—and plain speech lost. More than fifteen years ago, articles like philosopher Martha Nussbaum’s 1999 criticism of professor Judith Butler, “The Professor of Parody,” and political scientist James Miller’s late-1999 piece “Is Bad Writing Necessary?” were published—and both sank like pianos. Since then it’s seemed settled that (as Nussbaum wrote at the time) the way “to do … politics is to use words in a subversive way.” Yet at a minimum this pedagogy diverts attention from, as Nussbaum says, “the material condition of others”—and at worst, as professor Walter Benn Michaels suggests, it turns the academy into “the human resources department of the right, concerned that the women [and other minorities] of the upper middle class have the same privileges as the men.” Supposing then that bad writers are not simply playing their part in class war, what is their intention? I’d suggest that subversive writing is best understood as a parody of a tactic used, but not invented, by the civil rights movement: packing the jails.

“If the officials threaten to arrest us for standing up for our rights,” Martin Luther King, Jr. said in a January 1960 speech in Durham, North Carolina, “we must answer by saying that we are willing and prepared to fill up the jails of the South.” King’s speech spoke directly to the movement’s most pressing problem: bailing out protestors cost money. In response, Thomas Gaither, a field secretary for the Congress of Racial Equality (CORE), devised a solution: he called it “Jail, No Bail.” Taylor Branch, the historian, explained the concept in Parting the Waters: America in the King Years 1954-63: the “obvious advantage of ‘jail, no bail’ was that it reversed the financial burden of protest, costing the demonstrators no cash while obligating the white authorities to pay for jail space and food.” All protestors had to do was get arrested and serve the time—and thereby cost the state their room and board.

Yet Gaither did not invent the strategy. “Packing the jails” as a strategy began, so far as I can tell, in October of 1909; so reports the Minnesotan Harvey O’Connor in his 1964 autobiography Revolution in Seattle: A Memoir. All that summer, the Industrial Workers of the World (the “Wobblies”) had been engaged in a struggle against “job sharks”: companies that claimed to procure jobs for their clients after the payment of a fee—and then failed to deliver. (“It was customary,” O’Connor wrote, “for the employment agencies … to promote a rapid turnover”: the companies would take the money and either not produce the job, or the company that “hired” the newly-employed would fire them shortly afterwards.) In the summer of 1909 those companies succeeded in getting public assemblies and street speaking by the Wobblies banned, and legal challenges proved impossible. So in October of that year the Wobblies “sent out a call” in the labor organization’s newspaper, the Industrial Worker: “Wanted: Men To Fill The Jails of Spokane.”

Five days later, the Wobblies held a “Free Speech Day” rally and managed to get 103 men arrested. By “the end of November 500 Wobblies were in jail.” Through the “get arrested” strategy, the laborers filled the city’s jail “to bursting and then a school was used for the overflow, and when that filled up the Army obligingly placed a barracks at the city’s command.” And so the Wobblies’ strategy was working: the “jail expenses threatened to bankrupt the treasuries of cities even as large as Spokane.” As the American writer and teacher Archie Binns put the same point in 1942, it “was costing thousands of dollars every week to feed” the prisoners, and the city was becoming “one big jail.” In this way, the protestors threatened to “eat the capitalistic city out of house and home”—and so the “city fathers” of Spokane backed down, instituting a permitting system for public marches and assemblies. “Packing the jails” won.

What, however, has this history to do with the dispute between plain-speakers and bad writers? In the first place it demonstrates how our present-day academy would much rather talk about Martin Luther King, Jr. and CORE than Harvey O’Connor and the Wobblies. Writing ruefully about left-wing professors like himself, Walter Benn Michaels observes, “We would much rather get rid of racism than get rid of poverty”; elsewhere he says, “American liberals … carry on about racism and sexism in order to avoid doing so about capitalism.” Despite the fact that, historically, the civil rights movement borrowed a lot from the labor movement, today’s left doesn’t have much to say about that—nor much about today’s inequality. So connecting the tactics of the Wobblies to those of the civil rights movement matters because it demonstrates continuity where today’s academy wants to see, just as much as any billionaire does, a sudden break.

That isn’t the only point of bringing up the “packing the jails” tactic however—the real point is that writers like Butler are making use of a version of this argument without publicly acknowledging it. As laid out by Nussbaum and others, the unsaid argument or theory or idea or concept (whatever name you’d have for it) behind “bad” writing is a version of “packing the jails.” To be plain: that by filling enough academic seats (with the right sort of person) political change will somehow automatically follow, through a kind of osmosis.

Admittedly, no search of the writings of America’s professors, Judith Butler or otherwise, will discover a “smoking gun” regarding that idea—if there is one, presumably it’s buried in an email or in a footnote in a back issue of Diacritics from 1978. The thesis can only be discovered in the nods and understandings of the “professionals.” On what warrant, then, can I claim that it is their theory? If that’s the plan, how do I know?

My warrant comes from a man who knew, as Garry Wills of Northwestern says, something about “the plain style”: Abraham Lincoln. To Lincoln, the only possible method of interpretation is a judgment of intent: as Lincoln said in his “House Divided” speech of 1858, “when we see a lot of framed timbers, different portions of which we know have been gotten out at different times and places by different workmen,” and “we see these timbers joined together, and see they exactly make the frame of a house or a mill,” why, “in such a case we find it impossible not to believe” that everyone involved “all understood each other from the beginning.” Or as Walter Benn Michaels has put the same point: “you can’t do textual interpretation without some appeal to authorial intention.” In other words, when we see a lot of people acting in similar ways, we should be able to make a guess about what they’re trying to do.

In the case of Butlerian feminists—and, presumably, other kinds of bad writers—bad writing allows them to “do politics in [the] safety of their campuses,” as Nussbaum says, by “making subversive gestures through speech.” Instead of “packing the jails,” this pedagogy, this bad writing, teaches “packing the academy”: the theory presumably being that, just as Spokane could only jail so many people, the academy can only hold so many professors. (Itself an issue, because there are a lot fewer professorships available these days, and they are only liable to grow fewer.) Since, as Abraham Lincoln said about what he saw in the late 1850s, we can only make a guess—but we must make a guess—about what those intentions are, I’d hazard that my guess is more or less what these bad writers have in mind.

Unfortunately, in the hands of Butler and others, bad writing is only a parody—it mimics the act of going to jail while ignoring the very real differences between that act and attempting to become the, say, Coca-Cola Professor of Rhetoric at Wherever State. A black person willing to go to jail in the South in 1960 was a person with a great deal of courage—and still would be today. But it’s also true that it’s unlikely the courageous civil rights volunteers would have conceived of, much less carried out, the attempt to “pack the jails” without the prior example of the Wobblies—just as it might be argued that, without the sense of being of the same race and gender as their oppressors, the Wobblies might not have had the courage to pack the jails of Spokane. So it certainly could be argued that the work of the “bad writers” is precisely to make those connections—and so create the preconditions for similar movements in the future.

Yet, as George Orwell might have asked, “where’s the omelette?” Where are the people in jail—and where are the decent pay and equal rights that might follow them? Butler and other “radical” critics don’t produce either: I am not reliably informed of Judith Butler’s arrest record, but I’d suspect it’s not much. So Nussbaum’s observation that Butler’s pedagogy “instructs people that they can, right now, without compromising their security, do something bold” [emp. added] wasn’t entirely snide then, and it looks increasingly prescient now. That’s what Nussbaum means when she says that “Butlerian feminism is in many ways easier than the old feminism”: it is a path that demonstrates to middle-class white people, women especially, just how they can “dissent” without giving up their status or power. Nussbaum thus implies that feminism, or any other kind of “leftism,” practiced along Butler’s lines is, quite literally, physically cowardly—and, perhaps more importantly, her observation suggests just why the “left,” such as it is, is losing.

For surely the “Left” is losing: as many, many people besides Walter Benn Michaels have written, economic inequality has risen, and is rising, even as the sentences and jargon of today’s academics have become more complex—and the academy’s own power slowly dissolves into a mire of adjunct professorships and cut-rate labor policies. Emmanuel Saez of the University of California says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928,” and Nobel Prize winner Paul Krugman says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” We witness the rise of plutocrats on a scale not seen since the fall of the Bourbons—or perhaps even the Antonines.

That is not to suggest, to be sure, that individual “bad writers” are or are not cowards: merely to be a black person or a woman requires levels of courage many people will never be aware of in their lifetimes. Yet Walter Benn Michaels is surely correct when he says that, as things now stand, the academic left in the United States today is largely “a police force for, [rather] than an alternative to, the right,” insofar as it “would much rather get rid of racism [or sexism] than get rid of poverty.” Fighting “power” by means of a program of bad writing, rather than good writing—that is, writing designed to appeal to great numbers of people—is so obviously stupid it could only have been invented by smart people.

The objection is that giving up the program of Butlerian bad writing requires giving up the program of “liberation” her prose suggests: what Nussbaum calls Butler’s “radical libertarian” dream of the “sadomasochistic rituals of parody.” Yet as Thomas Frank has suggested, it’s just that kind of libertarian dream that led the United States into this mess in the first place: America’s recent troubles have, Frank says, resulted from “the political power of money”—a political power that was achieved courtesy of “a philosophy of liberation as anarchic in its rhetoric as Occupy [Wall Street] was in reality” [emp. Frank’s]. By rejecting that dream, American academics might obtain “food, schools, votes” and (possibly) less rape and violence for women and men alike. But how?

Well, I have a few ideas—but you’d have to read some plain language.

Telegraphing A Punch

From his intense interest in the telegraph, Lincoln developed what Garry Wills calls a ‘telegraphic eloquence,’ with a ‘monosyllabic and staccato beat’ that gave Lincoln a means of ‘say[ing] a great deal in the fewest words.’
—Sarah Luria, Capital Speculations: Writing and Building Washington D.C.



“Well,” I said, “I wanted to indicate to you that, while I had not shot the distance”—that is, used a rangefinder to measure it, since I don’t have one—“yet still I felt pretty confident about it.” We were on the eighteenth hole at Butler, which was our ninth hole. Mr. B., the member I was working for, was rebuking me for breaking one of the cardinal rules of looping: a good caddie never adds a caveat. All yardages are “154” or “87” or such; never “about 155” or “just shy of 90.” He was right: what I’d said was “either 123 or 24,” which isn’t exact in a narrow sense, but conveyed what I wanted it to convey. The point of the lesson, however, became apparent only recently: in a broader sense because of the recent election, and in a more particular sense because of a party I’d attended shortly before.

The party was in Noble Square, one of those Chicago neighborhoods now infested with hipsters, women working at non-profits, and “all that dreary tribe of high-minded women and sandal-wearers and bearded fruit-juice drinkers who come flocking toward the smell of ‘progress’ like bluebottles to a dead cat,” as George Orwell once referred to them. The food provided by the host was not just vegetarian but vegan, and so just that much more meritorious: the woman whose guest I was seemed to imply that the host, whose food we were eating, was somehow closer to the godhead than the rest of us. Of that “us,” there were not many; most were women, and of the three men present, one was certainly gay, another wasn’t obviously so, and the third was me.

All of which sounds pretty awful, and almost certainly I’m going to catch hell for writing it, so I’ll hurry to explain that everything wasn’t all bad. There was, for instance, a dog. So often, when attending such affairs, it’s necessary to listen to some explanation of the owner’s cat: how it cannot eat such and such, or needs such and such medicines, or how it was lost and became found—stories that, later, can become mixed up with said owner’s parallel stories of boyfriends discovered and discarded, so that it’s unclear whether the male of the species discovered in an alley outside of the Empty Bottle was of the human or feline variety. But a dog is usually a marker of some healthy sense of irony about one’s beliefs: dogs, or so some might say, encourage a sociality that precludes the aloofness necessary for genocide.

I bring this up because of one of the topics of conversation: one of the women present, a social worker by training, was discussing the distinction between her own leadership style and that of one of her colleagues, both being supervisors of some kind. One of the other women present noted that it was difficult at times for women to assert that role: women, she said, often presented their ideas with diffidence and hesitancy. And so, as the women around her nodded in agreement, ideas that were actually better sometimes got ignored in favor of ideas delivered with more confidence: lost, that is, to what was essentially a better rhetorical technique.

As it happens, all of the women at said party—all of them, presumably, Obama voters—were white, which is why the party came back to mind just after the presidential election, upon reading a short John Cassidy piece in the New Yorker. There, Cassidy points out that, in the aftermath, a lot of ink has been spilled on the subject of why Obama lost the white male vote so disastrously—by twenty-seven percentage points—while simultaneously winning the women’s vote: Obama “carried the overall female vote by eleven [percentage] points,” Cassidy notes. (The final total was 55% to 44%.) Yet the story of the “gender gap” papers over another disconnect. It’s true that Obama won women as a distinct set of people, but he actually lost one subset of them: white women.

Romney’s success among white women, in fact, is one reason why he did better among women in general than did the previous Republican candidate for the presidency, John McCain. In 2008, “Obama got fifty-six per cent of the female vote and John McCain got forty-three per cent,” which, even if the margin of error is taken into account, at least indicates that Obama made no further inroads into the women’s vote beyond those he’d already made four years earlier. And the reason Obama did not capture a greater percentage of women voters was that he lost white women: “Romney got fifty-six per cent of the white female vote; Obama got just forty-two per cent.” The question to be put to this fact is, obviously, what distinguishes white women from other women, or at least what it was about Mitt Romney that appealed to white women, and only white women.

Clearly there must be some commonality between Caucasian women and men: “Surely,” Cassidy says, “many of the same factors that motivated white male Romney supporters played into the decision-making of white female Romney supporters.” After all, both “are shaped by the same cultural and economic environment.” The explanatory factor that Cassidy finds is economic: “The reason Romney did a bit better … among white women is probably that they viewed him as a stronger candidate on economic issues, which are as important to women as to men.” Obviously, though, this only raises the further question: why did white women find Romney more persuasive on economic matters?

That, to be sure, is a very large question, and I wouldn’t presume to answer it here. What I would suggest, however, is that there might be a relationship between those results, the first presidential debate, and the anecdote with which I began this piece. If white people voted more for Romney, it might be because of one of the qualities he exhibited in the first presidential debate in early October: as many commenters noted afterward, Romney was “crisp and well-organized” in the words of one pundit, while President Obama was “boring [and] abstract” in the words of another. Romney was gut-punching, while Obama was waving his fists in the air.

Maybe Romney’s performance in that debate illustrated just why he became the candidate of white America: he, at least in early October if not elsewhere during the campaign, understood and used a particular rhetorical style to greater effect than Obama did. And, apparently, it worked: he did have greater success than Obama among the audience attuned to that appeal. In turn, what my experience at Butler might—perhaps—illustrate is just how that audience gets constructed: by what mechanisms is power sorted, and how are those mechanisms distributed?

If Romney achieved success among white Americans because he was briefer and more to the point than Obama—itself, it may be, rather a whopper—then it remains to understand just why that style should appeal to that particular audience. And maybe the experience of caddies demonstrates just why that should be so: as I mentioned at the start, the habit of saying “154” instead of “155 or so” is something that’s inculcated early among caddies, and while it might be taught in, say, the public schools, there’s something wonderfully clarifying about learning it when there’s money at stake. White kids exposed to caddieing, in other words, probably take the lesson more to heart than other kids.

All of this, of course, is a gossamer of suppositions, but perhaps there’s something to it. Yet, if there is, Obama’s election in the teeth of Romney’s success among Caucasian voters may also forecast something else: that the old methods of doing things are no longer as significant, and may even no longer be viable. In caddieing, the old way of saying a yardage isn’t as important: since everyone has a rangefinder (a device that tells the distance), it isn’t nearly as important to suppress uncertainty about what the actual distance is, because there isn’t much uncertainty any more. (This actually isn’t quite true, because rangefinders themselves aren’t as accurate as, say, pin location sheets are, and anyway they still can’t tell you what club to hit.) Maybe, in part because of technologies and the rest, in the future it won’t be as necessary to compress information, and hence the ability to do so won’t be as prized. If so, we’ll exist in a world that’s unrecognizable to a lot of people in this one.

Including, I suspect, Mr. B.