Double Down

There is a large difference between our view of the US as a net creditor with assets of about 600 billion US dollars and BEA’s view of the US as a net debtor with total net debt of 2.5 trillion. We call the difference between these two equally arbitrary estimates dark matter, because it corresponds to assets that we know exist, since they generate revenue but cannot be seen (or, better said, cannot be properly measured). The name is taken from a term used in physics to account for the fact that the world is more stable than you would think if it were held together only by the gravity emanating from visible matter. In our measure the US owns about 3.1 trillion of unaccounted net foreign assets. [Emp. added]
—Ricardo Hausmann and Federico Sturzenegger.
“U.S. and Global Imbalances: Can Dark Matter Prevent a Big Bang?”
13 November 2005.

 

Last month Wikileaks, the quasi-journalistic platform, released a series of emails that included (according to the editorial board of The Washington Post) “purloined emailed excerpts” of Hillary Clinton’s “paid speeches to corporate audiences” from 2013 to 2015—the years in which Clinton withdrew from public life while building a war-chest for her presidential campaign. In one of those speeches, she expressed what the board of the Post calls “her much-maligned view that ‘you need both a public and a private position’”—a position that, the Post harrumphs, “is playing as a confession of two-facedness but is actually a clumsy formulation of obvious truth”: namely, that politics cannot operate “unless legislators can deliberate and negotiate candidly, outside the glare of publicity.” To the Post, in other words, thinking that people ought to believe the same things privately as they loudly assert publicly is the sure sign of a naïveté verging on imbecility; almost certainly, the Post’s comments draw a dividing line in American life between those who “get” that distinction and those who don’t. Yet, while the Post sees fit to present Clinton’s comments as a sign of her status as “a knowledgeable, balanced political veteran with sound policy instincts and a mature sense of how to sustain a decent, stable democracy,” in point of fact they demonstrate—far more than Donald Trump’s ridiculous campaign—just how far from a “decent, stable democracy” the United States has become: because, as those who first set in motion the conceptual revolution that resulted in democracy understood nearly a thousand years ago, there is no thought or doctrine more destructive of democracy than the idea that there is a “public” and a “private” truth.

That notion is likely difficult for the Post’s audience to grasp. Presumably educated at the nation’s finest schools, the Post’s readers can see no issue with Clinton’s position because the way toward it has been prepared for decades: it is, in fact, one of the foundational doctrines of current American higher education. Anyone who has attended an American institution of higher learning over the past several decades, in other words, will have learned a version of Clinton’s belief that truth can come in two (or more) varieties, because that is what intellectuals of both the political left and the political right have asserted for more than half a century.

The African-American novelist James Baldwin asserted, for example, in 1949 that “literature and sociology are not the same,” while in 1958 the conservative political scientist Leo Strauss dismissed “the ‘scientific’ approach to society” as ignoring “the moral distinctions by which we take our bearings as citizens and”—in a now-regrettable choice of words—“as men.” It’s become so unconscious a belief among the educated, in fact, that even some scientists themselves have adopted this view: the biologist Stephen Jay Gould, for instance, towards the end of his life argued that science and religion constituted what he called “non-overlapping magisteria,” while John Carmody, a physician turned writer for The Australian, more prosaically—and seemingly modestly—asserted not long ago that “science and religion, as we understand them, are different.” The motives of those arguing for such a separation are usually thought to be inherently positive: agreeing to such a distinction, in fact, is nearly a requirement for admittance to polite society these days—which is probably why the Post can assert that Clinton’s admissions are a sign of her fitness for the presidency, instead of being disqualifying.

To the Post’s readers, in short, Hillary Clinton’s doubleness is a sign of her “sophistication” and “responsibility.” It’s a sign that she’s “one of us”—she, presumably unlike the trailer trash interested in Donald Trump’s candidacy, understands the point of Rashomon! (Though Kurosawa’s film does not—because logically it cannot—imply the view of ambiguity it is often said to: if Rashomon makes the claim that reality is ultimately unknowable, how can we know that?) But those who think thusly betray their own lack of sophistication—because, in the long history of humanity, this isn’t the first time that someone has tried to sell a similar doctrine.

Toward the height of the Middle Ages, the works of Aristotle were rediscovered in Europe, in part through contacts with Muslim thinkers like the twelfth-century Andalusian Ibn-Rushd—better known in Europe as “Averroes.” Aristotle’s works were extremely exciting to students used to a steady diet of Plato and the Church Fathers—precisely because at points they contradicted, or at least appeared to contradict, those same Church Fathers. (Which was also, as it happened, what interested Ibn-Rushd about Aristotle—though in his case, the Greek philosopher appeared to contradict Muslim, instead of Christian, sources.) That, however, left Aristotle enthusiasts with a problem: if they continued to read the Philosopher (Aristotle) and his Commentator (Averroes), they would embark on a collision course with the religious authorities.

In The Harmony of Religion and Philosophy, it seems, Averroes taught that “philosophy and revelation do not contradict each other, and are essentially different means of reaching the same truth”—a doctrine that his later Christian followers turned into what became known as the doctrine of “double truth.” According to a lecturer at the University of Paris in the thirteenth century named Siger of Brabant, for instance, “there existed a ‘double truth’: a factual or ‘hard’ truth that is reached through science and philosophy, and a ‘religious’ truth that is reached through religion.” To Brabant and his crowd, according to Encyclopedia Britannica, “religion and philosophy, as separate sources of knowledge, might arrive at contradictory truths without detriment to either.” (Which was not the same as Averroes’ point, however: the Andalusian scholar “taught that there is only one truth, but reached in two different ways, not two truths.”) Siger of Brabant, in other words, would have been quite familiar with Hillary Clinton’s distinction between the “public” and the “private.”

To some today, of course, that would merely point to how contemporary Siger of Brabant was, and how fuddy-duddy were his opponents—like Stephen Tempier, the bishop of Paris. As if he were some 1950s backwoods Baptist preacher denouncing Elvis or the Beatles, in 1277 Tempier denounced those who “hold that something is true according to philosophy but not according to the Catholic faith, as if there are two contrary truths.” Yet, while some might want to portray Brabant, thusly, as a forerunner to today’s tolerant societies, in reality it was Tempier’s insistence that truth comes in mono, not stereo, that (seemingly paradoxically) led to the relatively open society we at present enjoy.

People who today would make that identification, that is, might be uneasy if they knew that part of the reason Brabant held his doctrine was his belief in “the superiority of philosophers to the common people,” or that Averroes himself warned “against teaching philosophical methods to the general populace.” Two truths, in other words, easily translated into two different kinds of people—and make no mistake, these doctrines did not imply that the two differing types were “separate but equal.” Instead, they were a means of asserting the superiority of the one type over the other. The doctrine of “double truth,” in other words, was not a forerunner to today’s easygoing societies.

To George Orwell, in fact, it was a prerequisite for totalitarianism: Brabant’s theory of “double truth,” in other words, may be the origin of the concept of “doublethink” as used in Orwell’s 1984. In that novel, published in 1949, “doublethink” is defined as

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

It was a point Orwell had been thinking about for some time: in his 1946 essay “Politics and the English Language,” he had denounced political language “designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.” To Orwell, the doctrine of the “double truth” was just a means of sloughing off the feelings of guilt or shame naturally produced by human beings engaged in such manipulations—a technique vital to totalitarian regimes.

Many in today’s universities, to be sure, have a deep distrust for Orwell: Louis Menand—who not only teaches at Harvard and writes for The New Yorker, but grew up in a Hudson Valley town named for his own great-grandfather—perhaps summed up the currently fashionable opinion of the English writer when he noted, in a drive-by slur, that Orwell was “a man who believed that to write honestly he needed to publish under a false name.” The British novelist Will Self, in turn, has attacked Orwell as the “Supreme Mediocrity”—and in particular takes issue with Orwell’s stand, in “Politics and the English Language,” in favor of the idea “that anything worth saying in English can be set down with perfect clarity such that it’s comprehensible to all averagely intelligent English readers.” It’s exactly that part of Orwell’s position that most threatens those who share Self’s view.

Orwell’s assertion, Self says flatly, is simply “not true”—a claim that Self explicitly ties to issues of minority representation. “Only homogeneous groups of people all speak and write identically,” Self writes against Orwell; in reality, Self says, “[p]eople from different heritages, ethnicities, classes and regions speak the same language differently, duh!” Orwell’s big argument against “doublethink”—and thusly, against totalitarianism—is, in other words, just “talented dog-whistling calling [us] to chow down on a big bowl of conformity.” Thusly, “underlying” Orwell’s argument “are good old-fashioned prejudices against difference itself.” Orwell, in short, is a racist.

Maybe that’s true—but it may also be worth noting that the sort of “tolerance” advocated by people like Self can also be interpreted, and has been for centuries, as in the first place a direct assault on the principle of rationality, and in the second place an abandonment of millions of people. Such, at least, is how Thomas Aquinas would have received Self’s point. The Angelic Doctor, as the Church calls him, asserted that Averroeists like Brabant could be refuted on their own terms: the Averroeists said they believed, Aquinas remarked, that philosophy taught them that truth must be one, but faith taught them the opposite—a position that would lead those who held it to think “that faith avows what is false and impossible.” According to Aquinas, the doctrine of the “double truth” would imply that belief in religion was as much as admitting that religion was foolish—at which point you have admitted that there is only a single truth, and it isn’t a religious one. Hence, Aquinas’ point was that, despite what Orwell feared in 1984, it simply is not psychologically possible to hold two opposed beliefs in one’s head simultaneously: whenever someone is faced with a choice like that, that person will inevitably choose one side or the other.

In this, Aquinas was merely following his predecessors. To the ancients, this was known as the “law of non-contradiction”—one of the three fundamental laws of thought. “No one can believe that the same thing can (at the same time) be and not be,” as Aristotle himself put that law in the Metaphysics; nobody can (sincerely) believe one thing and its opposite at the same time. As the Persian Avicenna—demonstrating that this law was hardly limited to Europeans—put it centuries later: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” Or finally, as Arthur Schopenhauer wrote centuries after that in The World as Will and Representation (using the heavy-handed vocabulary of German philosophers), “every two concept-spheres must be thought of as either united or as separated, but never as both at once; and therefore, although words are joined together which express the latter, these words assert a process of thought which cannot be carried out” (emp. added). If anyone says the contrary, these philosophers implied, somebody’s selling something.

The point that Aristotle, Aquinas, Avicenna, and Orwell were making, in other words, is that the law of non-contradiction is essentially identical to rationality itself: a nearly foolproof method of performing the most basic of intellectual tasks—above all, telling honest and rational people from dishonest and duplicitous ones. And that, in turn, leads to their second refutation of Self’s argument: by abandoning the law of non-contradiction, people like Brabant (or Self) were also effectively setting themselves above ordinary people. As one commenter on Aquinas writes, the Good Doctor insisted that if something is true, then “it must make sense and it must make sense in terms which are related to the ordinary, untheological ways in which human beings try to make sense of things.” As Orwell saw, that position is related to the law of non-contradiction, and both are related to the notion of democratic government, because telling which candidate is the better one is the very foundation of that form of government. When Will Self attacks George Orwell for being in favor of comprehensibility, in other words, he isn’t attacking Orwell alone: he’s actually attacking Thomas Aquinas—and ultimately the very possibility of self-governance.

While the supporters of Hillary Clinton like to describe her opponent as a threat to democratic government, in other words, Donald Trump’s minor campaign arguably poses far less threat to American freedoms than hers does: from one point of view, Clinton’s accession to power threatens the basic conceptual apparatus without which there can be no democracy at all. Consider that during this presidential campaign virtually no attention has been paid, say, to the findings of social scientists (like Ricardo Hausmann and Federico Sturzenegger) and journalists (like those who reported on The Panama Papers) suggesting that the deficits many conservatives bemoan—the U.S. budget deficit, the trade imbalance—are actually the result of billions (or trillions) of dollars being hidden by wealthy Americans and corporations beyond the reach of the Internal Revenue Service, an agency whose budget those same conservatives have gutted in recent decades. In that light, let’s just say that there’s good reason to suspect that Hillary Clinton’s campaign may not be what it appears to be.

After all—she said so.

Double Vision

Ill deeds are doubled with an evil word.
The Comedy of Errors. III, ii

The century just past had been both one of the most violent ever recorded—and also perhaps the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, putting in question truths that had long been established. Under these circumstances a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making, or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously-neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that thusly allowed new pieces of writing and new ideas to enter the field and flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of people who probably would not have been studied—along with a lot of people who might not have done the studying—entered the curriculum; had the change of mind Fish’s remark signified not occurred, neither group would have become standard in American classrooms.

I put things in the somewhat cumbersome way I do in the last sentence because of course Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist, James Baldwin, similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that the American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, sometime between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion, then, that there are two distinct “realms” of truth is a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as during Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in the rest of the curriculum—an experience in which, as the case of Averroes before him had demonstrated, he was not alone. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” And even more: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth”—in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold to such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commenter, Vincent P. Benitez, has put it, either way Siger’s work showed that the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not for F. Scott Fitzgerald later—posed a problem for the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it’s one that Aristotle himself said was the most essential to any rational thought whatever. That rule is usually known as the Law of Non-Contradiction, traditionally placed as the second of the three classical rules of logic in the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—everything is either A or not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commenters, Avicenna (Ibn-Sina), put it in one of its most famous formulations—that rule goes like this: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.
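
For readers who like to see such laws written down formally, here is a minimal sketch of the three classical laws in the Lean theorem prover; the theorem names are my own labels rather than anything standard, and the point is only how little machinery the first two require.

```lean
-- The three classical "laws of thought," stated for an arbitrary proposition P.

-- Law of Identity: A is A.
theorem identity (P : Prop) : P → P :=
  fun h => h

-- Law of Non-Contradiction: nothing is both true and not true.
theorem non_contradiction (P : Prop) : ¬ (P ∧ ¬ P) :=
  fun h => h.2 h.1

-- Law of the Excluded Middle: everything is either A or not-A.
-- Unlike the first two, this one requires classical reasoning.
theorem excluded_middle (P : Prop) : P ∨ ¬ P :=
  Classical.em P
```

That the middle proof is a single line is, in a way, Aquinas’ point: to deny the law of non-contradiction one must first rely on it, since the denial itself claims to be true rather than false.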

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even for torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not—aside from members of the Bush Administration legal team (and, for that matter, nearly every regime known to history) and viewers of the television show 24—against torture? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially if one is unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing it on the sly: each car’s software could detect when the engine was being tested by government monitors and, if so, reduce the pollutants the engine was emitting. Just more than six months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me about this story, however, in relation to this talk of academics and monks, is a curious article put out by The New Yorker in October of 2015. In that piece, entitled “An Engineering Theory of the Volkswagen Scandal,” Paul Kedrosky—perhaps significantly, “a venture investor and a former equity analyst”—explains these events as perhaps not the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s idea of shades of gray.

Kedrosky takes his theory from a book by the sociologist Diane Vaughan about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched Challenger at colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the shuttle blew into thousands of pieces moments after liftoff. Kedrosky’s attempt at an analogy is that maybe the Volkswagen scandal developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s thrust is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story; this one a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara it’s entirely plausible to think of the entire financial meltdown of 2007-8, which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) as independent concerns, forced two government-sponsored enterprises (Fannie Mae and Freddie Mac) into federal conservatorship, and is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments, as essentially analogous to Diane Vaughan’s view of the Challenger disaster—or Kedrosky’s view of Volkswagen’s cavalier thoughts about environmental regulation. To put it another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions or Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes these men modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—like the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck, or the “Schrödinger’s Cat” thought experiment, whereby the cat is neither dead nor alive until the box is opened, or the fact that light is both a wave and a particle—designed to show how out-of-date the Law of Non-Contradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence, in the same Arthur Little article I cited previously. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remark about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of this article of Little’s, written thirty years before his own.

But then again, if I have no obligation to be right, what does it matter?

Old Time Religion

Give me that old time religion,
Give me that old time religion,
Give me that old time religion,
It’s good enough for me.
Traditional; rec. by Charles Davis Tillman, 1889
Lexington, South Carolina

… science is but one.
Lucius Annaeus Seneca.

New rule changes for golf usually come into effect on the first of the year; this year, the big news is the ban on “anchored” putting: the practice of holding one end of the putter in place against the player’s body. Yet as has been the case for nearly two decades, the real news from the game’s rule-makers this January is about a change that is not going to happen: the USGA is not going to create “an alternate set of rules to make the game easier for beginners and recreational players,” as for instance Mark King, then president and CEO of TaylorMade-Adidas Golf, called for in 2011. King argued then that something needed to happen because, as he correctly observed, “Even when we do attract new golfers, they leave within a year.” Yet, as nearly five years of stasis since then has demonstrated, the game’s rulers will do no such thing. What that inaction suggests, I will contend, may simply be that—despite the fact that golf was at one time denounced as atheistical because so many golfers played on Sundays—golf’s powers-that-be are merely zealous adherents of the First Commandment. But it may also be, as I will show, that the United States Golf Association is a lot wiser than Mark King.

That might be a surprising conclusion, I suppose; it isn’t often, these days, that we believe a regulatory body could have any advantage over a “market-maker” like King. Further, once religious training ends, it’s unlikely that many remember the contents, never mind the order, of Moses’ tablets. But while one might suppose that the list of commandments would begin with something important—like, say, a prohibition against murder?—most versions of the Ten Commandments begin with “Thou shalt have no other gods before me.” It’s a rather clingy statement, this first—and thus, perhaps, most significant—of the commandments. But there’s another way to understand the First Commandment: as not only the foundation of monotheism, but also a restatement of a rule of logic.

To understand a religious rule in this way, of course, would be to flout the received wisdom of the moment: for most people these days, it is well understood that science and logic are separate from religion. Thus, for example, the famed biologist Stephen Jay Gould wrote first an essay (“Non-Overlapping Magisteria”), and then an entire book (Rocks of Ages: Science and Religion in the Fullness of Life), arguing that while many think religion and science are opposed, in fact there is “a lack of conflict between science and religion,” that science is “no threat to religion,” and further that “science cannot be threatened by any theological position on … a legitimately and intrinsically religious issue.” Gould argued this on the basis that, as the title of his essay says, the two subjects possess “non-overlapping magisteria”: that is, “each subject has a legitimate magisterium, or domain of teaching authority.” Religion is religion, in other words, and science is science—and never the twain shall meet.

To say, then, that the First Commandment could be thought of as a rendering of a logical rule seen as if through a glass darkly would be impermissible according to the prohibition laid down by Gould (among others): the prohibition against importing science into religion or vice versa. And yet some argue that such a prohibition is nonsense: for instance Richard Dawkins, another noted biologist, has said that in reality religion does not keep “itself away from science’s turf, restricting itself to morals and values”—that is, limiting itself to the magisterium Gould claimed for it. On the contrary, Dawkins writes: “Religions make existence claims, and this means scientific claims.” The border Gould draws between science and religion, Dawkins says, is drawn in a way that favors religion—or, more specifically, protects it.

Supposing Dawkins, and not Gould, to be correct then is to allow for the notion that a religious idea can be a restatement of a logical or scientific one—but in that case, which one? I’d suggest that the First Commandment could be thought of as a reflection of what’s known as the “law of non-contradiction,” usually called the second of the three classical “laws of thought” of antiquity. At least as old as Plato, this law says that—as Aristotle puts it in the Metaphysics—the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or to put it another, logical, way: thou shalt have no other gods before me.

What one could say, then, is that it is in fact Dawkins, and not Gould, who is the more “religious” here: while Gould wishes to allow room for multiple “truths,” Dawkins—precisely like the God of the ancient Hebrews—insists on a single path. Which, one might say, is just the stance of the United States Golf Association: taking a line from the film Highlander, and its many, many offspring, the golf rulemaking body is saying that there can be only one.

That is not, to say the least, a popular sort of opinion these days. We are, after all, supposed to be living in an age of tolerance and pluralism: as long ago as 1936, F. Scott Fitzgerald claimed, in Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” That notion has become so settled that, as the late philosopher Richard Rorty once remarked, today for many people a “sense of … moral worth is founded on … [the] tolerance of diversity.” In turn, the “connoisseurship of diversity has made this rhetoric”—i.e., the rhetoric used by the First Commandment, or the law of non-contradiction—“seem self-deceptive and sterile.” (And that, perhaps more than anything else, is why Richard Dawkins is often attacked for, as Jack Mirkinson put it in Salon this past September, “indulging in the most detestable kinds of bigotry.”) Instead, Rorty encouraged intellectuals to “urge the construction of a world order whose model is a bazaar surrounded by lots and lots of exclusive private clubs.”

Rorty, in other words, would have endorsed the description of golf’s problem, and its solution, proposed by Mark King: the idea that golf is declining in the United States because the “rules are making it too hard,” so that the answer is to create a “separate but equal” second set of rules. To create more golfers, it’s necessary to create more kinds of golf. But the work of the Nobel Prize-winning economist Joseph Stiglitz suggests another kind of answer: one that not only might be recognizable to both the ancient Hebrews and the ancient Greeks, but also would be unrecognizable to the founders of what we know today as “classical” economics.

The central idea of that form of economic study, as constructed by the followers of Adam Smith and David Ricardo, is the “law of demand.” Under that model, suppliers attempt to fulfill “demand,” or need, for their product until such time as it costs more to produce than the product would fetch in the market. To put it another way—as the entry at Wikipedia does—“as the price of [a] product increases, quantity demanded falls,” and vice versa. But this model works, Stiglitz correctly points out, only insofar as it can be assumed that there is, or can be, an infinite supply of the product. The Columbia professor described what he meant in an excerpt of his 2012 book The Price of Inequality printed in Vanity Fair: an article that is an excellent primer on the problem of monopoly—that is, what happens when the supply of a commodity is limited and not (potentially) infinite.

“Consider,” Stiglitz asks us, “someone like Mitt Romney, whose income in 2010 was $21.7 million.” Romney’s income might be thought of as the just reward for his hard work of bankrupting companies and laying people off and so forth, but even aside from the justice of the compensation, Stiglitz asks us to consider the effect of concentrating so much wealth in one person: “Even if Romney chose to live a much more indulgent lifestyle, he would spend only a fraction of that sum in a typical year to support himself and his wife.” Yet, Stiglitz goes on to observe, “take the same amount of money and divide it among 500 people … and you’ll find that almost all the money gets spent”—that is, it gets put back to productive use in the economy as a whole.

It is in this way, the Columbia University professor says, that “as more money becomes concentrated at the top, aggregate demand goes into a decline”: precisely the opposite, it can be noted, of the classical idea of the “law of demand.” Under that scenario, as money—or any commodity one likes—becomes rarer, it drives people to obtain more of it. But, Stiglitz argues, while that might be true in “normal” circumstances, it is not true at the “far end” of the curve: when supply becomes too concentrated, people of necessity will stop bidding the price up, and will instead look for substitutes for that commodity. Thus, the overall “demand” must necessarily decline.
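
Stiglitz’s arithmetic can be sketched in a few lines of code. The spending fractions below are illustrative assumptions of my own, not figures from The Price of Inequality; the only number taken from the excerpt is the $21.7 million.

```python
# A toy sketch of Stiglitz's aggregate-demand argument. The propensities to
# consume are assumptions chosen for illustration; only the income figure
# comes from the Vanity Fair excerpt quoted above.
income = 21_700_000                 # Mitt Romney's reported 2010 income

share_spent_if_concentrated = 0.10  # assumed: fraction one very wealthy household spends
share_spent_if_divided = 0.95       # assumed: fraction spent when split among 500 households

per_household = income / 500        # roughly $43,400 each
spent_concentrated = income * share_spent_if_concentrated
spent_divided = income * share_spent_if_divided

print(f"Each of 500 households would receive about ${per_household:,.0f}")
print(f"Spent when concentrated at the top: ${spent_concentrated:,.0f}")
print(f"Spent when divided among 500:       ${spent_divided:,.0f}")
```

On these assumed numbers, the same $21.7 million generates roughly nine and a half times as much spending when it is dispersed rather than concentrated, which is all the claim about declining aggregate demand amounts to.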

That, for instance, is what happened to cotton after 1860. That year, cotton grown in the southern United States was America’s leading export, constituting (as Eugen R. Dattel noted in Mississippi History Now not long ago) nearly 80 percent “of the 800 million pounds of cotton used in Great Britain.” But as the war advanced—and the Northern blockade took effect—that percentage plummeted: the South exported millions of pounds of cotton before the war, but merely thousands during it. Meanwhile, the share of other sources of supply rose: as Matthew Osborn pointed out in 2012 in Al Arabiya News, Egyptian cotton exports prior to the bombardment of Fort Sumter in 1861 amounted to merely $7 million—but by the end of the war in 1865, Egyptian profits were $77 million, as Europeans sought different sources of supply than the blockaded South. This, despite the fact that it was widely acknowledged that Egyptian cotton was inferior to American cotton: lacking a source of the “good stuff,” European manufacturers simply made do with what they could get.

The South thusly failed to understand that, while it did constitute the lion’s share of production prior to the war, it was not the sole place cotton could be grown—other models for production existed. In some cases, however—through natural or human-created means—an underlying commodity can have a bottleneck of some kind, creating a shortage. According to classical economic theory, in such a case demand for the commodity will grow; in Stiglitz’ argument, however, it is possible for a supply to become so constricted that human beings will simply decide to go elsewhere: whether to an inferior substitute or, perhaps, to giving up the endeavor entirely.

This is precisely the problem of monopoly: it’s possible, in other words, for a producer to have such a stranglehold on the market that it effectively kills that market. The producer, in effect, kills the goose that lays the golden eggs—which is just what Stiglitz argues is happening today to the American economy. “When one interest group holds too much power,” Stiglitz writes, “it succeeds in getting policies that help itself in the short term rather than help society as a whole over the long term.” Such a situation can have only one of two solutions: either the monopoly is broken, or people turn to a completely different substitute. To use an idiom from baseball, they “take their ball and go home.”

As Mark King noted back in 2011, golfers have been going home since the sport hit its peak in 2005. That year, the National Golf Foundation’s yearly survey of participation found 30 million players; in 2014, by contrast, the number was slightly less than 25 million, according to a Golf Digest story by Mike Stachura. Mark King’s plan to win those numbers back, as we’ve seen, is to invent a new set of rules—a plan with a certain similarity, I’d suggest, to the ideal of “diversity” championed by Rorty: a “bazaar surrounded by lots and lots of exclusive private clubs.” That is, if the old rules are not to your taste, you can take up another set of rules.

Yet an examination of golf as it actually is, I’d say, would find that Rorty’s ideal already describes, more or less, the current model of the sport in the United States—golf already is, largely speaking, a “bazaar surrounded by private clubs.” Despite the fact that, as Chris Millard reported in 2008 for Golf Digest, “only 9 percent of all U.S. golfers are private-club members,” it’s also true that private clubs constitute around 30 percent of all golf facilities, and as Mike Stachura has noted (also in Golf Digest), even today “the largest percentage of all golfers (27 percent) have a household income over $125,000.” Golf doesn’t need any more private clubs: there are already plenty of them.

In turn, it is their creature—the PGA of America—that largely controls golf instruction in this country: that is, the means to learn to play the game. To put it in Stiglitz’ terms, the PGA of America—and the private clubs that hire PGA professionals to staff their operations—essentially constitute a monopoly on instruction, or in other words on the basic education in the essential skill of the game: hitting the ball. It’s that ability—the capacity to send a golf ball in the direction one desires—that constitutes the thrill of the sport, the commodity golfers play the game to enjoy. Unfortunately, it’s one that most golfers never achieve: as Rob Oller put it in the Columbus Dispatch not long ago, “it has been estimated that fewer than 25 percent of all golfers” ever break a score of 100. According to Mark King, all that is necessary to re-achieve the glory days of 2005 is to redefine what golf is—under King’s rules, I suppose, it would be easy enough for nearly everyone to break 100.

I would suggest, however, that the reason golf’s participation rate has declined is not an unfair set of rules, but rather that golf’s model bears more than a passing resemblance to Stiglitz’ description of a monopolized economy: one in which a single participant has so much power that it effectively destroys the entire market. In situations like that, Stiglitz (like many other economists) argues that regulatory intervention is necessary—a realization that, perhaps, the United States Golf Association is also arriving at through its continuing decision not to implement a second set of rules for the game.

Constructing such a set of rules could be, as Mark King or Richard Rorty might say, the “tolerant” thing to do—but it could also, arguably, have a less-than-tolerant effect by continuing to allow some to monopolize access to the pleasure of the sport. By refusing to allow an “escape hatch” by which the older model could cling to life, the USGA is, consciously or not, speeding the day when golf will become “all one thing or all the other,” as someone once said upon a vaguely similar occasion, invoking an idea similar to the First Commandment or the law of non-contradiction. What the USGA’s stand in favor of a single set of rules—and thus, implicitly, in favor of the ancient idea of a single truth—appears to signify is that, to the golf organization, fashionable praise for “diversity” just might be no different than, say, claiming your subprime mortgages are good, or that the figures of the police accurately reflect crime. For the USGA then, if no one else, that old time religion is good enough: despite being against anchoring, it seems the golf organization still believes in anchors.


The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of a sport at all, it is either something extremely obscure and obscenely technical that does not involve a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has just such a picture in mind—probably you know at least a few such people, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by the American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents) the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as an amateur endeavor—Bill James spent decades putting out his baseball work for little remuneration, until he was finally hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by Charles Lyell, the Englishman who did so much to construct the science of geology. In his Principles of Geology (which largely founded the field), Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things are likely to be like those already around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball, scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, taken to signify a player’s confidence in his own ability.) Sabermetricians disregard such “methods” of analysis in favor of examining the acts a player actually performs, as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or, for that matter, why would it lead in a political direction at all? The answer to the latter question is, I suspect, nearly inevitable: sabermetrics, after all, is a discipline well-suited to discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems likely, then, that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations towards some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led towards positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective, certain key features of the American system nearly instantly jump out.

The first of those features is that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—the ratio of a team’s runs scored and runs allowed versus the team’s actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would predict, then either that team is tremendously unlucky or some hidden factor is preventing its success. Maybe, for instance, the team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters it has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
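
Since the rest of this essay leans on the idea, here is a minimal sketch of the calculation in Python. James’s classic formula uses squared run totals (later refinements tweak the exponent); the team totals below are invented for illustration, not real season data.

```python
def pythagorean_expectation(runs_scored: float, runs_allowed: float,
                            exponent: float = 2.0) -> float:
    """Bill James's Pythagorean expectation: the winning percentage a team
    'should' have, given its runs scored and runs allowed."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

# Invented team totals, for illustration only.
runs_scored, runs_allowed = 780, 690
games, actual_wins = 162, 74

expected_pct = pythagorean_expectation(runs_scored, runs_allowed)
expected_wins = expected_pct * games
gap = actual_wins - expected_wins   # a large gap is the sabermetrician's alarm bell

print(f"Expected winning percentage: {expected_pct:.3f} (about {expected_wins:.0f} wins)")
print(f"Actual wins: {actual_wins}; a gap of {gap:+.1f} wins invites investigation")
```

On these invented numbers the team “should” win about 91 games but has won only 74, and a gap of that size is exactly the kind of disparity that, in James’s terms, means something fluky (or something hidden) is going on.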

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, that would set off alerts for a sabermetrician, no such alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren said was “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss if it were to ignore a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent in the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of each state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or, to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.

Some will defend that disconnect, as Justice Harlan did over fifty years ago, in terms familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet, attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as the Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large gap between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that, if you see an American revolutionary around you right now, he or she is likely wearing not a poncho or a black turtleneck, but an Oakland A’s hat.