Double Down

There is a large difference between our view of the US as a net creditor with assets of about 600 billion US dollars and BEA’s view of the US as a net debtor with total net debt of 2.5 trillion. We call the difference between these two equally arbitrary estimates dark matter, because it corresponds to assets that we know exist, since they generate revenue but cannot be seen (or, better said, cannot be properly measured). The name is taken from a term used in physics to account for the fact that the world is more stable than you would think if it were held together only by the gravity emanating from visible matter. In our measure the US owns about 3.1 trillion of unaccounted net foreign assets. [Emp. added]
—Ricardo Hausmann and Federico Sturzenegger.
“U.S. and Global Imbalances: Can Dark Matter Prevent a Big Bang?”
13 November 2005.

 

Last month Wikileaks, the quasi-journalistic platform, released a series of emails that included (according to the editorial board of The Washington Post) “purloined emailed excerpts” of Hillary Clinton’s “paid speeches to corporate audiences” from 2013 to 2015—the years in which Clinton withdrew from public life while building a war-chest for her presidential campaign. In one of those speeches, she expressed what the board of the Post calls “her much-maligned view that ‘you need both a public and a private position’”—a position that, the Post harrumphs, “is playing as a confession of two-facedness but is actually a clumsy formulation of obvious truth”: namely, that politics cannot operate “unless legislators can deliberate and negotiate candidly, outside the glare of publicity.” To the Post, in other words, thinking that people ought to believe the same things privately as they loudly assert publicly is the sure sign of a naïveté verging on imbecility; almost certainly, the Post’s comments draw a dividing line in American life between those who “get” that distinction and those who don’t. Yet, while the Post sees fit to present Clinton’s comments as a sign of her status as “a knowledgeable, balanced political veteran with sound policy instincts and a mature sense of how to sustain a decent, stable democracy,” in point of fact they demonstrate—far more than Donald Trump’s ridiculous campaign—just how far from a “decent, stable democracy” the United States has become: because, as those who, nearly a thousand years ago, first set in motion the conceptual revolution that resulted in democracy understood, there is no thought or doctrine more destructive of democracy than the idea that there is a “public” and a “private” truth.

That’s a notion that is likely difficult for the Post’s audience to grasp. Presumably educated at the nation’s finest schools, the Post’s audience can see no issue with Clinton’s position because the way towards it has been prepared for decades: it is, in fact, one of the foundational doctrines of current American higher education. Anyone who has attended an American institution of higher learning over the past several decades, in other words, is going to learn a version of Clinton’s belief that truth can come in two (or more) varieties, because that is what intellectuals of both the political left and the political right have asserted for more than half a century.

The African-American novelist James Baldwin asserted, for example, in 1949 that “literature and sociology are not the same,” while in 1958 the conservative political scientist Leo Strauss dismissed “the ‘scientific’ approach to society” as ignoring “the moral distinctions by which we take our bearings as citizens and”—in a now-regrettable choice of words—“as men.” It’s become so unconscious a belief among the educated, in fact, that even some scientists themselves have adopted this view: the biologist Stephen Jay Gould, for instance, towards the end of his life argued that science and religion constituted what he called “non-overlapping magisteria,” while John Carmody, a physician turned writer for The Australian, more prosaically—and seemingly modestly—asserted not long ago that “science and religion, as we understand them, are different.” The motives of those arguing for such a separation are usually thought to be inherently positive: agreeing to such a distinction, in fact, is nearly a requirement for admittance to polite society these days—which is probably why the Post can assert that Clinton’s admissions are a sign of her fitness for the presidency, instead of being disqualifying.

To the Post’s readers, in short, Hillary Clinton’s doubleness is a sign of her “sophistication” and “responsibility.” It’s a sign that she’s “one of us”—she, presumably unlike the trailer trash interested in Donald Trump’s candidacy, understands the point of Rashomon! (Though Kurosawa’s film does not—because logically it cannot—necessarily imply the view of ambiguity it’s often suggested it does: if Rashomon makes the claim that reality is ultimately unknowable, how can we know that?) But those who think thusly betray their own lack of sophistication—because, in the long history of humanity, this isn’t the first time that someone has tried to sell a similar doctrine.

Toward the height of the Middle Ages the works of Aristotle were rediscovered in Europe, in part through contacts with Muslim thinkers like the twelfth-century Andalusian Ibn-Rushd—better known in Europe as “Averroes.” Aristotle’s works were extremely exciting to students used to a steady diet of Plato and the Church Fathers—precisely because at points they contradicted, or at least appeared to contradict, those same Church Fathers. (Which was also, as it happened, what interested Ibn-Rushd about Aristotle—though in his case, the Greek philosopher appeared to contradict Muslim, instead of Christian, sources.) That, however, left Aristotle enthusiasts with a problem: if they continued to read the Philosopher (Aristotle) and his Commentator (Averroes), they would embark on a collision course with the religious authorities.

In The Harmony of Religion and Philosophy, it seems, Averroes taught that “philosophy and revelation do not contradict each other, and are essentially different means of reaching the same truth”—a doctrine that his later Christian followers turned into what became known as the doctrine of “double truth.” According to a lecturer at the University of Paris in the thirteenth century named Siger of Brabant, for instance, “there existed a ‘double truth’: a factual or ‘hard’ truth that is reached through science and philosophy, and a ‘religious’ truth that is reached through religion.” To Brabant and his crowd, according to Encyclopedia Britannica, “religion and philosophy, as separate sources of knowledge, might arrive at contradictory truths without detriment to either.” (Which was not the same as Averroes’ point, however: the Andalusian scholar “taught that there is only one truth, but reached in two different ways, not two truths.”) Siger of Brabant, in other words, would have been quite familiar with Hillary Clinton’s distinction between the “public” and the “private.”

To some today, of course, that would merely point to how contemporary Siger of Brabant was, and how fuddy-duddy were his opponents—like Stephen Tempier, the bishop of Paris. As if he were some 1950s backwoods Baptist preacher denouncing Elvis or the Beatles, in 1277 Tempier denounced those who “hold that something is true according to philosophy but not according to the Catholic faith, as if there are two contrary truths.” Yet, while some might want to portray Brabant, thusly, as a forerunner to today’s tolerant societies, in reality it was Tempier’s insistence that truth comes in mono, not stereo, that (seemingly paradoxically) led to the relatively open society we at present enjoy.

People who today would make that identification, that is, might be uneasy if they knew that part of the reason Brabant believed his doctrine was his belief in “the superiority of philosophers to the common people,” or that Averroes himself warned “against teaching philosophical methods to the general populace.” Two truths, in other words, easily translated into two different kinds of people—and make no mistake, these doctrines did not imply that these two differing types were “separate but equal.” Instead, they were a means of asserting the superiority of the one type over the other. The doctrine of “double truth,” in other words, was not a forerunner to today’s easygoing societies.

To George Orwell, in fact, it was a prerequisite for totalitarianism: Brabant’s theory of “double truth,” in other words, may be the origin of the concept of “doublethink” as used in Orwell’s 1984. In that novel, published in 1949, “doublethink” is defined as

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

It was a point Orwell had been thinking about for some time: in a 1946 essay entitled “Politics and the English Language,” he had denounced political language “designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.” To Orwell, the doctrine of the “double truth” was just a means of sloughing off the feelings of guilt or shame naturally produced by human beings engaged in such manipulations—a technique vital to totalitarian regimes.

Many in today’s universities, to be sure, have a deep distrust for Orwell: Louis Menand—who not only teaches at Harvard and writes for The New Yorker, but grew up in a Hudson Valley town named for his own great-grandfather—perhaps summed up the currently fashionable opinion of the English writer when he noted, in a drive-by slur, that Orwell was “a man who believed that to write honestly he needed to publish under a false name.” The British novelist Will Self, in turn, has attacked Orwell as the “Supreme Mediocrity”—and in particular takes issue with Orwell’s stand, in “Politics and the English Language,” in favor of the idea “that anything worth saying in English can be set down with perfect clarity such that it’s comprehensible to all averagely intelligent English readers.” It’s exactly that part of Orwell’s position that most threatens those of Self’s view.

Orwell’s assertion, Self says flatly, is simply “not true”—an assertion that Self explicitly ties to issues of minority representation. “Only homogeneous groups of people all speak and write identically,” Self writes against Orwell; in reality, Self says, “[p]eople from different heritages, ethnicities, classes and regions speak the same language differently, duh!” Orwell’s big argument against “doublethink”—and thusly, totalitarianism—is in other words just “talented dog-whistling calling [us] to chow down on a big bowl of conformity.” Thusly, “underlying” Orwell’s argument “are good old-fashioned prejudices against difference itself.” Orwell, in short, is a racist.

Maybe that’s true—but it may also be worth noting that the sort of “tolerance” advocated by people like Self can also be interpreted, and has been for centuries, as in the first place a direct assault on the principle of rationality, and in the second place an abandonment of millions of people. Such, at least, is how Thomas Aquinas would have received Self’s point. The Angelic Doctor, as the Church calls him, asserted that Averroeists like Brabant could be refuted on their own terms: the Averroeists said they believed, Aquinas remarked, that philosophy taught them that truth must be one, but faith taught them the opposite—a position that would lead those who held it to think “that faith avows what is false and impossible.” According to Aquinas, the doctrine of the “double truth” would imply that to believe in religion amounted to admitting that religion was foolish—at which point you have admitted that there is only a single truth, and it isn’t a religious one. Hence, Aquinas’ point was that, despite what Orwell feared in 1984, it simply is not psychologically possible to hold two opposed beliefs in one’s head simultaneously: whenever someone is faced with a choice like that, that person will inevitably choose one side or the other.

In this, Aquinas was merely following his predecessors. To the ancients, this was known as the “law of non-contradiction”—one of the ancient world’s three fundamental laws of thought. “No one can believe that the same thing can (at the same time) be and not be,” as Aristotle himself put that law in the Metaphysics; nobody can (sincerely) believe one thing and its opposite at the same time. As the Persian Avicenna—demonstrating that this law was hardly limited to Europeans—put it centuries later: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” Or finally, as Arthur Schopenhauer wrote centuries after that in The World as Will and Representation (using the heavy-handed vocabulary of German philosophers), “every two concept-spheres must be thought of as either united or as separated, but never as both at once; and therefore, although words are joined together which express the latter, these words assert a process of thought which cannot be carried out” (emp. added). If anyone says the contrary, these philosophers implied, somebody’s selling something.

The point that Aristotle, Aquinas, Avicenna, and Orwell were making, in other words, is that the law of non-contradiction is essentially identical to rationality itself: a nearly foolproof method of performing the most basic of intellectual tasks—above all, telling honest and rational people from dishonest and duplicitous ones. And that, in turn, would lead to their second refutation of Self’s argument: by abandoning the law of non-contradiction, people like Brabant (or Self) were also effectively setting themselves above ordinary people. As one commenter on Aquinas writes, the Good Doctor insisted that if something is true, then “it must make sense and it must make sense in terms which are related to the ordinary, untheological ways in which human beings try to make sense of things”—as Orwell saw, that position is related to the law of non-contradiction, and both are related to the notion of democratic government, because telling which candidate is the better one is the very foundation of that form of government. When Will Self attacks George Orwell for being in favor of comprehensibility, in other words, he isn’t attacking Orwell alone: he’s actually attacking Thomas Aquinas—and ultimately the very possibility of self-governance.

While the supporters of Hillary Clinton like to describe her opponent as a threat to democratic government, in other words, Donald Trump’s minor campaign arguably poses far less threat to American freedoms than hers does: from one point of view, Clinton’s accession to power actually threatens the basic conceptual apparatus without which there can be no democracy. Of course, during this presidential campaign virtually no attention has been paid, say, to the findings of social scientists (like Ricardo Hausmann and Federico Sturzenegger) and journalists (like those who reported on The Panama Papers) that, while many conservatives bemoan such deficits as the U.S. budget or trade imbalances, such gaps may actually be the result of billions (or trillions) of dollars being hidden by wealthy Americans and corporations beyond the reach of the Internal Revenue Service (an agency whose budget has been gutted in recent decades by conservatives)—so let’s just say that there’s good reason to suspect that Hillary Clinton’s campaign may not be what it appears to be.

After all—she said so.

Double Vision

Ill deeds are doubled with an evil word.
The Comedy of Errors. III, ii

The century just past had been one of the most violent ever recorded—and also perhaps the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, calling into question truths that had been long established. Under these circumstances a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making, or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that thusly allowed new pieces of writing, and new ideas, to be imported and to flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of people who probably would not have been studied—along with a lot of people who might not have done the studying—entered the curriculum, people who, had the change of mind Fish’s remark signified not taken place, might never have become standard in American classrooms.

I put things in the somewhat cumbersome way I do in the last sentence because of course Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist, James Baldwin, similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, sometime between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion, then, that there are two distinct “realms” of truth is a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as during Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in his other studies through the rest of the curriculum—an experience that he was not, as the experience of Averroes before him had demonstrated, alone in having. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” And even more: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth.” Or, in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold to such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commenter, Vincent P. Benitez, has put it, either way Siger’s work highlighted just how the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not to F. Scott Fitzgerald later—posed a problem to the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it’s one that Aristotle himself said was the most essential to any rational thought whatever. That rule of logic is usually known by the name of the Law of Non-Contradiction, and traditionally placed as the second of the three classical rules of logic in the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—everything is either A or not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commenters, Avicenna (Ibn-Sina), put it in one of its most famous formulations—that rule goes like this: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not, aside from members of the Bush Administration legal team (and, for that matter, nearly every regime known to history) and viewers of the television show 24, against torture? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially if one is unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing it on the sly: each car’s software could detect when the car’s engine was being tested by government monitors, and if so could reduce the pollutants that engine was emitting. About nine months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me, however, about this story in relation to this talk about academics and monks is a curious article put out by The New Yorker in October of 2015. In that article, entitled “An Engineering Theory of the Volkswagen Scandal,” Paul Kedrosky—perhaps significantly, “a venture investor and a former equity analyst”—explains these events as perhaps not the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s idea of shades of gray.

Kedrosky takes his theory from a book by sociologist Diane Vaughan, about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched Challenger under colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the shuttle blew into thousands of pieces moments after liftoff. Kedrosky’s attempt at an analogy is that maybe the Volkswagen scandal developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s thrust is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story; this one, a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely-acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara, it’s entirely plausible to think of the entire financial meltdown of 2007-8, which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) and two government-sponsored mortgage giants (Fannie Mae and Freddie Mac), and is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments, as essentially analogous to Diane Vaughan’s view of the Challenger disaster—or Kedrosky’s view of Volkswagen’s cavalier thoughts about environmental regulation. To put it another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions or Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes these men modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—like the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck, or the “Schroedinger’s Cat” thought experiment, whereby the cat is neither dead nor alive until the box is opened, or the fact that light is both a wave and a particle—designed to show how out-of-date the Law of Non-Contradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence in the same article by Arthur Little that I cited previously. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remarks about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of this article of Little’s, written thirty years before his own.

But then again, if I have no obligation to be right, what does it matter?

The Commanding Heights

The enemy increaseth every day; 
We, at the height, are ready to decline.
Julius Caesar. Act IV, Scene 3.

 

“It’s Toasted”: the two words that began the television series Mad Men. The television show’s protagonist, Don Draper, comes up with them in a flash of inspiration during a meeting with the head of Draper’s advertising firm’s chief client, the cigarette brand Lucky Strike: like all cigarette companies, Luckies have to come up with a new campaign in the wake of a warning from the Surgeon General regarding the health risks of smoking. Don’s solution is elegant: by simply describing the manufacturing process of making Luckies—a process that is essentially the same as that of all other cigarettes—the brand does not have to make any kind of claim about smokers’ health at all, and thusly can bypass any consideration of scientific evidence. It’s a great way to introduce a show about the advertising business, as well as one of the great conflicts of that business: the opposition between reality, as represented by the Surgeon General’s report, and rhetoric, as represented by Draper’s inspirational flash. It’s also what makes Mad Men a work of historical fiction: as documented by Thomas Frank’s The Conquest of Cool: Business Culture, Counterculture, and the Rise of Hip Consumerism, there really was, during the 1950s and 60s, a conflict in the advertising industry between those who trusted in a “scientific” approach to advertising and those who, in Frank’s words, “deplored conformity, distrusted routine, and encouraged resistance to established power.” But that conflict also enveloped more than the advertising field: in those years many rebelled against a “scientism” that was thought confining—a rebellion that in many ways is with us still. Yet, though that rebellion may have been liberating in some senses, it may also have had certain measurable costs to the United States. Among those costs, it seems, might be height.

Height, or a person’s stature, is of course something that most people regard as akin to the color of the sky or the fact of gravity: a baseline foundation to the world incapable of change. In the past, the differences that lead one person to tower over others—or to look up to them in turn—might have been ascribed to God; today some might view height as the inescapable result of genetics. In one sense, this is true: as Burkhard Bilger says in the New Yorker story that inspired my writing here, the work of historians, demographers and dietitians has shown that with regard to height, “variations within a population are largely genetic.” But while height differences within a population are, in effect, a matter of genetic chance, that is not so when it comes to comparing different populations to each other.

“Height,” says Bilger, “is a kind of biological shorthand: a composite code for all the factors that make up a society’s well-being.” In other words, while you might be a certain height, and your neighbor down the street might be taller or shorter, both of you will tend to be taller or shorter than people from a different country—and the degree of shortness or tallness can be predicted by what sort of country you live in. That doesn’t mean that height is independent of genetics, to be sure: all human bodies are genetically fixed to grow at only three different stages in our lives—infancy, between the ages of six and eight, and as adolescents. But as Bilger notes, “take away any one of forty-five or fifty essential nutrients”—at any of these stages—“and the body stops growing.” (Like iodine, which can also have an effect on mental development.) What that means is that when large enough populations are examined, it can be seen whether a population as a whole is getting access to those nutrients—which in turn means it’s possible to get a sense of whether a given society is distributing resources widely … or not.

One story Bilger tells, about Guatemala’s two main ethnic groups, illustrates the point: one of them, the Ladinos, who claim descent from the Spanish colonizers of Central America, was of average height. But the other group, the Maya, who are descended from indigenous people, “were so short that some scholars called them the pygmies of Central America: the men averaged only five feet two, the women four feet eight.” Since the two groups shared the same (small) country, with essentially the same climate and natural resources, researchers initially assumed that the difference between them was genetic. But that assumption turned out to be false: when anthropologist Barry Bogin measured Mayans who had emigrated to the United States, he found that they were “about as tall as Guatemalan Ladinos.” The difference between the two ethnicities was not genetic: “The Ladinos,” Bilger writes, “who controlled the government, had systematically forced the Maya into poverty”—and poverty, because it can limit access to the nutrients essential during growth spurts, is systemically related to height.

It’s in that sense that height can literally be a measurement of the degree of freedom a given society enjoys: historically, Guatemala has been a hugely stratified country, with a small number of landowners presiding over a great number of peasants. (Throughout the twentieth century, in fact, the political class was engaged in a symbiotic relationship with the United Fruit Company, an American company that possessed large-scale banana plantations in the country—hence the term “banana republic.”) Short people are, for the most part, oppressed people; tall people, conversely, are mostly free people: it’s not an accident that as citizens of one of the freest countries in the world, the Netherlands, Dutch people are also the tallest.

Americans, at one time, were the tallest people in the world: in the eighteenth century, Bilger reports, Americans were “a full three inches taller than the average European.” Even so late as the First World War, he also says, “the average American soldier was still two inches taller than the average German.” Yet, a little more than a generation later, that relation began to change: “sometime around 1955 the situation began to reverse.” Since then all Europeans have been growing, as have Asians: today “even the Japanese—once the shortest industrialized people on earth—have nearly caught up with us, and Northern Europeans are three inches taller and rising.” Meanwhile, American men are “less than an inch taller than the average soldier during the Revolutionary War.” And that difference, it seems, is not due to the obvious source: immigration.

The people who work in this area are obviously aware that, because the United States is a nation of immigrants, immigration might skew the height data: clearly, if someone grows up in, say, Guatemala and then moves to the United States, that could conceivably warp the results. But the researchers Bilger consulted have considered the point: one only includes native-born, English-speaking Americans in his studies, for example, while another says that, because of the changes to immigration law during the twentieth century, the United States now takes in far too few immigrants to bias the figures. But if not immigration, then what?

For my own part, I find the coincidence of 1955 too much to ignore: it’s around the mid-1950s that Americans began to question a view of the sciences that had grown up a few generations previously. In 1898, for example, the American philosopher John Dewey could reject “the idea of a dualism between the cosmic and the ethical,” and suggest that “the spiritual life … [gets] its surest and most ample guarantees when it is learned that the laws and conditions of righteousness are implicated in the working processes of the universe.” Even so late as 1941, the intellectual magazine The New Republic could publish an obituary of the famed novelist James Joyce—author of what many people feel is the finest novel in the history of the English language, Ulysses—that proclaimed Joyce “the great research scientist of letters, handling words with the same freedom and originality that Einstein handles mathematical symbols.” “Literature as pure art,” the magazine then said, “approaches the nature of pure science”—suggesting, as Dewey said, that reality and its study did not need to be opposed to some other force, whether that force be religion and morality or art and beauty. But just a few years later, elite opinion began to change.

In 1949, for instance, the novelist James Baldwin would insist, against the idea of The New Republic’s obituary, that “literature and sociology are not the same,” while a few years later, in 1958, the philosopher and political scientist Leo Strauss would urge that the “indispensable condition of ‘scientific’ analysis is then moral obtuseness”—an obtuseness that, Strauss would go on to say, “is not identical with depravity, but […] is bound to strengthen the forces of depravity.” “By the middle of the 1950s,” as Thomas Frank says, “talk of conformity, of consumerism, and of the banality of mass-produced culture were routine elements of middle-class American life”—so that “the failings of capitalism were not so much exploitation and deprivation as they were materialism, wastefulness, and soul-deadening conformity”: a sense that Frank argues provided fuel for the cultural fires of the 1960s that were to come, and that the television show Mad Men documents. In other words, during the 1950s and afterwards, Americans abandoned a scientific outlook, and meanwhile, Americans also have grown shorter—at least relative to the rest of the world. Correlation, as any scientist will tell you, does not imply causation, but it does imply that Lucky Strikes might not be unique any more—though as any ad man would tell you, “America: It’s Toast!” is not a winning slogan.

Hot Shots

 

… when the sea was calm all boats alike
Show’d mastership in floating …
—William Shakespeare.
     Coriolanus Act IV, Scene 3 (1608).

 

 

“Indeed,” wrote the Canadian scholar Marshall McLuhan in 1964, “it is only too typical that the ‘content’ of any medium blinds us to the character of the medium.” Once, it was a well-known line among literate people, though it is much less so now. It occurred to me recently, however, as I read an essay by Walter Benn Michaels of the University of Illinois at Chicago, in the course of which Michaels took issue with Matthew Yglesias of Vox. Yglesias, Michaels tells us, tried to make the argument that

although “straight white intellectuals” might tend to think of the increasing economic inequality of the last thirty years “as a period of relentless defeat for left-wing politics,” we ought to remember that the same period has also seen “enormous advances in the practical opportunities available to women, a major decline in the level of racism … and wildly more public and legal acceptance of gays and lesbians.”

Michaels replies to Yglesias’ argument by noting that “10 percent of the U.S. population now earns just under 50 percent of total U.S. income”—a figure that is, unfortunately, just the tip of the economic iceberg when it comes to inequality in America. But the real problem—the problem that Michaels’ reply does not do justice to—is that there is a logical flaw in the kind of “left” that we have now: one that advocates for the rights of minorities rather than labors for the benefit of the majority. That is, a “cultural” left rather than a scientific one: the kind we had when, in 1910, American philosopher John Dewey could write (without being laughed at) that Darwin’s Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” The physicist Freeman Dyson discovered why when he was just twenty years old, after Winston Churchill’s government paid him to think about what was really happening in the flak-filled skies over Berlin.

The British had a desperate need to know, because they were engaged in bombing Nazi Germany back to at least the Renaissance. Hence they employed Dyson as a statistician, to analyze the operations of Britain’s Bomber Command. Specifically, Dyson was to investigate whether bomber crews “learned by experience”: whether the more missions each crew flew, the better each crew became at blowing up Germany—and the Germans in it. Obviously, if they did, then Bomber Command could try to isolate what those crews were doing and teach it to the others so that Germany and the Germans might be blown up better.

The bomb crews themselves believed, Dyson tells us, that as “they became more skillful and more closely bonded, their chances of survival would improve”—a belief that, for obvious reasons, was “essential to their morale.” But as Dyson went over the statistics of lost bombers, examining the relation between experience and loss rates while controlling for the effects of weather and geography, he discovered the terrible truth:

“There was no effect of experience on loss rate.”

The life of each bomber crew, in other words, depended on chance, not skill, and the crews’ belief in their own expertise was just an illusion in the face of horror—an illusion that becomes all the more awful when you know that, out of the 125,000 aircrew who served in Bomber Command, 55,573 were killed in action.
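
Dyson’s finding is easy to see in a toy model (a sketch of the statistical point only, not of Dyson’s actual data or wartime method): if every crew faces the same fixed chance of being shot down on every sortie, then crews that have survived many missions are no safer on their next one than novices are on their first. In Python, with the loss rate and tour length invented purely for illustration:

import random

LOSS_RATE = 0.04   # assumed constant chance of being lost on any single sortie
TOUR = 30          # notional tour length, also an arbitrary choice

flown = [0] * TOUR    # sorties flown at each level of experience
losses = [0] * TOUR   # losses, grouped by how experienced the crew was at the time

for crew in range(100_000):
    for sortie in range(TOUR):
        flown[sortie] += 1
        if random.random() < LOSS_RATE:
            losses[sortie] += 1
            break    # the crew is lost and flies no further sorties

for sortie in (0, 9, 19, 29):
    print(f"sortie {sortie + 1:>2}: loss rate {losses[sortie] / flown[sortie]:.3f}")

Every experience level comes out at roughly the same loss rate, which is just the pattern Dyson found in Bomber Command’s real records: surviving many missions conferred no measurable protection on the next one.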

“Statistics and simple arithmetic,” Dyson therefore concluded, “tell us more about ourselves than expert intuition”: a cold lesson to learn, particularly at the age of twenty—though that can be tempered by the thought that at least it wasn’t Dyson’s job to go to Berlin. Still, the lesson is so appalling that perhaps it is little wonder that, after the war, it was largely forgotten, and has only been taken up again by a subject nearly as joyful as the business of killing people on an industrial scale is horrifying: sport.

In one of the most cited papers in the history of psychology, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Thomas Gilovich, Robert Vallone and Amos Tversky studied how “players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot”—but “detailed analysis … provided no evidence for a positive correlation between the outcomes of successive shots.” Just as, in other words, the British airmen believed some crews had “skill” that kept them in the air, when in fact all that kept them aloft was, say, the poor aim of a German anti-aircraft gunner or a happily timed cloud, so too did the three co-authors find that, in basketball, people believed some shooters could get “hot.” That is, reel off seemingly impossible numbers of shots in a row, like when Ben Gordon, then with the Chicago Bulls, knocked down 9 consecutive three-pointers against Washington in 2006. But in fact such streaks are just what chance produces from a player’s overall shooting percentage: toss a coin enough times and the coin will produce “runs” of heads and tails too.
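
The coin analogy can be made concrete in a few lines (my own toy example, not the method of the Gilovich, Vallone and Tversky paper): give a simulated shooter a fixed fifty percent chance on every attempt and measure the longest streak of makes over a season’s worth of shots.

import random

def longest_streak(shots):
    best = run = 0
    for hit in shots:
        run = run + 1 if hit else 0
        best = max(best, run)
    return best

# A shooter whose underlying skill never changes: each shot is an independent 50-50.
shots = [random.random() < 0.5 for _ in range(1000)]
print("longest run of makes:", longest_streak(shots))

Over a thousand attempts the longest run typically comes out around nine or ten, even though nothing about the shooter ever changed; streaks of that length are simply what constant skill plus chance looks like.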

The “hot hand” concept in fact applies to more than simply the players: it extends to coaches also. “In sports,” says Leonard Mlodinow in his book The Drunkard’s Walk: How Randomness Rules Our Lives, “we have developed a culture in which, based on intuitive feelings of correlation, a team’s success or failure is often attributed largely to the ability of the coach”—a reality that perhaps explains just why, as Florida’s Lakeland Ledger reported in 2014, the average tenure of NFL coaches over the past decade has been 38 months. Yet as Mlodinow also says, “[m]athematical analysis of firings in all major sports … has shown that those firings had, on average, no effect on team performance”: fans (and perhaps more importantly, owners) tend to think of teams rising and falling based on their coach, while in reality a team’s success has more to do with the talent the team has.

Yet while sports are a fairly trivial part of most people’s lives, that is not true when it comes to our “coaches”: the managers who run large corporations. As Diane Stafford found out for the Kansas City Star a few years back, it turns out that American corporations have as little sense of the real value of CEOs as NFL owners have of their coaches: the “pay gap between large-company CEOs and average American employees,” Stafford said, “vaulted from 195 to 1 in 1993 to 354 to 1 in 2012.” Meanwhile, more than a third “of the men who appeared on lists ranking America’s 25 highest-paid corporate leaders between 1993 and 2012 have led companies bailed out by U.S. taxpayers, been fired for poor performance or led companies charged with fraud.” Just like the Lancasters flown by Dyson’s aircrews, American workers (and their companies’ stockholders) have been taken for a ride by men flying on the basis of luck, not skill.

Again, of course, many in what’s termed the “cultural” left would insist that they, too, stand with American workers against the bosses, that they, too, wish things were better, and that they, too, think paying twenty bucks for a hot dog and a beer is an outrage. What matters, however, isn’t what professors or artists or actors or musicians or the like say—just as it didn’t matter what Britain’s bomber pilots thought about their own skills during the war. What matters is what their jobs say. And the fact of the matter is that cultural production, whether it be in academia or in New York or in Hollywood, rests on the same premise as thinking you’re a hell of a pilot, or that you must be “hot,” or that Phil Jackson is a genius. That might sound counterintuitive, of course—I thought writers and artists and, especially, George Clooney were all on the side of the little guy!—but, as McLuhan says, what matters is the medium, not the message.

The point is likely easiest to explain in terms of the academic study of the humanities, because at least there people are forced to explain themselves in order to keep their jobs. What one finds, across the political spectrum, is some version of the same dogma: students in literary studies can, for instance, refer to American novelist James Baldwin’s insistence, in the 1949 essay “Everybody’s Protest Novel,” that “literature and sociology are not the same,” while, at the other end of the political spectrum, political science students can refer to Leo Strauss’ attack on “the ‘scientific’ approach to society” in his 1958 Thoughts on Machiavelli. Every discipline in the humanities has some version of the point, because without such a doctrine it couldn’t exist: without these doctrines, there’s just a bunch of people sitting in a room reading old books.

The effect of these dogmas can perhaps best be seen by reference to their philosophical version, which has the benefit of at least being clear. David Hume identified what is now called the “is-ought problem”; as the Scotsman claimed in A Treatise of Human Nature, “the distinction of vice and virtue is not founded merely on the relations of objects.” Later, in 1903’s Principia Ethica, British philosopher G.E. Moore called the same point the “naturalistic fallacy”: the idea that, as J.B. Schneewind of Johns Hopkins has put it, “claims about morality cannot be derived from statements of facts.” The advantage for philosophers is clear enough: if it’s impossible to talk about morality or ethics strictly by the light of science, that certainly justifies talking about philosophy to the exclusion of anything else. But in light of the facts about shooting hoops or being killed by delusional Germans, I would hope that the absurdity of Moore’s “idea” ought to be self-evident: if it can be demonstrated that something is a matter of luck, and not skill, that changes the moral calculation drastically.

That, then, is the problem with running a “left” based around the study of novels or rituals or films or whatever: at the end of the day, the study of the humanities, just like the practice of the arts, discourages the thought that, as Mlodinow puts it, “chance events are often conspicuously misinterpreted as accomplishments or failures.” And without such a consideration, I would suggest, any talk of “values” or “morality” or whatever you would like to call it, is empty. It matters whether your leader is lucky or skillful; it matters whether success is the result of hard work or of who your parents are—and a “left” built on the opposite premises is not, to my mind, a “left” at all. Although many people in the “cultural left,” then, might believe that their overt exhortations to virtue outweigh the covert message told by their institutional positions, reality tells a different tale: if you tell people they can fly, you should not be shocked when they crash.

The Weakness of Shepherds

 

Woe unto the pastors that destroy and scatter the sheep of my pasture! saith the LORD.
Jeremiah 23:1

 

Laquan McDonald was killed by Chicago police in the middle of Chicago’s Pulaski Road in October of last year; the video of his death was not released, however, until just before Thanksgiving this year. In response, Chicago mayor Rahm Emanuel fired police superintendent Garry McCarthy, while many have called for Emanuel himself to resign—actions that might seem to demonstrate just how powerful a single document can be; for example, according to former mayoral candidate Chuy Garcia, who forced Emanuel to the electoral brink earlier this year, had the video of McDonald’s death been released before the election, he (Garcia) might have won. Yet, so long ago as 1949, the novelist James Baldwin was warning against believing in the magical powers of any one document to transform the behavior of the Chicago police, much less any larger entities: the mistake, Baldwin says, of Richard Wright’s 1940 novel Native Son—a book about the Chicago police railroading a black criminal—is that, taken far enough, a belief in the revolutionary benefits of a “report from the pit” eventually allows us “a very definite thrill of virtue from the fact that we are reading such a book”—or watching such a video—“at all.” It’s a penetrating point, of course—but, in the nearly seventy years since Baldwin wrote, perhaps it might be observed that the real problem isn’t the belief in the radical possibilities of a book or a video, but the very belief in “radicalness” at all: for more than a century, American intellectuals have beaten the drum for dramatic phase transitions, while ignoring the very real and obvious political changes that could be instituted were there only the support for them. Or to put it another way, American intellectuals have for decades supported Voltaire against Leibniz—even though it’s Leibniz who likely could do more to prevent deaths like McDonald’s.

To say so of course is to risk seeming to speak in riddles: what do European intellectuals from more than two centuries ago have to do with the death of a contemporary American teenager? Yet, while it might be agreed that McDonald’s death demands change, the nature of that change is likely to be determined by our attitudes towards change itself—attitudes that can be represented by the German philosopher and scientist Gottfried Leibniz on the one hand, and on the other by the French philosophe François-Marie Arouet, who chose the pen name Voltaire. The choice between these two long-dead opponents will determine whether McDonald’s death registers as anything more than another nearly anonymous casualty.

Leibniz, the older of the two, is best known for his work inventing (at the same time as the Englishman Isaac Newton) calculus: a mathematical tool that is not only immensely important to the history of the world—virtually everything technological, from genetics research to flights to the moon, owes a debt to Leibniz’s innovation—but also, as Wikipedia puts it, “the mathematical study of change.” Leibniz’s predecessor, Johannes Kepler, had shown how to calculate the area of a circle by treating the shape as an infinite-sided polygon with “infinitesimal” sides: sides so short as to be unmeasurable, but still possessing a length. Leibniz’s (and Newton’s) achievement, in turn, showed how to make this sort of operation work in other contexts as well, on the grounds that—as Leibniz wrote—“whatever succeeds for the finite, also succeeds for the infinite.” In other words, Leibniz showed how to take what might otherwise be considered beneath notice (“infinitesimal”) or so vast and august as to be beyond merely human powers (“infinite”) and, by lumping it together, make it useful for human purposes. By treating change as a smoothly gradual process, Leibniz found he could apply mathematics in places previously thought too resistant to mathematical operations.
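To make that move concrete, here is a minimal worked version of the polygon argument described above, set down in modern notation rather than in anything Kepler or Leibniz actually wrote. A regular polygon inscribed in a circle of radius r splits into n thin triangles, and letting the sides become infinitesimally short recovers the familiar area:

\[
A_n \;=\; n \cdot \tfrac{1}{2}\, r^{2} \sin\!\left(\tfrac{2\pi}{n}\right)
\qquad\Longrightarrow\qquad
\lim_{n \to \infty} A_n \;=\; \tfrac{1}{2}\, r^{2} \cdot 2\pi \;=\; \pi r^{2},
\]
\[
\text{since } n \sin\!\left(\tfrac{2\pi}{n}\right) \to 2\pi \text{ as } n \to \infty.
\]

Each triangle on its own contributes next to nothing; summed, the negligible pieces yield the whole circle, which is the sense in which, for Leibniz, “whatever succeeds for the finite, also succeeds for the infinite.”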

Leibniz justified his work on the basis of what the biologist Stephen Jay Gould called “a deeply rooted bias of Western thought,” a bias that “predisposes us to look for continuity and gradual change: natura non facit saltum (‘nature does not make leaps’), as the older naturalists proclaimed.” “In nature,” Leibniz wrote in his New Essays, “everything happens by degrees, nothing by jumps.” Leibniz thusly justified the smoothing operation of calculus on the grounds that reality itself is smooth.

Voltaire, by contrast, ridiculed Leibniz’s stance. In Candide, the French writer depicted the shock of the Lisbon earthquake of 1755—and, thusly, refuted the notion that nature does not make leaps. At the center of Lisbon, after all, the earthquake opened five-meter-wide fissures in the earth—an earth which, quite literally, leaped. Today, many if not most scholars take a Voltairean, rather than Leibnizian, view of change: take, for instance, the writer John McPhee’s big book on the state of geology, Annals of the Former World.

“We were taught all wrong,” McPhee quotes Anita Harris, a geologist with the U.S. Geological Survey, as saying in that book: “We were taught,” says Harris, “that changes on the face of the earth come in a slow steady march.” Yet through the arguments of scientists like Bretz and Alvarez (more on them below), that is no longer accepted doctrine within geology; what the field now says is that the “steady march” just “isn’t what happens.” Instead, the “slow steady march of geologic time is punctuated with catastrophes.” And not only in geology: in fields from English literature to mathematics, the reigning ideas favor sudden, or Voltairean, rather than gradual, or Leibnizian, change.

Consider, for instance, how McPhee once described the very river to which Chicago owes a great measure of its existence, the Mississippi: “Southern Louisiana exists in its present form,” McPhee wrote, “because the Mississippi River has jumped here and there … like a pianist playing with one hand—frequently and radically changing course, surging over the left or the right bank to go off in utterly new directions.” J. Harlen Bretz is famous within geology for his work interpreting what are now known as the Channeled Scablands—Bretz found that the features he was seeing were the result of massive and sudden floods, not a gradual and continuous process—and Luis Alvarez proposed that the extinction event at the end of the Cretaceous Period of the Mesozoic Era, popularly known as the end of the dinosaurs, was caused by the impact of an asteroid near what is now Chicxulub, Mexico. And these are only examples of a Voltairean view within the natural sciences.

As the former editor of The Baffler, Thomas Frank, has made a career of saying, the American academy is awash in scholars hostile to Leibniz, whether they realize it or not. The humanities, for example, are bursting with professors “unremittingly hostile to elitism, hierarchy, and cultural authority.” And not just the academy: “the official narratives of American business” also “all agree that we inhabit an age of radical democratic transformation,” and “[c]ommercial fantasies of rebellion, liberation, and outright ‘revolution’ against the stultifying demands of mass society are commonplace almost to the point of invisibility in advertising, movies, and television programming.” American life generally, one might agree with Frank, is “a 24-hour carnival, a showplace of transgression and inversion of values.” We are all Voltaireans now.

But why should that matter?

It matters because under a Voltairean, “catastrophic” model, a sudden eruption like a video of a shooting, one that provokes the firing of the head of the police, might be considered a sufficient index of “change.” Which, in a sense, it obviously is: there will now be someone else in charge. Yet, in another sense—as James Baldwin knew—it isn’t at all: I suspect no one would wager that merely replacing the police superintendent significantly changes the odds of there being, someday, another Laquan McDonald.

Under a Leibnizian model, however, it becomes possible to tell the kind of story that Radley Balko told in The Washington Post in the aftermath of the shooting of Michael Brown by police officer Darren Wilson. In a story headlined “Problem of Ferguson isn’t racism—it’s de-centralization,” Balko described how Brown’s death wasn’t the result of “racism,” exactly, but rather due to the fact that the St. Louis suburbs are so fragmented, so Balkanized, that many of them are dependent on traffic stops and other forms of policing in order to make their payrolls and provide services. In short, police shootings can be traced back to weak governments—governments that are weak precisely because they do not gather up that which (or those who) might be thought to be beneath notice. The St. Louis suburbs, in other words, could be said to be analogous to the state of mathematics before the arrival of Leibniz (and Newton): rather than collecting the weak into something useful and powerful, these local governments allow the power of their voters to be diffused and scattered.

A Leibnizian investigator, in other words, might find that the problems of Chicago could be related to the fact that, in a survey of local governments conducted by the Census Bureau and reported by the magazine Governing, “Illinois stands out with 6,968 localities, about 2000 more than Pennsylvania, with the next-most governments.” As a recent study by David Miller, director of the Center for Metropolitan Studies at the University of Pittsburgh, found, the greater Chicago area is the most governmentally fragmented place in the United States, scoring first in Miller’s “metropolitan power diffusion index.” As Governing put what might be the salient point: “political patronage plays a role in preserving many of the state’s existing structures”—that is, by dividing up government into many, many different entities, forces for the status quo are able to dilute the influence of the state’s voters and thus effectively insulate themselves from reality.

“My sheep wandered through all the mountains, and upon every high hill,” observes the Jehovah of Ezekiel 34; “yea, my flock was scattered upon all the face of the earth, and none did search or seek after them.” But though in this way the flock “became a prey, and my flock became meat to every beast of the field,” the Lord Of All Existence does not then conclude by wiping out said beasts. Instead, the Emperor of the Universe declares: “I am against the shepherds.” Jehovah’s point, one might observe, is the same as Leibniz’s: no matter how powerless each infinitesimal sheep might be, gathered together they can become powerful enough to make journeys to the heavens. What Laquan McDonald’s death indicts, therefore, is not the wickedness of wolves—but, rather, the weakness of shepherds.

Left Behind

Banks and credit companies are, strictly speaking, the direct source of their illusory “income.” But considered more abstractly, it is their bosses who are lending them money. Most households are net debtors, while only the very richest are net creditors. In an overall sense, in other words, the working classes are forever borrowing from their employers. Lending replaces decent wages, masking income disparities even while aggravating them through staggering interest rates.
—Kim Phillips-Fein, “Chapters of Eleven”
The Baffler No. 11, 1998


Note: Since I began this blog by writing about golf, I originally wrote a short paragraph tying what follows to the FIFA scandal, on the perhaps-tenuous connection that the Clinton Foundation had accepted money from FIFA and Bill had been the chairman of the U.S. bid for the 2022 World Cup. But I think the piece works better without it.

“Why is it that women still get paid less than men for doing the same work?” presidential candidate Hillary Clinton asked recently in, of all places, Michigan. But the more natural question in the Wolverine State might seem to be the question a lot of economists are asking these days: “Why is everyone getting paid less?” Economists like Emmanuel Saez of the University of California, Berkeley, who says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928.” Or Nobel Prize winner Paul Krugman, who says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” But while it’s not difficult to imagine that Clinton asks the question she asks in a cynical fashion—in other words, to think that she is a kind of Manchurian candidate for Wall Street—it’s at least possible to think she asks it innocently. All Americans, says scholar Walter Benn Michaels, have been the victims of a “trick” over the last generation: the trick of responding to “economic inequality by insisting on the importance of … identity.” But how was the trick done?

The dominant pedagogy of the American university suggests one way: if it’s true that, as the professors say, reality is a function of the conceptual tools available, then maybe Hillary Clinton cannot see reality because she doesn’t have the necessary tools. As well she might not: in Clinton’s case, one might as well ask why a goldfish can’t see water. Raised in a wealthy Chicago suburb, then on to Wellesley and Yale Law School; then the governor’s mansion in Little Rock, Arkansas, and the White House; followed by Westchester County, then back to D.C. It’s true of course that Clinton did write a college thesis about Saul Alinsky’s community organizing tactics, so she cannot possibly be unfamiliar with the question of economic inequality. But it’s also easy to see how economics gets obscured in such places.

What’s perhaps stranger, though, is that economics, as a subject, should have become more obscure, not less, since Clinton left New Haven—and even if Clinton were wholly ignorant of the subject, that wouldn’t explain how she could then become her party’s candidate for president. Yet at about the same time that Clinton was at Yale, another young woman with bright academic credentials was living practically just down the road in Hartford, Connecticut—and the work she did has helped to ensure that, as Michaels says, “for the last 30 years, while the gap between the rich and the poor has grown larger, we’ve been urged to respect people’s identities.” That doesn’t mean, of course, that the story I am going to tell explains everything about why Hillary asked the question she asked in Michigan, instead of the one she should have asked; but it is, I think, illustrative—by telling this one story in depth, it becomes possible to understand how what Michaels calls the “trick” was pulled.

“In 1969,” Jane Tompkins tells us in “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History,” she “lived in the basement of a house on Forest Street in Hartford, Connecticut, which had belonged to Isabella Beecher Hooker—Harriet Beecher Stowe’s half-sister.” Living where she did sent Tompkins off on an intellectual journey that eventually led to the essay “Sentimental Power”—an essay that took up the question of why, as Randall Fuller observed not long ago in the magazine Humanities, “Uncle Tom’s Cabin was seen by most literary professionals as a cultural embarrassment.” Her conclusion was that Uncle Tom’s Cabin was squelched by a “male-dominated scholarly tradition that controls both the canon of American literature … and the critical perspective that interprets the canon for society.” To Tompkins, Uncle Tom’s Cabin was “repressed” on the basis of “identity”: Stowe’s work was called “trash”—as the Times of London in fact called it at the time of publication—because it was written by a woman.

To make her argument, however, required Tompkins to make several moves that go some way towards explaining why Hillary Clinton asks the question she asks, rather than the one she should ask. Most significant is Tompkins’ argument against the view she ascribes to her opponents: that “sentimental novels written by women in the nineteenth century”—like Uncle Tom’s Cabin—“were responsible for a series of cultural evils whose effects still plague us,” among them the “rationalization of an unjust economic order.” Already, Tompkins is telling her readers that she is going to argue against those critics who used Uncle Tom’s Cabin to discuss the economy; already, we are not far from Hillary Clinton’s question.

Next, Tompkins takes her critical predecessors to task for ignoring the novel’s “enormous popular success”: it was, as Tompkins points out, the first novel to sell “over a million copies.” So part of her argument concerns not only the bigotry but also the snobbishness of her opponents—an argument familiar enough to anyone who listens to right-wing talk radio. The distance from Tompkins’ argument to those who “argue” that quality is guaranteed by popularity, and vice versa—the old “if you’re so smart, why ain’t you rich” line—is about the distance from the last letter in this sentence to its period. Tompkins, in other words, deprecates the idea that value can be independent of “success”—the idea that there can be slippage between an economic system and reality.

Yet perhaps the largest step Tompkins takes on the road to Hillary’s question concerns how she ascribes criticisms of Uncle Tom’s Cabin to sexism—to Stowe’s status as a woman—despite the fact that perhaps the best-known critical text on the novel, James Baldwin’s 1949 essay “Everybody’s Protest Novel,” was not only written by a gay black man, but based its criticism of Stowe’s novel on rules originally applied to a white male author: James Fenimore Cooper, the object of Mark Twain’s scathing 1895 essay, “Fenimore Cooper’s Literary Offenses.” That essay, with which Twain sought to bury Cooper, furnished the critical precepts Baldwin uses to attempt to bury Stowe.

Stowe’s work, Baldwin says, is “a very bad novel” for two reasons: first, it is full of “excessive and spurious emotion”; second, the novel “is activated by what might be called a theological terror,” so that “the spirit that breathes in this book … is not different from that spirit of medieval times which sought to exorcise evil by burning witches.” Both of these reasons derive from principles propounded by Twain in “Fenimore Cooper’s Literary Offenses.”

“Eschew surplusage” is number fourteen of Twain’s rules, so when Baldwin says Stowe’s writing is “excessive,” he is implicitly accusing Stowe of breaking that rule. Even Tompkins admits that Uncle Tom’s Cabin breaks it when she says that Stowe’s novel possesses “a needless proliferation of incident.” Then, number nine on Twain’s list is “that the personages of a tale shall confine themselves to possibilities and let miracles alone”—the rule Baldwin invokes when he criticizes Uncle Tom’s Cabin for its “theological terror.” Burning witches, after all, requires a belief in miracles—i.e., the supernatural—and certainly Stowe, who not only famously claimed that “God wrote” her novel but also suffused it with supernatural events, held such a belief. So, if Baldwin—who, remember, was both black and homosexual—is condemning Stowe on the basis of rules originally used against a white male writer, it’s difficult to see how Stowe is being unfairly singled out on the basis of her sex. But that is what Tompkins says.

I take such time on these points because ultimately Twain’s rules go back much further than Twain himself—and it’s ultimately these roots that are both Tompkins’ object and, I suspect, the reason why Hillary asks the question she asks instead of the one she should. Twain’s ninth rule, concerning miracles, is more or less a restatement of what philosophers call naturalism: the belief “that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity,” according to the Stanford Encyclopedia of Philosophy. And the roots of that idea trace back to the original version of Twain’s fourteenth rule (“Eschew surplusage.”): Thomas Aquinas, in his Summa Theologica, gave one example of it when he wrote that if “a thing can be done adequately by means of one, it is superfluous to do by several.” (In a marvelous economy, in other words, Twain reduced Aquinas’ rule—sometimes known as “Occam’s Razor”—to two words.) So it’s possible to say that Baldwin’s two criticisms of Stowe are actually the same criticism: that “excessive” writing leads to, or perhaps more worrisomely just is, a belief in the supernatural.

It’s this point that Tompkins ultimately wants to address—she calls Uncle Tom’s Cabin “the Summa Theologica of nineteenth-century America’s religion of domesticity,” after all. Notably, Tompkins doesn’t try to defend Stowe against Baldwin on the same grounds on which two other critics defended Cooper against Twain. In an essay titled “Fenimore Cooper’s Literary Defenses,” Lance Schachterle and Kent Ljungquist argue that Twain doesn’t do justice to Cooper because he doesn’t take into account the different literary climate of Cooper’s time. While “Twain valued economy of style,” they write, “such concision simply was not a characteristic of many early nineteenth-century novelists’ work.” They’re willing to allow, in other words, the merits of Twain’s rules—they’re just arguing that it isn’t fair to apply those rules to writers who could not have been aware of them. Tompkins, however, takes a different tack: she says that in Uncle Tom’s Cabin, “it is the spirit alone that is finally real.” According to Tompkins, the novel is not just unaware of naturalism; it actively rejects naturalism.

To Tompkins, Stowe’s anti-naturalism is somehow a virtue. Stowe’s rejection of naturalism leads her to recommend, Tompkins says, “not specific alterations in the current political and economic arrangements but rather a change of heart … as the necessary precondition for sweeping social change.” To Stowe, attempts to “alter the anti-abolitionist majority in the Senate,” for instance, are absurdities: “Reality, in Stowe’s view, cannot be changed by manipulating the physical environment.” Apparently, this is a point in Stowe’s favor.

Without naturalism and its corollaries—basic intellectual tools—it’s difficult to think a number of things: that all people are people, first of all. That is, that all of us are members of a species that has had, more or less, the same cognitive abilities for at least the last 100,000 years or so—which implies that most people’s cognitive abilities aren’t much different from anyone else’s, nor much different from those of anyone in history. Which, one might say, is a prerequisite for running a democratic state—as opposed to, say, a monarchy or aristocracy, in which one person is better than another by right of blood. But if naturalism is dead, then the growth of “identity” politics is perhaps easy to understand: without the conceptual category of “human being” available, other categories have to be substituted.

Without grouping votes on some basis, how could they be gathered into large enough clumps to make a difference? Hillary Clinton must ask for votes on the basis of some commonality between voters large enough to ensure her election. Assuming that she does, in fact, wish to be elected, it’s enlightening to observe that Clinton is appealing for votes on the basis of the next largest category after “human being”—“woman,” the category of 51 percent of the population according to most figures. That alone might explain why Hillary Clinton should ask “Why are women paid less” rather than “Why is everyone paid less?”

Yet the effects of Tompkins’ argument, as I suspect will be drearily apparent to the reader by now, are readily observable in many more places in today’s world than Hillary Clinton’s campaign. Think of it this way: what else are contemporary phenomena like unpaid internships, “doing it for the exposure,” or just trying to live on a minimum wage or public assistance, but attempts to live without material substance—that is, attempts to live as a “spirit”? Or, for that matter, what is credit card debt, which Kim Phillips-Fein, writing in The Baffler as long ago as 1998, explained as what happens when “people began to borrow to make up for stagnant wages”? These are all matters in which what matters isn’t matter—i.e., the material—but the “spirit.”

In the same way, what else was “Ketchup,” the “long-time” Occupy Wall Street camper, doing when she told Josh Harkinson of Mother Jones that the “whole big desire for demands is something people want to use to co-opt us,” but refusing, as Tompkins would put it, to delineate “specific alterations in the current political and economic arrangements”? That’s why Occupy, as Thomas Frank memorably wrote in his essay “To the Precinct Station,” “seems to have had no intention of doing anything except building ‘communities’ in public spaces and inspiring mankind with its noble refusal to have leaders.” The values described by Tompkins’ essay are, specifically, anti-naturalist: Occupy Wall Street, and its many, many sympathizers, was an anti-naturalist—a religious—movement.

It may, to be sure, be little wonder that feminists like Tompkins should look to intellectual traditions explicitly opposed to the intellectual project of naturalism—most texts written by women have been written by religious women. So have most texts written by most people everywhere—to study a “minority” group virtually requires studying texts written by people who believed in a supernatural being. It’s wholly understandable, then, that anti-naturalism should have become the default mode of people who claim to be on the “left.” But while it’s understandable, it’s no way to, say, raise wages. Whatever Jane Tompkins says about her male literary opponents, Harriet Beecher Stowe didn’t free anybody. Abraham Lincoln—by most accounts no orthodox believer—did.

Which is Hillary Clinton’s model?