Best Intentions

L’enfer est plein de bonnes volontés ou désirs (“Hell is full of good intentions or desires”)
—St. Bernard of Clairvaux. c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for doing that asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. Although, to many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations, it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as the (nearly) official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia argued his point throughout his career: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s perspective, among the most rarefied heights of literary theory: it was after all the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime-favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew, Bérubé has a simple method by which he could avoid hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, just because anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—is no reason to dismiss the thought of anti-intentionalism. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent of the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was in fact created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Because the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as the historian Garry Wills writes, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people,” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the Constitution—and thus his reasons for fighting the war—ultimately derived from Wilson.

 

[Image: Not this James Wilson.]

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

[Image: James Wilson (1742-1798). This James Wilson.]

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not inanimate objects like the “soil,” that requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law in 1790 (the very first such legal works in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author write. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The wonder of that discovery is then reinforced when, in this example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless asked us, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.


Comedy Bang Bang

In other words, the longer a game of chance continues the larger are the spells and runs of luck in themselves,
but the less their relative proportions to the whole amounts involved.
—John Venn. The Logic of Chance. (1888). 

 

“A probability that is very small for a single operation,” reads the RAND Corporation paper mentioned in journalist Sharon McGrayne’s The Theory That Would Not Die, “say one in a million, can become significant if this operation will occur 10,000 times in the next five years.” The paper, “On the Risk of an Accidental or Unauthorized Nuclear Detonation,” was just what it says on the label: a description of the chances of an unplanned atomic explosion. Previously, American military planners had assumed “that an accident involving an H-bomb could never occur,” but the insight of this paper was that overall risk changes depending upon volume—an insight that ultimately depended upon a discovery first described by mathematician Jacob Bernoulli in 1713. Now called the “Law of Large Numbers,” Bernoulli’s thought was that “it is not enough to take one or another observation … but that a large number of them are needed”—it’s what allows us to conclude, Bernoulli wrote, that “someone who intends to throw at once three sixes with three dice, should be considered reckless even if winning by chance.” Yet while recognition of the law—which predicts that even low-probability events become likely if there are enough of them—considerably changed how the United States handled nuclear weapons, it has had essentially no impact on how the United States handles certain conventional weapons: the estimated 300 million guns held by its citizens. One possible reason why that may be, the work of Vox.com founder Ezra Klein suggests, is that arguments advanced by departments of literature, women’s studies, African-American studies and other such academic “disciplines” more or less openly collude with the National Rifle Association to prevent sensible gun control laws.
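The arithmetic behind the RAND paper’s point is worth making concrete. Below is a minimal sketch, using only the figures quoted above (a one-in-a-million chance per operation, repeated 10,000 times) plus Bernoulli’s dice as a second, purely illustrative case, of how a negligible single-trial risk compounds across many trials:

```python
# A back-of-the-envelope check of the RAND paper's point: a risk that is tiny
# for a single operation stops being negligible once the operation is repeated
# many times. The per-operation probability and the count are the ones quoted
# above ("one in a million ... 10,000 times").
p_single = 1e-6          # chance of an accident in any single operation
n_operations = 10_000    # operations expected over the next five years

p_at_least_one = 1 - (1 - p_single) ** n_operations
print(f"chance of at least one accident: {p_at_least_one:.2%}")   # roughly 1%

# Bernoulli's dice work the same way: three sixes on a single throw of three
# dice is a 1-in-216 shot, but across many throws it becomes unsurprising.
p_three_sixes = (1 / 6) ** 3
print(f"chance of three sixes somewhere in 100 throws: {1 - (1 - p_three_sixes) ** 100:.0%}")
```

One percent may still sound small, but for an accidental nuclear detonation it is anything but; that is the sense in which the paper’s authors meant “significant.”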

The inaugural “issue” of Vox contained Klein’s article “How Politics Makes Us Stupid”—an article that asked the question, “why isn’t good evidence more effective in resolving political debates?” According to the consensus wisdom, Klein says, “many of our most bitter political battles are mere misunderstandings” caused by a lack of information—in this view, all that’s required to resolve disputes is more and better data. But, Klein also writes, current research shows that “the more information partisans get, the deeper their disagreements become”—because there are some disagreements “where people don’t want to find the right answer so much as they want to win the argument.” In other words, while some disagreements can be resolved by considering new evidence—as the Strategic Air Command changed how it handled nuclear weapons in light of a statistician’s recall of Bernoulli’s work—some disagreements, like gun control, cannot.

The work Klein cites was conducted by Yale Law School professor Daniel Kahan, along with several co-authors, and it began—Klein says—by recruiting 1,000 Americans and then surveying both their political views and their mathematical skills. At that point, Kahan’s group gave participants a puzzle, which asked them to judge an experiment designed to show whether a new skin cream was more or less likely to make a skin condition worse or better, based on the data presented. The puzzle, however, was jiggered: although, in raw numbers, many more people got better using the skin cream than got worse using it, the percentage of cream users who got worse was actually higher than the percentage of non-users who got worse. In other words, if you paid attention merely to the raw numbers, the data might appear to indicate one thing, while a calculation of percentages showed something else. As it turns out, most people relied on the raw numbers—and were wrong; meanwhile, people with higher mathematical skill were able to work through the problem to the right answer.
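The trap is easier to see with numbers in front of you. The figures below are made up for illustration (they are not the counts Kahan actually used), but they reproduce the structure just described: the raw counts flatter the cream, the rates condemn it.

```python
# Illustrative (made-up) counts reproducing the structure of Kahan's puzzle:
# the raw numbers point one way, the rates point the other.
groups = {
    "used cream":   {"better": 200, "worse": 100},
    "did not use":  {"better": 80,  "worse": 20},
}

for name, counts in groups.items():
    total = counts["better"] + counts["worse"]
    rate_worse = counts["worse"] / total
    print(f"{name:>12}: {counts['better']:3d} better, {counts['worse']:3d} worse "
          f"-> {rate_worse:.0%} got worse")

# Raw counts: twice as many cream users got better (200) as got worse (100).
# Rates: 33% of cream users got worse versus 20% of non-users -- so, judged
# properly, the cream group actually did worse.
```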

Interestingly, however, the results of this study did not demonstrate to Kahan that perhaps it is necessary to increase scientific and mathematical education. Instead, Kahan argues that the attempt by “economists and other empirical social scientists” to shear the “emotional trappings” from the debate about gun control in order to make it “a straightforward question of fact: do guns make society less safe or more” is misguided. Rather, because guns are “not just ‘weapons or pieces of sporting equipment,’” but “are also symbols,” the proper terrain to contest is not the grounds of empirical fact, but the symbolic: “academics and others who want to help resolve the gun controversy should dedicate themselves to identifying with as much precision as possible the cultural visions that animate this dispute.” In other words, what ought to structure this debate is not science, but culture.

To many on what’s known as the “cultural left,” of course, this must be welcome news: it amounts to a recognition of “academic” disciplines like “cultural studies” and the like that have argued for decades that cultural meanings trump scientific understanding. As Canadian philosopher Ian Hacking put it some years ago in The Social Construction of What?, a great deal of work in those fields of “study” has made claims that approach saying “that scientific results, even in fundamental physics, are social constructs.” Yet though the point has, as I can attest from personal experience, become virtually common sense in departments of the humanities, there are several means of understanding the phrase “social construct.”

As English professor Michael Bérubé has remarked, much of that work can be described as “following the argument Heidegger develops at the end of the first section of Being and Time,” where the German philosopher (and member of the Nazi Party) argued that “we could also say that the discovery of Neptune in 1846 could plausibly be described, from a strictly human vantage point, as the ‘invention’ of Neptune.” In more general terms New York University professor Andrew Ross—the same Ross later burned in what’s become known as the “Sokal Affair”—described one fashion in which such an argument could go: by tracing how a “scientific theory was advanced through power, authority, persuasion and responsiveness to commercial interests.” Of course, as a piece by Joy Pullmann in the conservative Federalist recently described, such views have filtered throughout the academy, leading at least one doctoral student to claim in her dissertation at the education department of the University of North Dakota that “language used in the syllabi” of eight science classes she reviewed

reflects institutionalized STEM teaching practices and views about knowledge that are inherently discriminatory to women and minorities by promoting a view of knowledge as static and unchanging, a view of teaching that promotes the idea of a passive student, and by promoting a chilly climate that marginalizes women.

The language of this description, interestingly, equivocates between the claim that some, or most, scientists are discriminatory (a relatively safe claim) and the notion that there is something inherently discriminatory about science itself (the radical claim)—an equivocation that itself indicates something of the “cultural” view. Yet although, as in this latter example, claims regarding the status of science are often advanced on the grounds of discrimination, it seems to escape those making such claims just what sort of ground is conceded politically by taking science as one’s adversary.

For example, here is the problem with Kahan’s argument over gun control: by agreeing to contest on cultural grounds, pro-gun-control advocates would be conceding their very strongest argument, for the Law of Large Numbers is not an incidental feature of science, but one of its very foundations. (It could perhaps even be the foundation, because science proceeds on the basis of replicability.) Kahan’s recommendation, in other words, might appear not so much a change in tactics as an outright surrender: it’s only in the light of the Law of Large Numbers that the pro-gun-control argument is even conceivable. Hence, it is very difficult to understand how an argument can be won if one’s best weapon is, I don’t know, controlled. In effect, conceding the argument made in the RAND paper quoted above is more or less to give up on the very idea of reducing the numbers of firearms, so that American streets could perhaps be safer—and American lives protected.

Yet another, and even larger-scale, problem with taking the so-called “cultural turn,” as Kahan advises, is that abandoning the tools of the Law of Large Numbers does not merely concede ground on the gun control issue alone. It also does so on a host of other issues—perhaps foremost among them matters of political representation itself. For example, it prevents an examination of the Electoral College from a scientific, mathematically knowledgeable point of view—as I attempted to do in my piece, “Size Matters,” from last month. It may help to explain what Congressman Steve Israel of New York meant when journalist David Daley, author of a recent book on gerrymandering, interviewed him on the practical effects of gerrymandering in the House of Representatives (a subject that requires strong mathematical knowledge to understand): “‘The Republicans have always been better than Democrats at playing the long game.’” And there are other issues besides—all of which is to say that, by attacking science itself, the “cultural left” may literally be preventing government from interceding on the part of the very people for whom they claim to speak.

Some academics involved in such fields have, in fact, begun to recognize this very point: all the way back in 2004, one of the field’s chief figures, Bruno Latour, dared to ask himself “Was I wrong to participate in the invention of this field known as science studies?” The very idea of questioning the institution of that field can, however, seem preposterous: even now, as Latour also wrote then, there are

entire Ph.D. programs … still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives.

Indeed. It has reached the point, in fact, where it would be pretty easy to think that the supposed “left” doesn’t really want to win these arguments at all—that, perhaps, they just wish to go out …

With a bang.

The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.

 

“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e. what didn’t happen. But even in 1998, there was evidence to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty accused then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prohibit what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after drawing such districts there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that then grew until, beginning with the 106th Congress (1999-2001), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total numbers of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making them virtually impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by more than 50 percent of the vote, while Mitt Romney won only eight districts by such a large percentage. Figures like these could seem overwhelmingly in favor of the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively losing votes in others.

That’s why—despite the fact that he lost the popular vote—in 2012 Romney’s party won 226 of 435 Congressional districts, while Obama’s party won 209. In this past election, as I’ve mentioned in past posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive when these percentages always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.
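The size of that gap is easy to compute from the figures just quoted. The sketch below uses only the numbers given above (49.9% and 47.3% of the vote, 241 Republican seats out of 435) plus one assumption of mine: that the remaining 435 - 241 = 194 seats went to Democrats.

```python
# Seat share versus vote share for the 2016 House, using the figures quoted in
# the text; the Democratic seat count (435 - 241 = 194) is an assumption that
# every seat went to one of the two parties.
HOUSE_SEATS = 435
results = {
    "Republicans": {"vote_share": 0.499, "seats": 241},
    "Democrats":   {"vote_share": 0.473, "seats": HOUSE_SEATS - 241},
}

for party, r in results.items():
    seat_share = r["seats"] / HOUSE_SEATS
    bonus = seat_share - r["vote_share"]
    print(f"{party:>11}: {r['vote_share']:.1%} of the vote -> {seat_share:.1%} of the seats "
          f"({bonus:+.1%} relative to vote share)")
```

Run it and the Republican “seat bonus” comes out to roughly five and a half points, while the Democrats land nearly three points below their vote share, which is the single direction the paragraph above describes.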

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, his argument merely raises the question: why should geography have anything to do with voting? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority-minority districts” imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

Like discussions about, for example, the Electoral College—in which a vote in Wyoming is much more valuable than one in California—Silver’s position, in other words, implies that minority votes will remain less valuable than other votes, because a vote in a “majority-minority” district will have less probability of electing a congressperson who is a member of a majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there isn’t any issue with majority-minority districts because they reflect the underlying proportions of people—but what matters is whether whoever’s elected can enact policies that benefit their constituents.

Right here, in other words, we get to the heart of the dispute between the deceased Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the numbers of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts after all are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes are, like majority-minority districts, similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats are, the less capable their party becomes of accomplishing its goals. While winning individual races by huge margins might be satisfying to some, no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures vote,” Chief Justice of the Supreme Court Earl Warren observed in 1964; that same year, in Wesberry v. Sanders, the Supreme Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. His opponents ought to reply by saying they’re interested in what people could be—and building a real road to get there.

Stormy Weather

They can see no reasons …
—“I Don’t Like Mondays” 
The Boomtown Rats.
The Fine Art of Surfacing. 1979.

 

“Since Tuesday night,” John Cassidy wrote in The New Yorker this week, “there has been a lot of handwringing about how the media, with all its fancy analytics, failed to foresee Donald Trump’s victory”: as the New York Times headline had it, “How Data Failed Us in Calling an Election.” The failure of Nate Silver and other statistical analysts in the lead-up to Election Day rehearses, once again, a seemingly ancient argument between what are now known as the sciences and the humanities—an argument sometimes held to be as old as the moment when Herodotus (the “Father of History”) asserted that his object in telling the story of the Greco-Persian Wars of 2500 years ago was “to set forth the reasons why [the Greeks and Persians] wage war on each other.” In other words, Herodotus thought that, to investigate war, it was necessary to understand the motives of the people who fought it—just as Cassidy says the failure of the press to get it right about this election was “a failure of analysis, rather than of observation.” The argument both Herodotus and Cassidy are making is the seemingly unanswerable one that it is the interpretation of the evidence, rather than the evidence itself, that is significant—a position that seems inarguable so long as you aren’t in the Prussian Army, dodging Nazi bombs during the last year of the Second World War, or living in Malibu.

The reason why it seems inarguable, some might say, is because the argument both Herodotus and Cassidy are making is inescapable: obviously, given Herodotus’ participation, it is a very ancient one, and yet new versions are produced all the time. Consider for instance a debate conducted by English literature professor Michael Bérubé and philosopher John Searle some years ago, about a distinction between what Searle called “brute fact” and “social fact.” “Brute facts,” Bérubé wrote later, are “phenomena like Neptune, DNA, and the cosmic background radiation,” while the second kind are “items whose existence and meaning are obviously dependent entirely on human interpretation,” such as “touchdowns and twenty-dollar bills.” Like Searle, most people might like to say that “brute fact” is clearly more significant than “social fact,” in the sense that Neptune doesn’t care what we think about it, whereas touchdowns and twenty dollar bills are just as surely entirely dependent on what we think of them.

Not so fast, said Bérubé: “there’s a compelling sense,” the professor of literature argued, in which social facts are “prior to and even constitutive of” brute facts—if social facts are the means by which we obtain our knowledge of the outside world, then social facts could “be philosophically prior to and certainly more immediately available to us humans than the world of brute fact.” The only way we know about Neptune is because a number of human beings thought it was important enough to discover; Neptune doesn’t give a damn one way or the other.

“Is the distinction between social facts and brute facts,” Bérubé therefore asks, “a social fact or a brute fact?” (Boom! Mic drop.) That is, whatever the brute facts are, we can only interpret them in the light of social facts—which would seem to grant priority to those disciplines dealing with social facts, rather than those disciplines that deal with brute fact; Hillary Clinton, Bérubé might say, would have been better off hiring an English professor, rather than a statistician, to forecast the election. Yet, despite the smugness with which Bérubé delivers what he believes is a coup de grâce, it does not seem to occur to him that traffic between the two realms can also go the other way: while it may be possible to see how “social facts” subtly influence our ability to see “brute facts,” it’s also possible to see how “brute facts” subtly influence our ability to see “social facts.” It’s merely necessary to understand how the nineteenth-century Prussian Army treated its horses.

The book that treats that question about German military horsemanship is called The Law of Small Numbers, which was published in 1898 by one Ladislaus Bortkiewicz: a Pole who lived in the Russian Empire and yet conducted a study on data about the incidence of deaths caused by horse kicks in the nineteenth-century Prussian Army. Apparently, this was a cause of some concern to military leaders: they wanted to know whether, say, an army corps that experienced several horse-kick deaths in a year—an exceptional number of deaths in this category—was using bad techniques, or whether it had simply happened to buy particularly ornery horses. Why, in short, did some corps have what looked like an epidemic of horse kick deaths in a given year, while others might go for years without a single death? What Bortkiewicz found answered those questions—though perhaps not in a fashion the army brass might have liked.

Bortkiewicz began by assembling data about the number of fatal horse kicks in fourteen Prussian army corps over twenty years, which he then combined into “corps-years”: each corps, in each year, counted as a single observation. What he found—as E.J. Gumbel puts it in the International Encyclopedia of the Social Sciences—was that for “over half the corps-year combinations there were no deaths from horse kicks,” while “for the other combinations the number of deaths ranged up to four.” In most years, in other words, no one was killed in any given corps by a horse kick, while in some years someone was—and in terrible years four were. Deaths by horse kick, then, were uncommon, which meant they were hard to study: given that they happened so rarely, it was difficult to determine what caused them—which was why Bortkiewicz had to assemble so much data about them. By doing so, the Russian Pole hoped to be able to isolate a common factor among these deaths.

In the course of studying these deaths, Bortkiewicz ended up independently re-discovering something that a French mathematician, Siméon Denis Poisson, had already, in 1837, used in connection with discussing the verdicts of juries: an arrangement of data now known as the Poisson distribution. And as the mathematics department at the University of Massachusetts is happy to tell us (https://www.umass.edu/wsp/resources/poisson/), the Poisson distribution applies when four conditions are met: first, “the event is something that can be counted in whole numbers”; second, “occurrences are independent, so that one occurrence neither diminishes nor increases the chance of another”; third, “the average frequency of occurrence for the time period in question is known”; and finally “it is possible to count how many events have occurred.” If these things are known, it seems, the Poisson distribution will tell you how often the event in question will happen in the future—a pretty useful feature for, say, predicting the results of an election. But that wasn’t what was intriguing about Bortkiewicz’ study: what made it important enough to outlast the government that commissioned it was that Bortkiewicz found that the Poisson distribution “may be used in reverse”—a discovery that ended up telling us about far more than the care of Prussian horses.
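For reference, the formula those four conditions license can be stated in a single line (standard notation, not the UMass page’s wording): if events occur independently at an average rate of λ per interval, the chance of seeing exactly k of them in any one interval is

$$P(k) = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots$$

Plug in the observed average rate and the formula returns the share of intervals that should, by chance alone, contain zero events, one event, two events, and so on; that is the comparison Bortkiewicz ran “in reverse.”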

What Bortkiewicz realized, as Aatish Bhatia of Wired wrote some years ago, was that “he could use Poisson’s formula to work out how many deaths you could expect to see” if the deaths from horse kicks in the Prussian army were random. The key to the Poisson distribution, in other words, is the second condition, “occurrences are independent, so that one occurrence neither diminishes nor increases the chance of another”: a Poisson distribution describes processes that are like the flip of a coin. As everyone knows, each flip of a coin is independent of the one that came before; hence, the record of successive flips is the record of a random process—a process that will leave its mark, Bortkiewicz understood.

A Poisson distribution maps a random process; therefore, if a set of data matches a Poisson distribution, the process that produced it is very likely a random one. Data that match the results a Poisson distribution would predict were probably generated by a process in which each occurrence is independent of those that came before. As the UMass mathematicians say, “if the data are lumpy, we look for what might be causing the lump,” while conversely, if “the data fit the Poisson expectation closely, then there is no strong reason to believe that something other than random occurrence is at work.” Anything that follows a Poisson distribution is likely the result of a random process; hence, what Bortkiewicz had discovered was a tool to find randomness.
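Here is what that “reverse” use looks like in practice. The tallies below are the ones commonly reproduced from Bortkiewicz’ study in statistics textbooks (200 corps-years, after a handful of atypical corps are excluded); treat them as illustrative rather than as a transcription of his original tables.

```python
# A minimal sketch of the "reverse" use of the Poisson distribution: compare
# observed horse-kick tallies to what pure chance would predict. The tallies
# are the figures commonly reproduced from Bortkiewicz's study (200 corps-
# years); they are illustrative, not a transcription of his original tables.
from math import exp, factorial

observed = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}    # deaths per corps-year -> number of corps-years

corps_years = sum(observed.values())              # 200
total_deaths = sum(k * n for k, n in observed.items())
lam = total_deaths / corps_years                  # average deaths per corps-year (about 0.61)

def poisson_pmf(k, lam):
    """Probability of exactly k events when the average rate is lam."""
    return (lam ** k) * exp(-lam) / factorial(k)

print(f"average deaths per corps-year: {lam:.2f}")
for k in sorted(observed):
    expected = corps_years * poisson_pmf(k, lam)
    print(f"{k} deaths: observed {observed[k]:3d} corps-years, Poisson predicts {expected:5.1f}")
```

The observed and predicted columns track each other closely, which is the Bortkiewicz result: nothing beyond chance (no especially bad riders, no especially vicious horses) is needed to explain the occasional terrible year.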

Take, for example, the case of German V-2 rocket attacks on London during the last years of World War II—the background, as it happens, to novelist Thomas Pynchon’s Gravity’s Rainbow. As Pynchon’s book relates, the flying missiles were falling in a pattern: some parts of London were hit multiple times, while others were spared. Some Londoners argued that this “clustering” demonstrated that the Germans must have discovered a way to guide these missiles—something that would have been highly, highly advanced for mid-twentieth century technology. (Even today, guided missiles are incredibly advanced: much less than ten percent of all the bombs dropped during the 1991 Gulf War, for instance, had “smart bomb” technology.) So what British scientist R. D. Clarke did was to look at the data on where all the missiles that fell on London had actually landed. What he found was that the results matched a Poisson distribution—the Germans did not possess super-advanced guidance systems. They were just lucky.
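You can watch the same effect appear out of nothing with a simulation. The sketch below is hypothetical: it is not Clarke’s data, and the grid and impact counts are just sizes loosely modeled on the figures usually cited for his study. But it makes the point: scatter impacts completely at random over a map divided into equal squares and you still get “clusters,” in numbers the Poisson formula predicts.

```python
# A hypothetical simulation in the spirit of Clarke's analysis (not his data):
# scatter impacts uniformly at random over a grid of equal squares, count how
# many squares take 0, 1, 2, ... hits, and compare to the Poisson prediction.
import random
from collections import Counter
from math import exp, factorial

random.seed(0)
n_squares, n_impacts = 576, 537            # illustrative sizes only

hits_per_square = Counter(random.randrange(n_squares) for _ in range(n_impacts))
tally = Counter(hits_per_square[sq] for sq in range(n_squares))

lam = n_impacts / n_squares                # average hits per square
for k in range(6):
    expected = n_squares * (lam ** k) * exp(-lam) / factorial(k)
    print(f"{k} hits: simulated {tally[k]:3d} squares, Poisson predicts {expected:6.1f}")
```

Even with nothing but a random number generator doing the “aiming,” a few squares end up hit three or four times, the very clustering that looked, on the ground, like evidence of a guidance system.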

Daniel Kahneman, the Israeli psychologist, has a similar story: “‘During the Yom Kippur War, in 1973,’” Kahneman told New Yorker writer Atul Gawande, he was approached by the Israeli Air Force to investigate why, of two squadrons that took to the skies during the war, “‘one had lost four planes and the other had lost none.’” Kahneman told them not to waste their time, because a “difference of four lost planes could easily have occurred by chance.” Without knowing about Bortkiewicz, that is, the Israeli Air Force “would inevitably find some measurable differences between the squadrons and feel compelled to act on them”—differences that, in reality, mattered not at all. Presumably, Israel’s opponents were bound to hit some of Israel’s warplanes; it just so happened that they were clustered in one squadron and not the other.
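Just how “easily” is worth spelling out. Under the simplest chance model I can assume here (each of the four losses equally likely, independently, to fall on either squadron, which is my simplification rather than Kahneman’s stated reasoning), the probability that all four land on the same squadron is

$$P(\text{all four losses in one squadron}) = 2 \times \left(\tfrac{1}{2}\right)^{4} = \tfrac{1}{8} = 12.5\%$$

Roughly one war in eight, in other words, would hand the brass a split that lopsided with no difference at all between the squadrons.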

Why though, should any of this matter in terms of the distinction between “brute” and “social” facts? Well, consider what Herodotus wrote more than two millennia ago: what matters, when studying war, is the reasons people had for fighting. After all, wars are some of the best examples of a “social fact” anywhere: wars only exist, Herodotus is claiming, because of what people think about them. But what if it could be shown that, actually, there’s a good case to be made for thinking of war as a “brute fact”—something more like DNA or Neptune than like money or a home run? As it happens, at least one person, following in Bortkiewicz’ footsteps, already has.

In November of 1941, the British meteorologist and statistician Lewis Fry Richardson published, in the journal Nature, a curious article entitled “Frequency of Occurrence of Wars and Other Quarrels.” Richardson, it seems, had had enough of the endless theorizing about war’s causes: whether it be due to, say, simple material greed, or religion, or differences between various cultures or races. (Take for instance the American Civil War: according to some Southerners, the war could be ascribed to the racial differences between Southern “Celtics” versus Northern “Anglo-Saxons”; according to William Seward, Abraham Lincoln’s Secretary of State, the war was due to the differences in economic systems between the two regions—while to Lincoln himself, perhaps characteristically, it was all due to slavery.) Rather than argue with the historians, Richardson decided to instead gather data: he compiled a list of real wars going back centuries, then attempted to analyze the data he had collected.

What Richardson found was, to say the least, highly damaging to Herodotus: as Brian Hayes puts it in a recent article in American Scientist about Richardson’s work, when Richardson compared a group of wars with similar numbers of casualties to a Poisson distribution, he found that the “match is very close.” The British scientist also “performed a similar analysis of the dates on which wars ended—the ‘outbreaks of peace’—with the same result.” Finally, he checked another data set concerning wars, this one compiled by the University of Chicago’s Quincy Wright—“and again found good agreement.” “Thus,” Hayes writes, “the data offer no reason to believe that wars are anything other than randomly distributed accidents.” Although Herodotus argued that the only way to study wars is to study the motivations of those who fought them, there may in reality be no more “reason” for the existence of war than for the existence of a forest fire in Southern California.

Herodotus, to be sure, could not have seen that: the mathematics of his time were nowhere near sophisticated enough to compute a Poisson distribution. Therefore, the Greek historian was eminently justified in thinking that wars must have “reasons”: he literally did not have the conceptual tools necessary to think that wars may not have reasons at all. That was an unavailable option. But through the work of Bortkiewicz and his successors, that has now become an option: indeed, the innovation of these statisticians has been to show that our default assumption ought to be what statisticians call the “null hypothesis,” which is defined by the Cambridge Dictionary of Statistics to be “the ‘no difference’ or ‘no association’ hypothesis.” Unlike Herodotus, who presumed that explanations must equal causes, we now assume that we ought first to be sure that there is anything to explain before trying to explain it.

In this case, then, it may be the “brute fact” of the press’ Herodotian commitment to discovering “reasons” that explains why nobody in the public sphere predicted Donald Trump’s victory: because the press is already committed to the supremacy of analysis over observation, it could not perform the observations necessary to think Trump could win. Or, as Cassidy put it, when a reporter saw the statistical election model of choice “registering the chances of the election going a certain way at ninety per cent, or ninety-five per cent, it’s easy to dismiss the other outcome as a live possibility—particularly if you haven’t been schooled in how to think in probabilistic terms, which many people haven’t.” Just how powerful the assumption of the force of analysis over data can be is demonstrated by the fact that—even despite noting the widespread lack of probabilistic thinking—Cassidy still thinks it possible that “F.B.I. Director James Comey’s intervention ten days before the election,” in which Comey announced his staff was still investigating Clinton’s emails, “may have proved decisive.” In other words, despite knowing something about the impact of probability, Cassidy still thinks it possible that a letter from the F.B.I. director was somehow more important to the outcome of this past election than the evidence of their own lives was to millions of Americans—or, say, the effect of a system in which the answer to the question where outweighs that of how many.

Probabilistic reasoning, of course, was unavailable to Herodotus, who lived two millennia before the mathematical tools necessary were even invented—which is to say that, while some like to claim that the war between interpretation and data is eternal, it might not be. Yet John Cassidy—and Michael Bérubé—don’t live before those tools were invented, and yet they persist in writing as if they do. While that’s fine so far as it is their choice as private citizens, it ought to be quite a different thing insofar as it is their jobs as journalist and teacher, respectively—particularly in cases, such as the 2016 election, when it is of importance to the continued health of the nation as a whole that there be a clear public understanding of events. Some people appear to think that continuing the quarrels of people whose habits of mind, today, would barely qualify them to teach Sunday school is something noble; in reality, it may just be a measure of how far we have yet to travel.

 

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
—Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
—“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George R.R. Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not necessarily so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel Prize laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
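
To make the “no memory” claim concrete, here is a minimal sketch, in Python with NumPy, of the kind of price series Fama has in mind: one built by cumulating independent random changes. Every number in it (the random seed, the volatility, the length of the series) is an illustrative assumption of mine, not anything drawn from Fama’s data.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulate daily price changes as independent random draws: by construction,
# no day's change depends on any previous day's change.
changes = rng.normal(loc=0.0, scale=0.01, size=10_000)

# A price path is just the cumulation of those random changes
# ("a series of cumulated random numbers," in Fama's phrase).
prices = 100 * np.exp(np.cumsum(changes))

# "The series of price changes has no memory": the correlation between each
# change and the one before it should be statistically indistinguishable from zero.
lag1 = np.corrcoef(changes[:-1], changes[1:])[0, 1]
print(f"lag-1 correlation of price changes: {lag1:.4f}")
```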

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether the managers of mutual funds can actually do what they claim, and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s fairly close to the idea explored, for instance, by psychologist Amos Tversky and his co-authors in the paper “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
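
The same point can be checked with a toy simulation. The sketch below (Python with NumPy again; the number of funds, the number of years, and the seed are arbitrary assumptions) treats every “fund” as a pure coin flip each year, with no skill anywhere in the system, and still expects a handful of perfect ten-year records to turn up by chance alone.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_funds, n_years = 5_000, 10
# Each fund's year is a coin flip: 1 = beat the market, 0 = trailed it.
# No fund here has any skill at all.
records = rng.integers(0, 2, size=(n_funds, n_years))

# Count the funds that "beat the market" every single year by luck alone.
perfect = int((records.sum(axis=1) == n_years).sum())
expected = n_funds / 2 ** n_years
print(f"funds with a perfect {n_years}-year record: {perfect} "
      f"(expected by chance: about {expected:.1f})")
```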

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools and that of their seemingly liberal opponents in departments of the humanities begins to converge.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here amounts to claiming that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County v. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly,” wrote Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter if anyone votes or not—which is to say that to very many smart, and supposedly “liberal” or “leftist,” people, the very notion that voting has any kind of “intrinsic value” to it at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

So Small A Number

How chance the King comes with so small a number?
The Tragedy of King Lear. Act II, Scene 4.

 

Who killed Michael Brown, in Ferguson, Missouri, in 2014? According to the legal record, it was police officer Darren Wilson who, in August of that year, fired twelve bullets at Brown during an altercation in Ferguson’s streets—the last being, said the coroner’s report, likely the fatal one. According to the protesters against the shooting (the protest that evolved into the #BlackLivesMatter movement), the real culprit was the racism of the city’s police department and civil administration; a charge that gained credibility later, when questionable emails written by, and sent to, city employees became public knowledge. In this account, the racism of Ferguson’s administration itself simply mirrored the racism that is endemic to the United States; Darren Wilson’s thirteenth bullet, in short, was racism. Yet, according to the work of Radley Balko of the Washington Post, among others, the issue that lay behind Brown’s death was not racism, per se, but rather a badly structured political architecture that fails to consider a basic principle of reality banally familiar to such bastions of sophisticated philosophic thought as Atlantic City casinos and insurance companies: the idea that, in the words of the New Yorker’s Malcolm Gladwell, “the safest and most efficient way to provide [protection]” is “to spread the costs and risks … over the biggest and most diverse group possible.” If that is so, then perhaps Brown’s killer was whoever caused Americans to forget that principle—in which case a case could be made that Brown’s killer was a Scottish philosopher who lived more than two centuries ago: the sage of skepticism, David Hume.

Hume is well-known in philosophical circles for, among other contributions, describing something he called the “is-ought problem”: in his early work, A Treatise of Human Nature, Hume said his point was that “the distinction of vice and virtue is not founded merely on the relations of objects”—or, that just because reality is a certain way, that does not mean that it ought to be that way. British philosopher G.E. Moore later called the act of mistaking is for ought the “naturalistic fallacy”: in 1903’s Principia Ethica, Moore asserted (as J.B. Schneewind of Johns Hopkins has paraphrased it) that “claims about morality cannot be derived from statements of facts.” It’s a claim, in other words, that serves to divide questions of morality, or values, from questions of science, or facts—and, as should be self-evident, the work of the humanities requires an intellectual claim of this form in order to exist. If morality, after all, were amenable to scientific analysis, there would be little reason for the humanities.

Yet, there is widespread agreement among many intellectuals that the humanities are not subject to scientific analysis, and specifically because only the humanities can tackle subjects of “value.” Thus, for instance, we find professor of literature Michael Bérubé, of Pennsylvania State University—an institution noted for its devotion to truth and transparency—scoffing “as if social justice were a matter of discovering the physical properties of the universe” when faced with doubters like Harvard biologist E. O. Wilson, who has had the temerity to suggest that the humanities could learn something from the sciences. And, Wilson and others aside, even some scientists subscribe to some version of this split: the biologist Stephen Jay Gould, for example, echoed Moore in his essay “Non-Overlapping Magisteria” by claiming that while the “net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory),” the “net of religion”—which I take in this instance as a proxy for the humanities generally—“extends over questions of moral meaning and value.” Other examples could be multiplied.

How this seemingly-arid intellectual argument affected Michael Brown can be directly explained, albeit not easily. Perhaps the simplest route is by reference to the Malcolm Gladwell article I have already cited: the 2006 piece entitled “The Risk Pool.” In a superficial sense, the text is a social history about the particulars of how social insurance and pensions became widespread in the United States following the Second World War, especially in the automobile industry. But in a more inclusive sense, “The Risk Pool” is about what could be considered a kind of scientific law—or, perhaps, a law of the universe—and how, in a very direct sense, that law affects social justice.

In the 1940s, Gladwell tells us, the leader of the United Auto Workers union was Walter Reuther—a man who felt that “risk ought to be broadly collectivized.” Reuther thought that providing health insurance and pensions ought to be a function of government: that way, the largest possible pool of laborers would be paying into a system that could provide for the largest possible pool of recipients. Reuther’s thought, that is, most determinedly centered on issues of “social justice”: the care of the infirm and the aged.

Reuther’s notions, however, could also be thought of in scientific terms: as an instantiation of what is called, by statisticians, the “law of large numbers.” According to Caltech physicist Leonard Mlodinow, the law of large numbers can be described as “the way results reflect underlying probabilities when we make a large number of observations.” A more colorful way to think of it is the way trader and New York University professor Nassim Taleb puts it in his book, Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets: there, Taleb observes that, were Russian roulette a game in which the survivors gained the savings of the losers, then “if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday—but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).” In general, the law of large numbers is how casinos (or investment banks) make money legally (and bookies make it illegally): by taking enough bets (which thereby cancel each other out) the institution, whether it is located in a corner tavern or Wall Street, can charge customers for the privilege of betting—and never take the risk of failure that would accrue were that institution to bet one side or another. Less concretely, the same law is what allows us to assert confidently a belief in scientific results: because they can be repeated again and again, we can trust that they reflect something real.
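
Taleb’s thought experiment is easy enough to run as a simulation. The sketch below (Python with NumPy; the player count and seed are my own assumptions, while the one-in-six odds and the twenty-five annual spins come from the passage above) shows both halves of the point: any single player almost never survives, yet with thousands of players a predictable handful does, and the observed survival rate settles near the theoretical probability.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_players, n_rounds = 10_000, 25   # thousands of players, one spin a year for 25 years
p_loss = 1 / 6                     # a six-chambered revolver with one bullet

# Each cell is True if that player loses on that spin.
losses = rng.random(size=(n_players, n_rounds)) < p_loss
survivors = int((~losses.any(axis=1)).sum())

# The law of large numbers at work: with many players, the observed survival
# rate converges on the underlying probability, (5/6) ** 25, roughly 1 percent.
theoretical = (5 / 6) ** n_rounds
print(f"survivors: {survivors} of {n_players}")
print(f"observed rate {survivors / n_players:.3%} vs. theoretical {theoretical:.3%}")
```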

Reuther’s argument about social insurance and pensions more or less explicitly mirrors that law: like a casino, the idea of social insurance is that, by including enough people, there will be enough healthy contributors paying into the fund to balance out the sick people drawing from it. In the same fashion, a pension fund works by ensuring that there are enough productive workers paying into the pension to cancel out the aged people receiving from it. In both casinos and pension funds, in other words, the only way they can work is by including enough people—if there are too few, the fund or casino takes the risk that the number of those drawing out exceeds the number paying in, at which point the operation fails. (In gambling, this is called “breaking the bank”; Ward Wilson pithily explains why that doesn’t happen very often in his learned tome, Gambling for Winners: Your Hard-Headed, No B.S., Guide to Gaming Opportunities With a Long-Term, Mathematical, Positive Expectation: “the casino has more money than you.”) Both casinos and insurance funds must have large numbers of participants in order to function: as numbers decrease, the risk of failure increases. Reuther therefore thought that the safest possible way to provide social protection for all Americans was to include all Americans.
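
The same logic can be put in miniature for an insurance or pension pool. In the sketch below (Python with NumPy; the claim probability, claim size, and premium are numbers I invented purely for illustration), each member pays a premium slightly above the expected cost of claims, and the only thing that varies is the size of the pool; the chance that a year’s claims exceed contributions shrinks as the pool grows.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def ruin_probability(pool_size, n_trials=20_000, p_claim=0.10, claim=10_000, premium=1_100):
    """Estimate how often a year's claims exceed the premiums collected.

    Each member pays `premium`; each member independently files a claim of
    size `claim` with probability `p_claim`. All figures are illustrative.
    """
    total_claims = rng.binomial(pool_size, p_claim, size=n_trials) * claim
    total_income = pool_size * premium
    return float((total_claims > total_income).mean())

# Premiums exceed expected claims by 10 percent in every case; only pool size changes.
for size in (50, 500, 50_000):
    print(f"pool of {size:>6} members: chance claims outrun premiums = {ruin_probability(size):.3f}")
```

With these invented numbers, the fifty-member pool comes up short in a sizable fraction of simulated years, while the fifty-thousand-member pool essentially never does; that is Reuther’s argument in statistical dress.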

Yet, according to those following Moore’s concept of the “naturalistic fallacy,” Reuther’s argument would be considered an illicit intrusion of scientific ideas into the realm of politics, or “value.” Again, that might appear to be an abstruse argument between various schools of philosophers, or between varieties of intellectuals, scientific and “humanistic.” (It’s an argument that, in addition to reserving to the humanities the domain of “value,” also cedes to them categories like stylish writing—as if scientific arguments could only be expressed by equations rather than quality of expression, and as if there weren’t scientists who were brilliant writers and humanist scholars who were awful ones.) But while in one sense this argument takes place in very rarified air, in another it takes place on the streets where we live. Or, more specifically, the streets where Michael Brown was shot and killed.

The problem of Ferguson, Radley Balko’s work for the Washington Post tells us, is not one of “race,” but instead a problem of poor people. More exactly, a problem of what happens when poor people are excluded from larger population pools—or in other words, when the law of large numbers is excluded from discussions of public policy. Balko’s story draws attention to two inarguable facts. The first is that there “are 90 municipalities in St. Louis County”—Ferguson’s county—and nearly all of them “have their own police force, mayor, city manager and town council,” while 81 of those towns also have their own municipal courts, capable of sentencing lawbreakers to pay fines. The second concerns Missouri’s next-largest urban county by population: Kansas City’s Jackson County, which is both “geographically larger than St. Louis County and has about two-thirds the population”—and yet “has just 19 municipalities, and just 15 municipal courts.” Comparing the two counties, that is, implies that St. Louis County is far more segmented than Jackson County is: there are many more population pools in the one than in the other.

Knowing what we know about the law of large numbers, then, it might not be surprising that a number of the many municipalities of St. Louis County are worse off than the few municipalities of Jackson County: in St. Louis County some towns, Balko reports, “can derive 40 percent or more of their annual revenue from the petty fines and fees collected by their municipal courts”—rather than, say, property taxes. That, it seems likely, is because instead of many property owners paying taxes, there are a large number of renters paying rent to a small number of landlords, who in turn are wealthy enough to minimize their tax burden by employing tax lawyers and other maneuvers. Because these towns thus cannot depend on property tax revenue, they must instead depend on the fines and fees the courts can recoup from residents: an operation that, because of the chaos it necessarily implies for the lives of those citizens, usually results in more poverty. (It’s difficult to apply for a job, for example, if you are in jail due to failure to pay a parking ticket.) Yet, if the law of large numbers is excluded a priori from political discussion—as some in the humanities insist it must be, whether out of disciplinary self-interest or some other reason—that necessarily implies that residents of Ferguson cannot address the real causes of their misery, a fact that may explain just why those addressing the problems of Ferguson focus so much on “racism” rather than the structural issues raised by Balko.

The trouble, however, with identifying “racism” as an explanation for Michael Brown’s death is that it leads to a set of “solutions” that do not address the underlying issue. In the November following Brown’s death, for example, Trymaine Lee of MSNBC reported that the federal Justice Department “held a two-day training with St. Louis area police on implicit racial bias and fair and impartial policing”—as if the problem of Ferguson were wholly the fault of the police department, or even of the town administration as a whole. Not long afterwards, the Department of Justice reported (according to Ray Sanchez of CNN) that, while Ferguson is 67% African-American, in the two years prior to Brown’s death “85% of people subject to vehicle stops by Ferguson police were African-American,” while “90% of those who received citations were black and 93% of people arrested were black”—data that seems to imply that, were those numbers only closer to 67%, there would be no problem in Ferguson.

Yet, even if the people arrested in Ferguson were proportionately black, that would have no effect on the reality that—as Mike Maciag of Governing reported shortly after Brown’s death—“court fine collections [accounted] for one-fifth of [Ferguson’s] total operating revenue” in the years leading up to the shooting. The problem of Ferguson isn’t that its residents are black, as though the town’s difficulties could be solved by, say, firing all the white police officers and hiring black ones in their place. Instead, Ferguson’s difficulty is not just that the town’s citizens are poor—but that they are politically isolated.

There is, in sum, a fundamental reason that the doctrine of “separate but equal” is not merely bad for American schools, as the Supreme Court held in the 1954 decision of Brown v. Board of Education, the landmark case that began the dismantling of Jim Crow in the American South. That reason is the same at all scales: from the Large Hadron Collider at CERN exploring the essential particles of the universe to the roulette tables of Las Vegas to the Social Security Administration, the greater the number of inputs the greater the certainty, and hence safety, of the results. Instead of affirming that law of the universe, however, the work of people like Michael Bérubé and others is devoted to questioning whether universal laws exist—in other words, to resisting the encroachment of the sciences on their turf. Perhaps that resistance is somehow helpful in some larger sense; perhaps it is so that, as is often claimed, the humanities enlarge our sense of what it means to be human, among other sometimes-described possible benefits—I make no claims on that score.

What’s absurd, however, is the monopolistic claim sometimes retailed by Bérubé and others that the humanities have an exclusive right to political judgment: if Michael Brown’s death demonstrates anything, it ought (a word I use without apology) to show that, by promoting the idea of the humanities as distinct from the sciences, humanities departments have in fact collaborated (another word I use without apology) with people who have a distinct interest in promoting division and discord for their own ends. That doesn’t mean, of course, that anyone who has ever read a novel or seen a film helped to kill Michael Brown. But, just as institutions that cover up child abuse—like the Catholic Church or certain institutions of higher learning in Pennsylvania—bear a responsibility to their victims, so too is there a danger in thinking that the humanities have a monopoly on politics. Darren Wilson did have a thirteenth bullet, though it wasn’t racism. Who killed Michael Brown? Why, if you think that morality should be divided from facts … you did.

Of Pale Kings and Paris

 

I saw pale kings and princes too …
—John Keats.
“La Belle Dame Sans Merci” (1819).

… and the pale King glanced across the field
Of battle, but no man was moving there …
Alfred Tennyson.
Idylls of the King: The Passing of Arthur (1871).

 

“It’s difficult,” the lady from Boston was saying a few days after the attacks in Paris, “to play other courses when your handicap is established at an easy course like this one.” She was referring to the golf course to which I have repaired following an excellent autumn season at Medinah Country Club: the Chechessee Creek Club, just south of Beaufort, South Carolina—a course that, to some, might indeed appear easy. Chechessee measures just barely more than 6000 yards from the member tees and, like all courses in the Lowcountry, it is virtually tabletop flat—but appearances are deceptive. For starters, the course is short on the card because it has five par-three holes, not the usual four, and the often-humid and wet conditions of the seacoast mean that golf shots don’t travel as they do in drier and more elevated locations. So in one sense the lady was right—in precisely the same sense, though I suspect she was unaware of it, that Martin Heidegger, writing at an earlier moment of terror and the movements of peoples, was right.

Golf course architecture, of course, might seem as remote from the preoccupations of Continental theory as the greens of the Myopia Hunt Club, the lady’s home golf course, are from, say, the finish line of the Boston Marathon. Yet, just as Martin Heidegger is known as an exceptionally, even historically, difficult writer, Myopia Hunt Club is justly known to the elect as an exceptionally, even historically, difficult golf course. At the seventh U.S. Open in 1901—the only Open in which no competitor managed to break 80—the course established the record for the highest winning score of a U.S. Open: a 331 shot by both Willie Anderson (who died tragically young) and Alex Smith, a tie resolved by the first playoff in the Open’s history. (Anderson’s 85 just edged Smith’s 86.) So the club earned its reputation for difficulty.

Those difficulties are, in fact, the very same ones that admirers of the Chechessee Creek Club trumpet: the deeper mysteries of angles, of trompe l’oeil, the various artifices by which the architects of golf’s Golden Age created the golf courses still revered today, whose art Coore and Crenshaw, Chechessee’s designers, have devoted their careers to recapturing. Like Chechessee, Myopia Hunt isn’t, and never was, especially long: for most of its history, it has played around 6500 yards, which even at the beginning of the twentieth century wasn’t remarkable. Myopia Hunt is difficult for reasons entirely different from those that make courses like Medinah or Butler National difficult: its difficulties are not easily apparent.

Take, for example, the 390-yard fourth: the contemporary golf architect Tom Doak once wrote that it “might be the best hole of its length in the free world.” A dogleg around a wetland, the fourth is, it seems, the only dogleg on a course of straight holes—in other words, slightly but not extraordinarily different from the other holes. However, the hole’s green is so pitched that a golfer in one of the course’s Opens (there have been four; the last in 1908) actually putted off the green—and into the wetland, where he lost the ball. (This might qualify as the most embarrassing thing that has ever happened to a U.S. Open player.) The dangers at Myopia are not those of a Medinah or a Butler National—tight tee shots to far-distant greens, mainly—but are instead seemingly minor yet potentially far more catastrophic ones.

At the seventh hole, according to a review at Golf Club Atlas, the “members know full well to land the ball some twenty yards short of the putting surface and allow for it to bumble on”—presumably, players who opt differently will suffer an apocalyptic fate. In the words of one reviewer, “one of the charms of the course” is that “understanding how best to play Myopia Hunt is not immediately revealed.” Whereas the hazards of a Butler or Medinah are readily known, those at Myopia Hunt are, it seems, only revealed when it is too late.

It’s for that reason, the reviewer goes on to say, that the club had such an impact on American golf course design: the famed Donald Ross arrived in America around the time Myopia Hunt held its first Open, in 1898, and spent many years designing nearby courses while drawing inspiration from visits to the four-time Open site. Other famous Golden Age architects also drew upon Myopia Hunt for their own work. As the reviewer above notes, George Thomas and A.W. Tillinghast—builders of some of the greatest American courses—“were influenced by the abundant placement and penal nature of the hazards” (like the wetland next to the fourth’s green) at Myopia Hunt. Some of America’s greatest golf courses were built by architects with first-hand knowledge of the design style pioneered and given definition by Myopia Hunt.

Coore and Crenshaw—the pale kings of American golf architecture—like to advertise themselves as champions of this kind of design: a difficulty derived from the subtle and the non-obvious, rather than simply from requiring the golfer to hit the ball really far and straight. “Theirs,” says the Coore and Crenshaw website, “is an architectural firm based upon the shared philosophy that traditional, strategic golf is the most rewarding.” Chechessee, in turn, is meant to be a triumph of their view: according to their statement on Chechessee’s website, Coore and Crenshaw’s goal when constructing it “was to create a golf course of traditional character that would reward thoughtful, imaginative, and precise play,” and above all to build a course—like a book?—whose “nuances … will reveal themselves over time.” In other words, to build a contemporary Myopia Hunt.

Yet in the view of this Myopia Hunt member, Coore and Crenshaw failed: Chechessee is, for this lady, far easier than her nineteenth-century home course. Why is that? My speculation, without having seen Myopia Hunt, is that whereas Coore and Crenshaw design in a world that has seemingly passed by the virtues of the past, the Massachusetts course was designed on its own terms. That is, Coore and Crenshaw work within an industry where much of their audience has internalized standards that were developed by golf architects who themselves were reacting against the Golden Age architects like Tillinghast or Ross. Whereas Myopia Hunt Club can have a hole—the ninth—whose green is only nine yards wide and forty yards deep, the following generation of architects (and golfers) rejected such designs as “unfair,” and worked to make golf courses less “odd” or “unique.” So when Coore and Crenshaw come to design, they must work against expectations that the designer of Myopia Hunt Club did not.

Thus, the Golden Age designers were in the same position that, according to the German philosopher Martin Heidegger, the Pre-Socratic philosophers were: in a “brief period of authentic openness to being,” as the Wikipedia article about Heidegger says. That is, according to Heidegger the Pre-Socratics (the Greek philosophers, like Anaximander and Heraclitus and Parmenides, all of whom predated Socrates) had a relationship to the world, and to philosophizing about it, that was unavailable to those who would come afterwards: they were able, Heidegger insinuates, to confront the world itself in a way their successors could not—after all, the latecomers unavoidably had to encounter the works of those very philosophers first.

Unlike his teacher Edmund Husserl, then—who “argued that all that philosophy could and should be is a description of experience”—Heidegger thought that the Pre-Socratic moment was impossible to return to: hence, Heidegger claimed that “experience is always already situated in a world and in ways of being.” So while such a direct confrontation with the world as Husserl demands may have been possible for the Pre-Socratics, Heidegger is seemingly willing to allow, he also argues that history has long since closed off such a possibility, and thus foreclosed the kind of direct experience of the world Husserl thought of as philosophy’s object. In the same way, whereas the Golden Age architects confronted golf architecture in a raw state, no such head-on confrontation is now possible.

What’s interesting about Heidegger’s view, as people like Penn State professor Michael Bérubé have pointed out, is that it has had consequences for such things as our understanding of, say, astronomical objects. As Bérubé says in an essay entitled “The Return of Realism,” at the end of Heidegger’s massive Being and Time—the kind of encyclopedic book that really emphasizes the “German” in “German philosophy”—Heidegger’s argument that we are “always already” implicated within previous thoughts implies that, for instance, it could be said that “the discovery of Neptune in 1846 could plausibly be described, from a strictly human vantage point, as the ‘invention’ of Neptune.” Or, to put it as Heidegger does: “Once entities have been uncovered, they show themselves precisely as entities which beforehand already were.” Before Myopia Hunt Club and other courses like it were built, there were no “rules” of golf architecture—afterwards, however, sayings like “No blind shots” came to have the weight of edicts from the Almighty.

For academic leftists like Bérubé, Heidegger’s insight has proven useful, in a perhaps-paradoxical way. Although the historical Heidegger himself was a member of the Nazi Party, according to Bérubé his work has furthered the project of arguing “the proposition that although humans may not be infinitely malleable, human variety and human plasticity can in principle and in practice exceed any specific form of human social organization.” Heidegger’s work, in other words, aims to demonstrate just how contingent a lot of what we think of as necessary is—which is to say that his work can help us to re-view what we have taken for granted, and perhaps see it with a glimpse of what the Pre-Socratics, or the Golden Age golf architects, saw. Even if Heidegger would also deny that such a thing would ever be possible for us, here and now.

Yet, as the example of the lady from Myopia Hunt demonstrates, such a view also has its downside: having seen the original newness, she denies the possibility that the new could return. To her, golf architecture ended sometime around 1930: just as Heidegger thought that, sometime around Socrates, philosophy became not just philosophy but also the history of philosophy, so too does this lady think that golf architecture has become the history of golf architecture.

Among the “literary people” of his own day, the novelist and journalist Tom Wolfe once complained, could be found a similar snobbishness: “it is one of the unconscious assumptions of modern criticism,” Wolfe wrote, “that the raw material is simply ‘there,’” and from such minds the only worthy question is “Given such-and-such a body of material, what has the artist done with it?” What mattered to these critics, in other words, wasn’t the investigatory reporting done by such artists as Balzac or Dickens, Tolstoy or Gogol, but rather the techniques each artist applied to that material. The human misery each of those writers witnessed and reported, Wolfe says this view holds, is irrelevant to their work; rather, what matters is how artfully that misery is arranged.

It’s a conflict familiar both to literary people and to the people who invented golf. The English poets, like Keats and Tennyson, who invented the figure of the Pale King were presumably drawing upon a verse well known to King James’ translators: literary folk who feared the cost of seeing anew. The relevant verse, imaginably the source for both Keats and Tennyson, is from the King James translation of the Book of Revelation (chapter 6, verse 8):

And I looked, and behold a pale horse:
and his name that sat on him was Death,
and Hell followed with him.

But opponents of the Auld Enemy saw the new differently; as novelist John Updike once reported, according to “the old Scots adage,”

We should be conscious of no more grass …
than will cover our own graves.

To the English, both heirs to and inventors of a literary tradition, the Pale King was a terrible symbol of the New, the Young, and the Unknown. But to their ancient opponents, the Scots, the true fear was to be overly aware of the past, at the expense of welcoming in the coming age. As another Celt from across the sea, W. B. Yeats, once put the same point:

Be not inhospitable to strangers,
lest they be angels in disguise.

Parisians made the same point on Twitter in the aftermath of the shootings and bombings that Friday evening, using the hashtag “#PorteOuverte”—a slogan by which thousands of Parisians offered shelter to strangers from whatever was still lurking in the darkness. To Parisians, like the Scots before them, what matters is not whether the Pale King arrives, but our reaction when he does.