Small Is Beautiful—Or At Least, Weird

… among small groups there will be greater variation …
—Howard Wainer and Harris Zwerling.
The central concept of allopatric speciation is that new species can arise only when a small local population becomes isolated at the margin of the geographic range of its parent species.
—Stephen Jay Gould and Niles Eldredge.
If you flipped a coin a thousand times, you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.
—Michael Lewis. 

No humanist intellectual today is a “reductionist.” To Penn State English professor Michael Bérubé, for example, when the great biologist E.O. Wilson speculated—in 1998’s Consilience: The Unity of Knowledge—that “someday … even the disciplines of literary criticism and art history will find their true foundation in physics and chemistry,” Wilson’s claim was (Bérubé wrote) “almost self-parodic.” Nevertheless, despite the withering disdain of English professors and such, examples of reductionism abound: in 2002, journalist Malcolm Gladwell noticed that a then-recent book—Randall Collins’ The Sociology of Philosophies—argued that French Impressionism, German Idealism, and Chinese neo-Confucianism, among other artistic and philosophic movements, could all be understood by the psychological principle that “clusters of people will come to decisions that are far more extreme than any individual member would have come to on his own.” Collins’ claim, of course, is sure to call down the scorn of professors of the humanities like Bérubé for ignoring what literary critic Victor Shklovsky might have called the “stoniness of the stone”; i.e., the specificity of each movement’s work in its context, and so on. Yet from a political point of view (and despite both the bombastic claims of certain “leftist” professors of the humanities and those of their supposed political opponents), the real issue with Collins’ (and Gladwell’s) “reductionism” is not that they attempt to reduce complex artistic and philosophic movements to psychology—nor even, as I will show, to biology. Instead, the difficulty is that Collins (and Gladwell) do not reduce them to mathematics.

Yet, to say that neo-Confucianism (or, to cite one of Gladwell’s examples, Saturday Night Live) can be reduced to mathematics first raises the question of what it means to “reduce” one sort of discourse to another—a question still largely governed, Kenneth Schaffner wrote in 2012, by Ernest Nagel’s “largely unchanging and immensely influential analysis of reduction.” According to Nagel’s 1961 The Structure of Science: Problems in the Logic of Scientific Explanation, a “reduction is effected when the experimental laws of the secondary science … are shown to be the logical consequences of the theoretical assumptions … of the primary science.” Gladwell, for example, discussing “the Lunar Society”—which included Erasmus Darwin (grandfather to Charles), James Watt (inventor of the steam engine), Josiah Wedgwood (the pottery maker), and Joseph Priestley (who isolated oxygen)—says that this group’s activities bear all “the hallmarks of group distortion”: someone proposes “an ambitious plan for canals, and someone else tries to top that [with] a really big soap factory, and in that feverish atmosphere someone else decides to top them all with the idea that what they should really be doing is fighting slavery.” In other words, to Gladwell the group’s activities can be explained not by reference to the intricacies of thermodynamics or chemistry, nor to the political difficulties of the British abolitionist movement—or even the process of heating clay. Instead, the actions of the Lunar Society can be understood in somewhat the same fashion that, in bicycle racing, the peloton (which is not as limited by wind resistance) can reach speeds no single rider could by himself.

Yet, if it is so that the principle of group psychology explains, for instance, the rise of chemistry as a discipline, it’s hard to see why Gladwell should stop there. Where Gladwell uses a psychological law to explain the “Blues Brothers” or “Coneheads,” in other words, the late Harvard professor of paleontology Stephen Jay Gould might have cited a law of biology: specifically, the theory of “punctuated equilibrium”—a theory that Gould, along with his colleague Niles Eldredge, first advanced in 1972. The theory the two proposed in “Punctuated Equilibria: an Alternative to Phyletic Gradualism” could thereby be used to explain the rise of the Not Ready For Prime Time Players just as well as the psychological theory Gladwell advances.

In that early 1970s paper, the two biologists attacked the reigning idea of how new species begin: what they called the “picture of phyletic gradualism.” In the view of that theory, Eldredge and Gould wrote, new “species arise by the transformation of an ancestral population into its modified descendants.” Phyletic gradualism thus answers the question of why dinosaurs went extinct by replying that they didn’t: dinosaurs are just birds now. More technically, under this theory the change from one species to another is a transformation that “is even and slow”; engages “usually the entire ancestral population”; and “occurs over all or a large part of the ancestral species’ geographic range.” For nearly a century after the publication of Darwin’s Origin of Species, this was how biologists understood the creation of new species. To Gould and Eldredge, however, that view simply was not in accordance with how speciation actually occurs.

Instead of ancestor species gradually becoming descendant species, they argued that new species are created by a process they called “the allopatric theory of speciation”—a theory that might explain how Hegel’s The Philosophy of Right and Chevy Chase’s imitation of Gerald Ford could be produced by the same phenomena. Like Gladwell’s use of group psychology (which depends on the competition within a small set of people who all know each other), the allopatric theory holds that speciation occurs in a narrow range to a small population, where “phyletic gradualism” had imagined it occurring over a wide area to a large one: “The central concept of allopatric speciation,” Gould and Eldredge wrote, “is that new species can arise only when a small local population becomes isolated at the margin of the geographic range of its parent species.” Gould described this process for a non-professional audience in his essay “The Golden Rule: A Proper Scale for Our Environmental Crisis,” from his 1993 book Eight Little Piggies: Reflections in Natural History—a book that perhaps demonstrates just how considerations of biological laws might show why John Belushi’s “Samurai Chef” or Gilda Radner’s “Roseanne Roseannadanna” succeeded.

The Pinaleno Mountains, in southeastern Arizona, house a population of squirrels called the Mount Graham Red Squirrel, which “is isolated from all other populations and forms the southernmost extreme of the species’s range.” The Mount Graham subspecies can survive in those mountains despite being so far south of the rest of its species because the Pinalenos are “‘sky islands,’” as Gould calls them: “patches of more northern microclimate surrounded by southern desert.” It’s in such isolated places, the theory of allopatric speciation holds, that new species develop: because the Pinalenos are “a junction of two biogeographic provinces” (the Nearctic “by way of the Colorado Plateau” and the Neotropical “via the Mexican Plateau”), they are a space where selection pressures unavailable on the home range can work upon a subpopulation, and therefore a place where subspecies can make the kinds of evolutionary “leaps” that allow such new populations, after success in such “nurseries,” to return to the original species’ home range and replace the ancestral species. Such a replacement, of course, does not involve the entire previous population, nor does it occur over the entire ancestral range, nor is it even and slow, as the phyletic gradualist theory would suggest.

The application to the phenomena Gladwell considers is then fairly simple. What was happening at 30 Rockefeller Center in New York City in the autumn of 1975 might not have been an example of “group psychology” at work, but instead an instance in which a small population worked at the margins of two older comedic provinces: the new improvisational space created by troupes like Chicago’s Second City, and the older tradition of television comedy exemplified by shows like I Love Lucy and Your Show of Shows. The features of the new form forged under the influence of these pressures led, ultimately, to the extinction of older forms of television comedy like the standard three-camera situation comedy, and the eventual rise of shows like Seinfeld and, later, single-camera comedies like The Office. Or so, at least, the story might be told, rather than in the form of Gladwell’s idea of group psychology.

Yet it isn’t only possible to explain a comedic phenomenon or a painting movement in terms of group psychology, instead of the terms familiar to scholars of the humanities—or even, one step downwards in the explanatory hierarchy, in terms of biology instead of psychology. That’s because, as the work of Israeli psychologists Daniel Kahneman and Amos Tversky suggests, there is something odd, mathematically, about small groups like subspecies—or comedy troupes. That “something odd” is this: they’re small. Being small has (the two pointed out in their 1971 paper, “Belief in the Law of Small Numbers”) certain mathematical consequences—and, perhaps oddly, those consequences may help to explain something about the success of Saturday Night Live.

That, anyway, is the point the two psychologists explored in that paper—a paper whose message would, perhaps oddly, later be usefully summarized by Gould in a 1983 essay, “Glow, Big Glowworm”: “Random arrays always include some clumping … just as we will flip several heads in a row quite often so long as we can make enough tosses.” Or—as James Forbes of Edinburgh University noted in 1850—it would be absurd to expect that “on 1000 throws [of a fair coin] there should be exactly 500 heads and 500 tails.” (In fact, as Forbes went on to remark, there’s less than a 3 percent chance of getting such a result.) But human beings do not usually grasp that fact: in “Belief,” Kahneman and Tversky reported G.S. Tune’s 1964 finding that when people “are instructed to generate a random sequence of hypothetical tosses of a fair coin … they produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict.” “We assume”—as Atul Gawande summarized the point of “Belief” for the New Yorker in 1998—“that a sequence of R-R-R-R-R-R is somehow less random than, say, R-R-B-R-B-B,” while in reality “the two sequences are equally likely.” Human beings find it difficult to understand true randomness—which may be why it is so difficult to see how this law of probability might apply to, say, the Blues Brothers.
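
The arithmetic here is easy enough to check for oneself. What follows is a minimal sketch in Python (my own illustration, not anything from Forbes, Tversky, or Kahneman) that computes the exact binomial probability of 500 heads in 1,000 fair tosses and then simulates how far the observed proportion of heads tends to wander from .50 in short runs versus long ones.

```python
# Exact probability of exactly 500 heads in 1,000 fair tosses, plus a quick
# simulation of how much the heads-proportion wanders in short vs. long runs.
import math
import random

def prob_exactly_k_heads(n: int, k: int) -> float:
    """Binomial probability of exactly k heads in n tosses of a fair coin."""
    return math.comb(n, k) / 2 ** n

print(prob_exactly_k_heads(1000, 500))  # about 0.025: under 3%, as Forbes said

def average_deviation(n_tosses: int, trials: int = 2000) -> float:
    """Average absolute deviation of the observed heads-proportion from 0.50."""
    total = 0.0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n_tosses))
        total += abs(heads / n_tosses - 0.5)
    return total / trials

print(average_deviation(10))    # roughly 0.12: short runs stray widely
print(average_deviation(1000))  # roughly 0.012: long runs hug one-half
```

The exact figure comes out to about 2.5 percent, just under Forbes’ 3 percent, and the simulated ten-toss runs stray from one-half roughly ten times as far as the thousand-toss runs do.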

Yet what the two psychologists were addressing in “Belief” was the idea expressed by statisticians Howard Wainer and Harris Zwerling in a 2006 article later cited by Kahneman in his recent bestseller, Thinking, Fast and Slow: the statistical law that “among small groups there will be greater variation.” In their 2006 piece, Wainer and Zwerling illustrated the point by observing, for example, that the lowest-population counties in the United States tend to have the highest kidney cancer rates per capita, and that the smallest schools disproportionately appear on lists of the best-performing schools. What they mean is that a “county with, say, 100 inhabitants that has no cancer deaths would be in the lowest category” of kidney cancer rates—but “if it has one cancer death it would be among the highest”—while, similarly, an examination of the Pennsylvania System of School Assessment for 2001–02 found “that, of the 50 top-scoring schools (the top 3%), six of them were among the 50 smallest schools (the smallest 3%),” which is “an overrepresentation by a factor of four.” “When the population is small,” they concluded, “there is wide variation”—but when “populations are large … there is very little variation.” Or it may not be that small groups push each member to achieve more; it may be that small groups of people tend to show high amounts of variation, and (every so often) one of those groups varies so much that somebody invents the discipline of chemistry—or the Festrunk Brothers.
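
To see Wainer and Zwerling’s point in miniature, here is a small simulation of my own, with invented population sizes and an invented death rate rather than their data: every simulated county shares exactly the same underlying risk, yet the observed rates in the smallest counties swing from zero to many times the true value, while the largest counties barely budge.

```python
# Every "county" below has the same underlying death rate; only the
# population sizes differ. The sizes and the rate are invented.
import random

random.seed(1)
TRUE_RATE = 0.001           # identical underlying risk everywhere
SIZES = [100, 1_000, 10_000]

for population in SIZES:
    observed = []
    for _ in range(300):    # 300 simulated counties of this size
        deaths = sum(random.random() < TRUE_RATE for _ in range(population))
        observed.append(deaths / population)
    print(f"population {population:>7,}: "
          f"lowest rate {min(observed):.4f}, highest rate {max(observed):.4f}")
# The small counties supply both the zero rates and the alarmingly high ones;
# the largest counties all hover near the true rate of 0.001.
```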

The $64,000 question, from this point of view, isn’t about the groups that created a new way of painting—but instead about all of the groups that nobody has ever heard of, which tried, but failed, to invent something new. Yet as a humanist intellectual like Bérubé would surely point out, to investigate the question in this way is to miss nearly everything about Impressionism (or the Land Shark) that makes it interesting. Which, perhaps, is so—but then again, isn’t the fact that such widely scattered actions and organisms can be united under one theoretical lens itself interesting? Taken far enough, what matters to Bérubé is the individual peculiarities of everything in existence—an idea that recalls what Jorge Luis Borges once described as John Locke’s notion of “an impossible idiom in which each individual object, each stone, each bird and branch had an individual name.” To think of Bill Murray in the same frame as an Arizona squirrel is, admittedly, to miss the smell of New York City at dawn on a Sunday morning after a show the night before—but it also involves a gain, and one that is applicable to many other situations besides the appreciation of the hard work of comedic actors. Although many in the humanities like to attack what they call reductionism for its “anti-intellectual” tendencies, it’s well known that a large enough group of trees constitutes more than a collection of individual plants. There is, I seem to recall, some kind of saying about it.


Forked

Alice came to a fork in the road. “Which road do I take,” she asked.
“Where do you want to go?” responded the Cheshire Cat.
“I don’t know,” Alice answered.
“Then,” said the Cat, “it doesn’t matter.”
—Lewis Carroll. Alice’s Adventures in Wonderland. (1865).


At Baden Baden, 1925, Reti, the hypermodern challenger, opened with the Hungarian, or King’s Fianchetto; Alekhine—the only man to die still holding the title of world champion—countered with an unassuming king’s pawn to e5. The key moment did not take place, however, until Alekhine threw his rook nearly across the board at move 26, a move that appeared to lose him a tempo—but as C.J.S. Purdy would write for Chess World two decades, a global depression, and a world war later, “many of Alekhine’s moves depend on some surprise that comes far too many moves ahead for an ordinary mortal to have the slightest chance of foreseeing it.” The rook move, in sum, resulted in the triumphant slash of Alekhine’s bishop at move 42—a move that “forked” the only two pieces Reti had left: his knight and rook. “Alekhine’s chess,” Purdy would write later, “is like a god’s”—a hyperbole that not only leaves this reader of the political scientist William Riker thankful that the chess writer did not see the game Riker saw played at Freeport, 1858, but also grateful that neither man saw the game played at Moscow, 2016.

All these games, in other words, ended with what is known as a “fork,” or “a direct and simultaneous attack on two or more pieces by one piece,” as the Oxford Companion to Chess defines the maneuver. A fork thus forces the opponent to choose; in Alekhine’s triumph, called “the gem of gems” by Chess World, the Russian grandmaster forced his opponent to choose which piece to lose. Just so, in The Art of Political Manipulation, from 1986, University of Rochester political scientist William Riker observed that “forks” are not limited to dinner or to chess. In Political Manipulation Riker introduced the term “heresthetics,” or—as Norman Schofield defined it in 2006—“the art of constructing choice situations so as to be able to manipulate outcomes.” Riker further said that “the fundamental heresthetical device is to divide the majority with a new alternative”—or, in other words, heresthetics is often a kind of political fork.

The premier example Riker used to illustrate such a political forking maneuver was performed, the political scientist wrote, by “the greatest of American politicians,” Abraham Lincoln, at the sleepy Illinois town of Freeport during the drowsy summer of 1858. Lincoln that year was running for the U.S. Senate seat for Illinois against Stephen Douglas—the man known as “the Little Giant” both for his less-than-imposing frame and his significance in national politics. So important had Douglas become by that year—by extending federal aid to the first “land grant” railroad, the Illinois Central, and successfully passing the Compromise of 1850, among many other achievements—that it was an open secret that he would run for president in 1860. And not merely run; the smart money said he would win.

The smart money, however, was not on Abraham Lincoln, a lanky and little-known one-term congressman in 1858. The odds against the would-be Illinois politician were so long, in fact, that according to Riker, Lincoln had to take a big risk to win—which he did, by posing a question to Douglas at the little town of Freeport, near the Wisconsin border, towards the end of August. That question was this: “Can the people of a United States Territory, in any lawful way, against the wish of any citizen of the United States, exclude slavery from its limits prior to the formation of a state constitution?” It was a question, Riker wrote, that Lincoln had honed “stiletto-sharp.” It proved a knife in the heart of Stephen Douglas’ ambitions.

Lincoln was, of course, explicitly against slavery, and therefore thought that territories could ban slavery prior to statehood. But many others thought differently; in 1858 the United States stood poised at a precipice that, even then, only a few—Lincoln among them—could see. Already, the nation had been roiled by the Kansas-Nebraska Act of 1854; already, a state of war existed between pro- and anti-slavery men on the frontier. The year before, the U.S. Supreme Court had outlawed the prohibition of slavery in the territories by means of the Dred Scott decision—a decision that, in his “House Divided” speech in June that same year, Lincoln had already charged Douglas with conspiring with the president of the United States, James Buchanan, and Supreme Court Chief Justice Roger Taney to bring about. What Lincoln’s question was meant to do, Riker argued, was to “fork” Douglas between two constituencies: the local Illinois constituents who could return, if they chose, Douglas to the Senate in 1858—and the larger, national constituency that could deliver, if they chose, Douglas the presidency in 1860.

“If Douglas answered yes” to Lincoln’s question, Riker wrote, and thereby said that a territory could exclude slavery prior to statehood, “then he would please Northern Democrats for the Illinois election”—because, by explicitly stating that he and Lincoln shared the same opinion, Douglas would take away one of Lincoln’s chief weapons—a weapon especially potent in far northern, German-settled towns like Freeport. But what Lincoln saw, Riker says, is that if Douglas said yes he would also earn the enmity of Southern slaveowners, for whom a yes would appear “a betrayal of the Southern cause of the expansion of slave territory”—and thus cost him a clean nomination as the Democratic Party’s candidate for president in 1860. If, however, Douglas answered no, “then he would appear to capitulate entirely to the Southern wing of the party and alienate free-soil Illinois Democrats”—thereby hurting “his chances in Illinois in 1858 but help[ing] his chances for 1860.” In Riker’s view, in other words, at Freeport in 1858 Lincoln forked Douglas much as the Russian grandmaster would fork his opponent at the German spa in 1925.

Yet just as that game at Baden Baden was hardly the last time the maneuver was used in chess, “forking” one’s political opponent scarcely ended in the little nineteenth-century Illinois farm village. Many of Hillary Clinton’s supporters now believe that the Russians “interfered” with the 2016 American election—but what hasn’t been addressed is how the Russian state, led by Putin, could have interfered with an American election. Like a vampire who can only invade a home once invited, anyone attempting to “interfere” with an election must have some material to work with; Lincoln’s question at Freeport, after all, exploited a previously existing difference between two factions within the Democratic Party. If the Russians did “interfere” with the 2016 election, that is, they could only have done so if there already existed yet another split within the Democratic ranks—and, as everyone knows, there was.

“Not everything is about an economic theory,” Hillary Clinton claimed in a February 2016 speech in Nevada—a claim familiar enough to anyone who’s been on campus in the past two generations. After all, as gadfly Thomas Frank has remarked (referring to the work of James McGuigan), the “pervasive intellectual reflex” of our times is the “‘terror of economic reductionism.’” The idea that “not everything is about economics” is the core of what is sometimes known as the “cultural left,” or what Penn State University English professor (and former holder of the Paterno Chair) Michael Bérubé has termed “the left that aspires to analyze culture,” as opposed to “the left that aspires to carry out public policy.” Clinton’s speech largely echoed the views of that “left,” which—according to the late philosopher Richard Rorty, in the book that inspired Bérubé’s remarks above—is more interested in “remedies … for American sadism” than those “for American selfishness.” It was that left that the rest of Clinton’s speech was designed to attract.

“If we broke up the big banks tomorrow,” Clinton went on to ask after the remark about economic theory, “would that end racism?” The crowd, of course, answered “No.” “Would that end sexism?” she continued, and then again—a bit more convoluted, now—with “discrimination against the LGBT community?” Each time, the candidate was answered with a “No.” With this speech, in other words, Clinton visibly demonstrated the arrival of this “cultural left” at the very top of the Democratic Party—the ultimate success of the agenda pushed by English professors and others throughout the educational system. If, as Richard Rorty wrote, it really is true that “the American Left could not handle more than one initiative at a time,” so that “it either had to ignore stigma in order to concentrate on money, or vice versa,” then Clinton’s speech signaled the victory of the “stigma” crowd over the “money” crowd. Which is why what Clinton said next was so odd.

The next line of Clinton’s speech went like this: “Would that”—i.e., breaking up the big banks—“give us a real shot at ensuring our political system works better because we get rid of gerrymandering and redistricting and all of these gimmicks Republicans use to give themselves safe seats, so they can undo the progress we have made?” It’s a strange line; in the first place, it’s not exactly the most euphonious group of words I’ve ever heard in a political speech. But more importantly—well, actually, breaking up the big banks could perhaps do something about gerrymandering. According to OpenSecrets.org, after all, “72 percent of the [commercial banking] industry’s donations to candidates and parties, or more than $19 million, went to Republicans” in 2014—hence, maybe breaking them up could reduce the money available to Republican candidates, and so lessen their ability to construct gerrymandered districts. But, of course, doing so would require precisely the kinds of thought pursued by the “public policy” left—which Clinton had already signaled she had chosen against. The opening lines of her call-and-response, in other words, demonstrated that she had chosen to sacrifice the “public policy” left—the one that speaks the vocabulary of science—in favor of the “cultural left”—the one that speaks the vocabulary of the humanities. By choosing the “cultural left,” Clinton was also in effect saying that she would do nothing about either big banks or gerrymandering.

That point was driven home in an article in FiveThirtyEight this past October. In “The Supreme Court Is Allergic To Math,” Oliver Roeder discussed the case of Gill v. Whitford—a case that not only “will determine the future of partisan gerrymandering,” but also “hinges on math.” At issue in the case is something called “the efficiency gap,” which takes “the difference between each party’s ‘wasted’ votes—votes for losing candidates and votes for winning candidates beyond what the candidate needed to win—and divide[s] that by the total number of votes cast.” The basic argument, in other words, is fairly simple: if a mathematical test shows that a given arrangement of legislative districts produces a large gap, that is evidence of gerrymandering. But in oral arguments, Roeder went on to say, the “most powerful jurists in the land” demonstrated “a reluctance—even an allergy—to taking math and statistics seriously.” Chief Justice John Roberts, for example, said it “may simply be my educational background, but I can only describe [the case] as sociological gobbledygook.” Neil Gorsuch, the man who received the office that Barack Obama was prevented from awarding, compared the metric to “a secret recipe.” In other words, in this case it was the disciplines of mathematics and, above all, statistics that were on the side of those wanting to get rid of gerrymandering, not those analyzing “culture” and fighting “stigma”—concepts that the justices were busy employing, essentially, to wash their hands of the issue.
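
For what it is worth, the arithmetic Roeder describes fits in a few lines. The sketch below is my own illustration of that efficiency-gap calculation, run on invented vote totals for a toy four-district plan in which one party is packed into a single district and narrowly cracked everywhere else; it is not the plaintiffs’ code or data.

```python
# Efficiency gap: each party's wasted votes (all votes in districts it lost,
# plus the winner's votes beyond a bare majority), difference divided by all
# votes cast. Ties are ignored for simplicity.
from typing import List, Tuple

def efficiency_gap(districts: List[Tuple[int, int]]) -> float:
    """districts: (party A votes, party B votes) for each district."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        votes = a + b
        needed = votes // 2 + 1        # bare majority needed to win the seat
        if a > b:
            wasted_a += a - needed     # A's surplus votes
            wasted_b += b              # all of B's votes were wasted
        else:
            wasted_b += b - needed
            wasted_a += a
        total += votes
    return (wasted_a - wasted_b) / total   # positive: the plan hurts party A

# A crude "packed and cracked" plan: party A wins one district overwhelmingly
# and loses the other three narrowly, despite winning 55% of all votes cast.
plan = [(85, 15), (45, 55), (45, 55), (45, 55)]
print(f"efficiency gap: {efficiency_gap(plan):+.1%}")   # about +35.5%
```

With those made-up returns the gap comes to roughly 36 percent against the packed party, far beyond the single-digit thresholds the measure’s proponents have suggested as warning signs.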

Just as, in other words, Lincoln exploited the split between Douglas’ immediate voters in Illinois who could give him the Senate seat, and the Southern slaveowners who could give him the presidency, Putin (or whomever else one wishes to nominate for that role) may have exploited the difference between Clinton supporters influenced by the current academy—and those affected by the yawning economic chasm that has opened in the United States. Whereas academics are anxious to avoid discussing money in order not to be accused of “economic reductionism,” in other words, the facts on the ground demonstrate that today “more money goes to the top (more than a fifth of all income goes to the top 1%), more people are in poverty at the bottom, and the middle class—long the core strength of our society—has seen its income stagnate,” as Nobel Prize-winning economist Joseph Stiglitz put the point in testimony to the U.S. Senate in 2014. Furthermore, Stiglitz noted, America today is not merely “the advanced country … with the highest level of inequality, but is among those with the least equality of opportunity.” Or in other words, as David Rosnick and Dean Baker put the point in November of that same year, “most [American] households had less wealth in 2013 than they did in 2010 and much less than in 1989.” To address such issues, however, would require precisely the sorts of intellectual tools—above all, mathematical ones—that the current bien pensant orthodoxy of the sort represented by Hillary Clinton, the orthodoxy that abhors sadism more than selfishness, thinks of as irrelevant.

But maybe that’s too many moves ahead.

Best Intentions

L’enfer est plein de bonnes volontés ou désirs (“Hell is full of good intentions or desires”)
—St. Bernard of Clairvaux. c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for doing that asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. Despite the fact that, to many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations, the reason it could do so was that it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition: a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as (nearly) official doctrine of English departments  (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to serve conservative ends. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia pressed the point again and again: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s point of view, among the most rarefied heights of literary theory: it was, after all, the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew, however, Bérubé has a simple means of not hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, the fact that anti-intentionalism originated among conservative Southerners—and was embraced, as we saw, by a committed Nazi—is no reason to dismiss anti-intentionalism itself. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent of the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Since the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as Garry Wills writes, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people,” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the Constitution—and thus his reasons for fighting the war—ultimately derived from Wilson.



Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.


Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not from inanimate objects like the “soil,” then it requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law of 1790 (among the first such lectures in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author ask. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The wonder of that discovery is then reinforced when, in the example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, asked us to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

Comedy Bang Bang

In other words, the longer a game of chance continues the larger are the spells and runs of luck in themselves,
but the less their relative proportions to the whole amounts involved.
—John Venn. The Logic of Chance. (1888). 


“A probability that is very small for a single operation,” reads the RAND Corporation paper mentioned in journalist Sharon McGrayne’s The Theory That Would Not Die, “say one in a million, can become significant if this operation will occur 10,000 times in the next five years.” The paper, “On the Risk of an Accidental or Unauthorized Nuclear Detonation,” was just what it says on the label: a description of the chances of an unplanned atomic explosion. Previously, American military planners had assumed “that an accident involving an H-bomb could never occur,” but the insight of this paper was that overall risk changes depending upon volume—an insight that ultimately rests on a discovery first described by mathematician Jacob Bernoulli in 1713. Now called the “Law of Large Numbers,” Bernoulli’s thought was that “it is not enough to take one or another observation … but that a large number of them are needed”—it’s what allows us to conclude, Bernoulli wrote, that “someone who intends to throw at once three sixes with three dice, should be considered reckless even if winning by chance.” Yet while recognition of the law—which predicts that even low-probability events become likely if there are enough chances for them to happen—considerably changed how the United States handled nuclear weapons, it has had essentially no impact on how the United States handles certain conventional weapons: the estimated 300 million guns held by its citizens. One possible reason why, suggests the work of Vox.com co-founder Ezra Klein, is that arguments advanced by departments of literature, women’s studies, African-American studies, and other such academic “disciplines” more or less openly collude with the National Rifle Association to prevent sensible gun control laws.
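
The sentence quoted from the RAND paper is just compounding at work, and easy to reproduce. The sketch below is my own back-of-the-envelope version, assuming (as the paper’s framing implies) that each operation is an independent trial carrying the same one-in-a-million chance of disaster.

```python
# Chance of at least one accident across n independent operations, each with
# a one-in-a-million chance of going wrong: 1 - (1 - p)**n.
p_single = 1e-6

for n in (1, 10_000, 1_000_000, 10_000_000):
    p_any = 1 - (1 - p_single) ** n
    print(f"{n:>10,} operations -> P(at least one accident) = {p_any:.4%}")
# Ten thousand repetitions already push the risk to about 1%; the "never"
# the planners had assumed simply does not survive the arithmetic.
```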

The inaugural “issue” of Vox contained Klein’s article “How Politics Makes Us Stupid”—an article that asked the question, “why isn’t good evidence more effective in resolving political debates?” According to the consensus wisdom, Klein says, “many of our most bitter political battles are mere misunderstandings” caused by a lack of information—in this view, all that’s required to resolve disputes is more and better data. But, Klein also writes, current research shows that “the more information partisans get, the deeper their disagreements become”—because there are some disagreements “where people don’t want to find the right answer so much as they want to win the argument.” In other words, while some disagreements can be resolved by considering new evidence—as the Strategic Air Command changed how it handled nuclear weapons in light of a statistician’s recall of Bernoulli’s work—some disagreements, like gun control, cannot.

The work Klein cites was conducted by Yale Law School professor Daniel Kahan, along with several co-authors, and it began—Klein says—by recruiting 1,000 Americans and surveying both their political views and their mathematical skills. Kahan’s group then gave participants a puzzle, which asked them to judge, based on the data presented, an experiment designed to show whether a new skin cream made a skin condition better or worse. The puzzle, however, was jiggered: although many more people got better using the skin cream than got worse using it, the percentage of people who got worse was actually higher among those who used the cream than among those who did not. In other words, if you paid attention merely to the raw numbers, the data might appear to indicate one thing, while a calculation of percentages showed something else. As it turns out, most people relied on the raw numbers—and were wrong; meanwhile, people with higher mathematical skill were able to work through the problem to the right answer.
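
The structure of that trap is easy to reproduce. The counts below are invented, not the figures Kahan’s group actually used, but they are arranged the same way: the raw totals flatter the cream, while the rates condemn it.

```python
# Invented counts with the same structure as the skin-cream puzzle: far more
# cream users improved than worsened, yet users fared worse proportionally.
improved_with, worsened_with = 200, 100      # people who used the cream
improved_without, worsened_without = 90, 30  # people who did not

print("raw counts:", improved_with, "improved vs.",
      worsened_with, "worsened among cream users")

rate_with = improved_with / (improved_with + worsened_with)
rate_without = improved_without / (improved_without + worsened_without)
print(f"improvement rate with the cream   : {rate_with:.0%}")    # 67%
print(f"improvement rate without the cream: {rate_without:.0%}")  # 75%
# Judged by percentages rather than raw counts, the cream looks harmful --
# the conclusion the more numerate participants tended to reach.
```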

Interestingly, however, the results of this study did not demonstrate to Kahan that perhaps it is necessary to increase scientific and mathematical education. Instead, Kahan argues that the attempt by “economists and other empirical social scientists” to shear the “emotional trappings” from the debate about gun control in order to make it “a straightforward question of fact: do guns make society less safe or more” is misguided. Rather, because guns are “not just ‘weapons or pieces of sporting equipment,’” but “are also symbols,” the proper terrain to contest is not the grounds of empirical fact, but the symbolic: “academics and others who want to help resolve the gun controversy should dedicate themselves to identifying with as much precision as possible the cultural visions that animate this dispute.” In other words, what ought to structure this debate is not science, but culture.

To many on what’s known as the “cultural left,” of course, this must be welcome news: it amounts to a recognition of “academic” disciplines like “cultural studies” that have argued for decades that cultural meanings trump scientific understanding. As Canadian philosopher Ian Hacking put it some years ago in The Social Construction of What?, a great deal of work in those fields of “study” has made claims that approach saying “that scientific results, even in fundamental physics, are social constructs.” Yet though the point has, as I can attest from personal experience, become virtually common sense in departments of the humanities, there are several ways of understanding the phrase “social construct.”

As English professor Michael Bérubé has remarked, much of that work can be described as “following the argument Heidegger develops at the end of the first section of Being and Time,” where the German philosopher (and member of the Nazi Party) argued that “we could also say that the discovery of Neptune in 1846 could plausibly be described, from a strictly human vantage point, as the ‘invention’ of Neptune.” In more general terms, New York University professor Andrew Ross—the same Ross later burned in what’s become known as the “Sokal Affair”—described one fashion in which such an argument could go: by tracing how a “scientific theory was advanced through power, authority, persuasion and responsiveness to commercial interests.” Of course, as Joy Pullmann recently described in the conservative Federalist, such views have filtered throughout the academy and have led at least one doctoral student to claim, in her dissertation for the education department of the University of North Dakota, that “language used in the syllabi” of eight science classes she reviewed

reflects institutionalized STEM teaching practices and views about knowledge that are inherently discriminatory to women and minorities by promoting a view of knowledge as static and unchanging, a view of teaching that promotes the idea of a passive student, and by promoting a chilly climate that marginalizes women.

The language of this description, interestingly, equivocates between the claim that some, or most, scientists are discriminatory (a relatively safe claim) and the notion that there is something inherently discriminatory about science itself (the radical claim)—an equivocation that itself indicates something of the “cultural” view. Yet although, as in this latter example, claims regarding the status of science are often advanced on the grounds of discrimination, it seems to escape those making such claims just what sort of ground is conceded politically by taking science as one’s adversary.

Here, for example, is the problem with Kahan’s argument about gun control: by agreeing to contest the issue on cultural grounds, pro-gun-control advocates would be conceding their very strongest argument, because the Law of Large Numbers is not an incidental feature of science but one of its very foundations. (It could perhaps even be the foundation, because science proceeds on the basis of replicability.) Kahan’s recommendation, in other words, might appear not so much a change in tactics as an outright surrender: it’s only in the light of the Law of Large Numbers that the pro-gun-control argument is even conceivable. Hence, it is very difficult to understand how an argument can be won if one’s best weapon is, I don’t know, controlled. In effect, conceding the argument made in the RAND paper quoted above is more or less to give up on the very idea of reducing the number of firearms, so that American streets could perhaps be safer—and American lives protected.

Another, even larger-scale problem with taking the so-called “cultural turn,” as Kahan advises, is that abandoning the tools of the Law of Large Numbers does not merely concede ground on the gun control issue. It also does so on a host of other issues—perhaps foremost among them matters of political representation itself. For example, it prevents an examination of the Electoral College from a scientific, mathematically knowledgeable point of view—as I attempted to do in my piece “Size Matters,” from last month. It may help to explain what Congressman Steve Israel of New York meant when journalist David Daley, author of a recent book on gerrymandering, interviewed him on the practical effects of gerrymandering in the House of Representatives (a subject that requires strong mathematical knowledge to understand): “‘The Republicans have always been better than Democrats at playing the long game.’” And there are other issues besides—all of which is to say that, by attacking science itself, the “cultural left” may be preventing government from intervening on behalf of the very people for whom it claims to speak.

Some academics involved in such fields have, in fact, begun to recognize this very point: all the way back in 2004, one of the field’s chief figures, Bruno Latour, dared to ask himself, “Was I wrong to participate in the invention of this field known as science studies?” The very idea of questioning the institution of that field can, however, seem preposterous: even now, as Latour also wrote then, there are

entire Ph.D. programs … still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives.

Indeed. It has come to the point, in fact, that it would be pretty easy to think that the supposed “left” doesn’t really want to win these arguments at all—that, perhaps, they just wish to go out …

With a bang.

The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.


“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e., what didn’t happen. But even in 1998 there was evidence to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty charged then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but which may have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prevent what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after such districts were drawn there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that then grew until, since the 106th Congress (1999–2001), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total number of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting that followed the 1990 Census, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making those surrounding districts all but impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by margins of more than 50 points, while Mitt Romney won only eight districts by such a margin. Figures like these could seem overwhelmingly favorable to the Democrats—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively wasting votes that could have tipped others.

That’s why—despite losing the popular vote—Romney’s party won 226 of 435 Congressional districts in 2012, while Obama’s party won 209. In this past election, as I’ve mentioned in previous posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive that these percentages always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.
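For readers who want to see the arithmetic of “packing” worked out, here is a minimal sketch in Python—the five districts and their vote totals are invented for illustration, not drawn from any actual map:

```python
# A minimal sketch (hypothetical numbers, not real district data) of how
# "packing" one party's voters into a few districts lets it win the
# popular vote while losing most of the seats.

def seats_and_votes(districts):
    """districts: list of (dem_votes, rep_votes) tuples."""
    dem_votes = sum(d for d, r in districts)
    rep_votes = sum(r for d, r in districts)
    dem_seats = sum(1 for d, r in districts if d > r)
    rep_seats = len(districts) - dem_seats
    return dem_votes, rep_votes, dem_seats, rep_seats

# Five hypothetical districts of 100,000 voters each: Democrats are
# packed into one overwhelming district and narrowly lose the rest.
districts = [
    (90_000, 10_000),   # the packed, overwhelmingly Democratic district
    (45_000, 55_000),
    (45_000, 55_000),
    (45_000, 55_000),
    (45_000, 55_000),
]

dem_v, rep_v, dem_s, rep_s = seats_and_votes(districts)
print(f"Democratic votes: {dem_v:,} ({dem_v / (dem_v + rep_v):.1%})")
print(f"Republican votes: {rep_v:,} ({rep_v / (dem_v + rep_v):.1%})")
print(f"Seats: Democrats {dem_s}, Republicans {rep_s}")
# In this toy map, Democrats win 54% of the vote but only 1 of 5 seats.
```

In this toy example the packed party wins 54 percent of the vote and one seat in five—which is the whole mechanism in miniature.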

From the perspective of the Republican Party, that’s just jake—despite being, according to a lawsuit filed by the NAACP in North Carolina, the result of “an intentional and cynical use of race.” Whatever the ethics of the thing, it has certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy”; the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Taken all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin layer of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished, such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, his answer merely begs the question: why should geography have anything to do with voting? Silver’s position effectively ensures that African-American and other minority votes count for less. “Majority-minority districts” imply that minority votes have less effect on policy than votes cast in other kinds of districts: they create, as if the United States were a corporation with common and preferred shares, two classes of votes.

Like discussions of the Electoral College—in which a vote in Wyoming is worth much more than one in California—Silver’s position implies that minority votes will remain less valuable than others, because a vote in a “majority-minority” district has a lower probability of electing a congressperson who belongs to the majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there is no issue with majority-minority districts because they reflect the underlying proportions of the population—but what matters is whether whoever is elected can deliver policies that benefit their constituents.

Right here, in other words, we get to the heart of the dispute between the late Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, the current arrangement might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what matters might simply be the number of minority representatives, not the ability to pass a legislative agenda that’s fair to all Americans. It might seem like no skin off their nose. (More ominously, it might even be in their economic interest: the humanities and the arts, after all, are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) Ultimately, though, this affects them too, and for a similar reason: urban professionals are, after all, urban—which means that their votes, like those in majority-minority districts, are similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although such margins might appear heartening to those inside such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats become, the less capable their party is of accomplishing its goals. Winning individual races by huge margins might be satisfying, but nobody cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority members, but the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures, vote,” Chief Justice Earl Warren wrote in 1964; in that case, Wesberry v. Sanders, the Supreme Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that far more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. Their opponents ought to reply that they are interested in what people could be—and in building a real road to get there.

Stormy Weather

They can see no reasons …
—“I Don’t Like Mondays” 
The Boomtown Rats.
The Fine Art of Surfacing. 1979.

 

“Since Tuesday night,” John Cassidy wrote in The New Yorker this week, “there has been a lot of handwringing about how the media, with all its fancy analytics, failed to foresee Donald Trump’s victory”: as the New York Times headline had it, “How Data Failed Us in Calling an Election.” The failure of Nate Silver and other statistical analysts in the lead-up to Election Day rehearses, once again, a seemingly ancient argument between what are now known as the sciences and the humanities—an argument sometimes held to be as old as the moment when Herodotus (the “Father of History”) asserted that his object in telling the story of the Greco-Persian Wars of 2500 years ago was “to set forth the reasons why [the Greeks and Persians] wage war on each other.” In other words, Herodotus thought that, to investigate war, it was necessary to understand the motives of the people who fought it—just as Cassidy says the failure of the press to get it right about this election was “a failure of analysis, rather than of observation.” The argument both Herodotus and Cassidy are making is the seemingly unanswerable one that it is the interpretation of the evidence, rather than the evidence itself, that is significant—a position that seems inarguable so long as you aren’t in the Prussian Army, dodging Nazi bombs during the last year of the Second World War, or living in Malibu.

The reason why it seems inarguable, some might say, is that the argument both Herodotus and Cassidy are making is inescapable: obviously, given Herodotus’ participation, it is a very ancient one, and yet new versions are produced all the time. Consider for instance a debate conducted some years ago by English literature professor Michael Bérubé and philosopher John Searle, about a distinction between what Searle called “brute fact” and “social fact.” “Brute facts,” Bérubé wrote later, are “phenomena like Neptune, DNA, and the cosmic background radiation,” while the second kind are “items whose existence and meaning are obviously dependent entirely on human interpretation,” such as “touchdowns and twenty-dollar bills.” Like Searle, most people might like to say that “brute fact” is clearly more significant than “social fact,” in the sense that Neptune doesn’t care what we think about it, whereas touchdowns and twenty-dollar bills depend entirely on what we think of them.

Not so fast, said Bérubé: “there’s a compelling sense,” the professor of literature argued, in which social facts are “prior to and even constitutive of” brute facts—if social facts are the means by which we obtain our knowledge of the outside world, then social facts could “be philosophically prior to and certainly more immediately available to us humans than the world of brute fact.” The only way we know about Neptune is because a number of human beings thought it was important enough to discover; Neptune doesn’t give a damn one way or the other.

“Is the distinction between social facts and brute facts,” Bérubé therefore asks, “a social fact or a brute fact?” (Boom! Mic drop.) That is, whatever the brute facts are, we can only interpret them in the light of social facts—which would seem to grant priority to those disciplines dealing with social facts, rather than those disciplines that deal with brute fact; Hillary Clinton, Bérubé might say, would have been better off hiring an English professor, rather than a statistician, to forecast the election. Yet, despite the smugness with which Bérubé delivers what he believes is a coup de grâce, it does not seem to occur to him that traffic between the two realms can also go the other way: while it may be possible to see how “social facts” subtly influence our ability to see “brute facts,” it’s also possible to see how “brute facts” subtly influence our ability to see “social facts.” It’s merely necessary to understand how the nineteenth-century Prussian Army treated its horses.

The book that answers that question about German military horsemanship is The Law of Small Numbers, published in 1898 by one Ladislaus Bortkiewicz: a Pole living in the Russian Empire who conducted a study of the deaths caused by horse kicks in the nineteenth-century Prussian Army. Apparently this was a cause of some concern to military leaders: they wanted to know whether an army corps that experienced several horse-kick deaths in a year—an exceptional number of deaths in this category—was using bad techniques, or whether it had simply happened to buy particularly ornery horses. Why, in short, did some corps suffer what looked like an epidemic of horse-kick deaths in a given year, while others might go for years without a single death? What Bortkiewicz found answered those questions—though perhaps not in a fashion the army brass might have liked.

Bortkiewicz began by assembling data on the number of fatal horse kicks in fourteen Prussian army corps over twenty years, which he then combined into “corps-years”: each corps observed in each year counted as one observation. What he found—as E.J. Gumbel puts it in the International Encyclopedia of the Social Sciences—was that for “over half the corps-year combinations there were no deaths from horse kicks,” while “for the other combinations the number of deaths ranged up to four.” In most years, in other words, no one in a given corps was killed by a horse kick, while in some years someone was—and in terrible years, four were. Deaths by horse kick, then, were uncommon, which made them hard to study: because they happened so rarely, it was difficult to determine what caused them—which was why Bortkiewicz had to assemble so much data about them. By doing so, the Russian Pole hoped to isolate a common factor among these deaths.

In the course of studying these deaths, Bortkiewicz ended up independently re-discovering something that a French mathematician, Simeon Denis Poisson, had already used in 1837 in connection with the verdicts of juries: an arrangement of data now known as the Poisson distribution. As the mathematics department at the University of Massachusetts is happy to tell us (https://www.umass.edu/wsp/resources/poisson/), the Poisson distribution applies when four conditions are met: first, “the event is something that can be counted in whole numbers”; second, “occurrences are independent, so that one occurrence neither diminishes nor increases the chance of another”; third, “the average frequency of occurrence for the time period in question is known”; and finally, “it is possible to count how many events have occurred.” If these conditions hold, the Poisson distribution will tell you how often the event in question will happen in the future—a pretty useful feature for, say, predicting the results of an election. But that wasn’t what was intriguing about Bortkiewicz’s study: what made it important enough to outlast the government that commissioned it was his finding that the Poisson distribution “may be used in reverse”—a discovery that ended up telling us about far more than the care of Prussian horses.
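For the curious, the distribution itself is simple enough to compute by hand; the short Python sketch below just evaluates the standard Poisson formula, using an average of 0.6 deaths per corps per year chosen for illustration:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events when the average rate is lam."""
    return (lam ** k) * exp(-lam) / factorial(k)

# With an average of, say, 0.6 deaths per corps per year (an illustrative
# figure), the distribution predicts how often 0, 1, 2, ... deaths occur.
lam = 0.6
for k in range(5):
    print(f"P({k} deaths in a corps-year) = {poisson_pmf(k, lam):.3f}")
```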

What “Bortkiewicz realized,” as Aatish Bhatia of Wired wrote some years ago, was “that he could use Poisson’s formula to work out how many deaths you could expect to see” if the deaths from horse kicks in the Prussian army were random. The key to the Poisson distribution, in other words, is the second component, “occurrences are independent, so that one occurrence neither diminishes nor increases the chance of another”: a Poisson distribution describes processes that are like the flip of a coin. As everyone knows, each flip of a coin is independent of the one that came before; hence, the record of successive flips is the record of a random process—a process that will leave its mark, Bortkiewicz understood.

A Poisson distribution describes a random process; conversely, if a set of data follows a Poisson distribution, the most plausible explanation is that a random process produced it. A distribution that matches what a Poisson distribution would predict is one in which each occurrence appears independent of those that came before. As the UMass mathematicians say, “if the data are lumpy, we look for what might be causing the lump,” while conversely, if “the data fit the Poisson expectation closely, then there is no strong reason to believe that something other than random occurrence is at work.” Anything that follows a Poisson distribution is likely the result of a random process; what Bortkiewicz had discovered, in other words, was a tool for detecting randomness.
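Here is what that “reverse” use looks like in practice—a short Python sketch comparing the horse-kick counts commonly reproduced from Bortkiewicz’s study (ten corps over twenty years; treat the figures as illustrative rather than a definitive transcription) against what a Poisson distribution with the same average would predict:

```python
from math import exp, factorial

# Observed horse-kick deaths per corps-year, as commonly reproduced from
# Bortkiewicz's study (ten corps over twenty years; treated here as
# illustrative figures rather than a definitive transcription).
observed = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}

n = sum(observed.values())                          # 200 corps-years
mean = sum(k * c for k, c in observed.items()) / n  # average deaths per corps-year

def poisson_pmf(k, lam):
    return (lam ** k) * exp(-lam) / factorial(k)

print(f"mean deaths per corps-year: {mean:.2f}")
for k, count in observed.items():
    expected = n * poisson_pmf(k, mean)
    print(f"{k} deaths: observed {count:3d}, Poisson expects {expected:6.1f}")
# The observed and expected columns track each other closely—the "reverse"
# use of the distribution: nothing but chance seems to be at work.
```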

Take, for example, the German V-2 rocket attacks on London during the last years of World War II—the background, as it happens, to Thomas Pynchon’s novel Gravity’s Rainbow. As Pynchon’s book relates, the missiles seemed to be falling in a pattern: some parts of London were hit multiple times, while others were spared. Some Londoners argued that this “clustering” demonstrated that the Germans must have discovered a way to guide the missiles—something that would have been extraordinarily advanced for mid-twentieth-century technology. (Even today, guided munitions are far from universal: well under ten percent of the bombs dropped during the 1991 Gulf War, for instance, were “smart bombs.”) So the British scientist R. D. Clarke divided London into a grid of small squares and counted how many V-2s had landed in each. What he found was that the counts matched a Poisson distribution—the Germans did not possess a super-advanced guidance system. When their rockets clustered on a neighborhood, they were just lucky.
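Clarke’s actual figures are in his own 1946 paper; the Python sketch below makes the same point by simulation (the grid size and number of impacts are chosen to be roughly of that order, but the exercise is purely illustrative): scatter impacts entirely at random, and “clusters” appear anyway.

```python
import random
from collections import Counter
from math import exp, factorial

random.seed(42)

# A simulation in the spirit of Clarke's analysis (numbers invented for
# illustration): drop 537 impacts uniformly at random on a 24 x 24 grid
# of squares and count how many hits each square receives.
GRID, IMPACTS = 24, 537
hits = Counter()
for _ in range(IMPACTS):
    square = (random.randrange(GRID), random.randrange(GRID))
    hits[square] += 1

counts = Counter(hits[(x, y)] for x in range(GRID) for y in range(GRID))
lam = IMPACTS / GRID ** 2   # average impacts per square

def poisson_pmf(k, l):
    return (l ** k) * exp(-l) / factorial(k)

for k in range(5):
    print(f"{k} hits: {counts[k]:3d} squares, "
          f"Poisson expects {GRID ** 2 * poisson_pmf(k, lam):6.1f}")
# Even though every impact point was chosen at random, some squares are
# hit two, three, or more times: clustering alone is not evidence of aim.
```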

Daniel Kahneman, the Israeli psychologist, has a similar story: “‘During the Yom Kippur War, in 1973,’” Kahneman told New Yorker writer Atul Gawande, he was approached by the Israeli Air Force to investigate why, of two squadrons that flew during the war, “‘one had lost four planes and the other had lost none.’” Kahneman told them not to waste their time, because a “difference of four lost planes could easily have occurred by chance.” Without knowing about Bortkiewicz, that is, the Israeli Air Force “would inevitably find some measurable differences between the squadrons and feel compelled to act on them”—differences that, in reality, mattered not at all. Israel’s opponents were bound to hit some of Israel’s warplanes; it just so happened that the hits were clustered in one squadron and not the other.
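The arithmetic behind Kahneman’s shrug is short enough to show—assuming, purely for illustration, that each of the four losses was independent and equally likely to come from either squadron:

```python
from math import comb

# A back-of-envelope check of Kahneman's point (assuming, for illustration,
# that each of the four losses was independent and equally likely to come
# from either of the two squadrons).
p_all_in_one = 2 * 0.5 ** 4          # either squadron could be the unlucky one
print(f"P(one squadron loses all four planes) = {p_all_in_one:.3f}")  # 0.125

# The same figure via the binomial distribution: a 4-0 split among n = 4 losses.
n = 4
p_lopsided = sum(comb(n, k) for k in (0, n)) * 0.5 ** n
print(f"Same figure via the binomial: {p_lopsided:.3f}")
```

A one-in-eight event is nothing to build a theory on.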

Why though, should any of this matter in terms of the distinction between “brute” and “social” facts? Well, consider what Herodotus wrote more than two millennia ago: what matters, when studying war, is the reasons people had for fighting. After all, wars are some of the best examples of a “social fact” anywhere: wars only exist, Herodotus is claiming, because of what people think about them. But what if it could be shown that, actually, there’s a good case to be made for thinking of war as a “brute fact”—something more like DNA or Neptune than like money or a home run? As it happens, at least one person, following in Bortkiewicz’ footsteps, already has.

In November of 1941, the British meteorologist and statistician Lewis Fry Richardson published, in the journal Nature, a curious article entitled “Frequency of Occurrence of Wars and Other Quarrels.” Richardson, it seems, had had enough of the endless theorizing about war’s causes: whether it be due to, say, simple material greed, or religion, or differences between various cultures or races. (Take for instance the American Civil War: according to some Southerners, the war could be ascribed to the racial differences between Southern “Celtics” versus Northern “Anglo-Saxons”; according to William Seward, Abraham Lincoln’s Secretary of State, the war was due to the differences in economic systems between the two regions—while to Lincoln himself, perhaps characteristically, it was all due to slavery.) Rather than argue with the historians, Richardson decided to instead gather data: he compiled a list of real wars going back centuries, then attempted to analyze the data he had collected.

What Richardson found was, to say the least, highly damaging to Herodotus: as Brian Hayes puts it in a recent article in American Scientist about Richardson’s work, when Richardson compared a group of wars with similar amounts of casualties to a Poisson distribution, he found that the “match is very close.” The British scientist also “performed a similar analysis of the dates on which wars ended—the ‘outbreaks of peace’—with the same result.” Finally, he checked another data set concerning wars, this one compiled by the University of Chicago’s Quincy Wright—“and again found good agreement.” “Thus,” Hayes writes, “the data offer no reason to believe that wars are anything other than randomly distributed accidents.” Although Herodotus argued that the only way to study wars is to study the motivations of those who fought them, there may in reality be no more “reason” for the existence of war than for a forest fire in Southern California.

Herodotus, to be sure, could not have seen that: the mathematics of his time was nowhere near sophisticated enough to describe a Poisson distribution, much less fit one to data. The Greek historian was therefore eminently justified in thinking that wars must have “reasons”: he literally did not have the conceptual tools necessary to entertain the idea that wars might have no reasons at all. That option was unavailable to him. Through the work of Bortkiewicz and his successors, however, it has become available: indeed, the innovation of these statisticians has been to show that our default assumption ought to be what they call the “null hypothesis,” defined by the Cambridge Dictionary of Statistics as “the ‘no difference’ or ‘no association’ hypothesis.” Unlike Herodotus, who presumed that explanations must equal causes, we now assume that we ought first to be sure there is anything to explain before trying to explain it.

In this case, then, it may be the press’s Herodotean commitment to discovering “reasons” that explains why nobody in the public sphere predicted Donald Trump’s victory: because the press is already committed to the supremacy of analysis over observation, it could not perform the observations necessary to think Trump could win. Or, as Cassidy put it, when a reporter saw the statistical election model of choice “registering the chances of the election going a certain way at ninety per cent, or ninety-five per cent, it’s easy to dismiss the other outcome as a live possibility—particularly if you haven’t been schooled in how to think in probabilistic terms, which many people haven’t.” Just how powerful the preference for analysis over data can be is demonstrated by the fact that—even after noting the widespread lack of probabilistic thinking—Cassidy still thinks it possible that “F.B.I. Director James Comey’s intervention ten days before the election,” in which Comey announced his staff was still investigating Clinton’s emails, “may have proved decisive.” In other words, despite knowing something about the workings of probability, Cassidy still thinks it possible that a letter from the F.B.I. director mattered more to the outcome of this past election than the evidence of their own lives did to millions of Americans—or, say, than the effect of a system in which the answer to the question “where” outweighs that of “how many.”

Probabilistic reasoning, of course, was unavailable to Herodotus, who lived two millennia before the necessary mathematical tools were invented—which is to say that, while some like to claim that the war between interpretation and data is eternal, it might not be. Yet John Cassidy—and Michael Bérubé—do not live before those tools were invented, and they persist in writing as if they did. That is fine insofar as it is their choice as private citizens; it ought to be quite another thing insofar as it is their job as journalist and teacher, respectively—particularly in a case like the 2016 election, when a clear public understanding of events matters to the continued health of the nation as a whole. Some people appear to think that continuing the quarrels of people whose habits of mind, today, would barely qualify them to teach Sunday school is something noble; in reality, it may just be a measure of how far we have yet to travel.

 

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode reveal, has awakened in the frozen north of Thrones’ fictional world. The uncertainty about what is going to happen is thus mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with its seeming ideological opposite: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of things (maybe everything) are simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism—the idea that everything is a matter of human discussion—is not so obviously on the side of good rather than evil as the nation’s professors appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s good reason to think that disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, no portion of the university pays more homage to the dogma of “social construction” than the business school does.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel laureate (well, a winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance”—but in 1965 he was an obscure graduate student, at least until he wrote the paper that established him within his profession that year: “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (This is what stock prospectuses mean when they say that past performance cannot predict future performance.) What Fama meant was that, no matter how many times he went back over the data, he could find no means of predicting the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
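What a series of “cumulated random numbers” looks like is easy to demonstrate; the Python sketch below (entirely synthetic, with no real market data and an invented one-percent daily volatility) builds a “price” out of independent random moves and then checks whether yesterday’s direction says anything about today’s:

```python
import random

random.seed(0)

# A minimal sketch of a "cumulated random numbers" price series: each
# day's move is independent of every previous move, with zero drift.
# (Purely illustrative; no real market data involved.)
price = 100.0
prices = [price]
for _ in range(250):                       # roughly one trading year
    price *= 1 + random.gauss(0, 0.01)     # ~1% daily volatility, invented
    prices.append(price)

returns = [prices[i + 1] / prices[i] - 1 for i in range(len(prices) - 1)]

# Does an up day predict another up day? With independent moves the
# frequency comes out near one half: "the series of price changes has
# no memory."
up_after_up = sum(1 for a, b in zip(returns, returns[1:]) if a > 0 and b > 0)
ups = sum(1 for a in returns[:-1] if a > 0)
print(f"final 'price' after a year of coin flips: {prices[-1]:.2f}")
print(f"frequency of an up day following an up day: {up_after_up / ups:.2f}")
```

The resulting chart, if you plot it, is full of apparent trends and reversals—none of which contains any information about what comes next.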

Those consequences can be viewed in such papers as Fama’s 2010 study with his colleague Kenneth French, “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether the managers of mutual funds can actually do what they claim they can do, and outperform the stock market. In “Luck versus Skill,” Fama and French conclude that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there really are intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It is close to the idea explored, for instance, by psychologist Amos Tversky and his co-authors in the paper “The Hot Hand in Basketball,” which argued that what appears to be a “hot,” or “clutch,” basketball shooter is simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will sometimes make a number of baskets in a row. Similarly, if there are enough mutual funds in the market, some of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or, to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
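The same point can be made in a few lines of simulation—again purely illustrative, not the data from Tversky’s paper or Fama’s study: a fifty-percent shooter taking independent shots, and a thousand coin-flipping “fund managers” compiling ten-year records.

```python
import random

random.seed(1)

# A sketch of the "hot hand" point: a shooter who makes 50% of her shots,
# each one independent of the last, will still rack up impressive-looking
# streaks purely by chance. (Illustrative simulation only.)
def longest_streak(shots):
    best = run = 0
    for made in shots:
        run = run + 1 if made else 0
        best = max(best, run)
    return best

season = [random.random() < 0.5 for _ in range(1000)]   # 1,000 coin-flip shots
print(f"longest made-shot streak in 1,000 attempts: {longest_streak(season)}")

# The same logic applies to mutual funds: among many coin-flipping
# "managers," a few will beat the market several years running by luck alone.
managers = [[random.random() < 0.5 for _ in range(10)] for _ in range(1000)]
perfect = sum(1 for record in managers if all(record))
print(f"managers who 'beat the market' 10 years straight: {perfect} of 1,000")
```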

That, Fama might say, doesn’t mean the monkey has somehow gotten in touch with Reality: if no one can outperform the market, then there is nothing anyone could know that would make them a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools begins to converge with that of their seemingly liberal opponents in departments of the humanities.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons,” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might sell for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that Fama is saying, in effect, that there is a bubble in the concept of bubbles, what he intends is to deny that bubbles exist—and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County vs. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less-often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave. “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly,” wrote Stephen Macedo—Princeton University’s Laurance S. Rockefeller Professor of Politics—recently, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections but “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter whether anyone votes at all—which is to say that, to a great many smart and supposedly “liberal” or “leftist” people, the notion that voting has any “intrinsic value” has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.