Water to the Sea

Yet lives our pilot still. Is’t meet that he
Should leave the helm and like a fearful lad
With tearful eyes add water to the sea
And give more strength to that which hath too much,
Whiles, in his moan, the ship splits on the rock,
Which industry and courage might have saved?
Henry VI, Part III. Act V, scene iv.

“Those who make many species are the ‘splitters,’ and those who make few are the ‘lumpers,’” remarked Charles Darwin in an 1857 letter to botanist J.D. Hooker; the title of University of Chicago professor Kenneth Warren’s most recent book, What Was African-American Literature?, announces him as a “lumper.” The chief argument of Warren’s book is that the claim that something called “African-American literature” is “different from the rest of American lit[erature]”—a claim that many of Warren’s colleagues, perhaps no one more so than Harvard’s Henry Louis Gates, Jr., have based their careers upon—is, in reality, a claim that, historically, many writers with large amounts of melanin would have rejected. Take the fact, Warren says, that “literary societies … among free blacks in the antebellum north were not workshops for the production of a distinct black literature but salons for producing works of literary distinction”: these were not people looking to split off—or secede—from the state of literature. Warren’s work is, thereby, aimed against those who, like so many Lears, have divided and subdivided literature by attaching so many different adjectives to literature’s noun—an attack Warren says he makes because “a literature insisting that the problem of the 21st century remains the problem of the color line paradoxically obscures the economic and political problems facing many black Americans, unless those problems can be attributed to racial discrimination.” What Warren sees, I think, is that far too much attention is being paid to the adjective in “African-American literature”—though what he may not see is that the real issue concerns the noun.

The noun being, of course, the word “literature”: Warren’s account worries the “African-American” part of “African-American literature” instead of the “literature” part. Specifically, in Warren’s view what links the adjective to the noun—or “what made African American literature a literature”—was the regime of “constitutionally-sanctioned state-enforced segregation” known as Jim Crow, which made “black literary achievement … count, almost automatically, as an effort on behalf of the ‘race’ as a whole.” Without that institutional circumstance there are writers who are black—but no “black writers.” To Warren, it’s the distinct social structure of Jim Crow, hardening in the 1890s, that creates “black literature,” instead of merely examples of writing produced by people whose skin is darker-colored than that of other writers.

Warren’s argument thereby takes the familiar form of the typical “social construction” argument, as outlined by Ian Hacking in his book, The Social Construction of What? Such arguments begin, Hacking says, when “X is taken for granted,” and “appears to be inevitable”; in the present moment, African-American literature can certainly be said—for some people—to appear to be inevitable: Harvard’s Gates, for instance, has long claimed that “calls for the creation of a [specifically “black”] tradition occurred long before the Jim Crow era.” But it’s just at such moments, Hacking says, that someone will observe that in fact the said X is “the contingent product of the social world.” Which is just what Warren does.

Warren points out that although those who argue for an ahistorical vision of an African-American literature claim that all black writers were attempting to produce a specifically black literature, the historical evidence points, merely, to an attempt to produce literature: i.e., a member of the noun class without a modifying adjective. At least, until the advent of the Jim Crow system at the end of the nineteenth century: it’s only after that time, Warren says, that “literary work by black writers came to be discussed in terms of how well it served (or failed to serve) as an instrument in the fight against Jim Crow.” In the familiar terms of the hallowed social constructionist argument, Warren is claiming that the adjective is added to the noun later, as a result of specific social forces.

Warren’s is an argument, of course, with a number of detractors, and not simply Gates. In The Postethnic Literary: Reading Paratexts and Transpositions Around 2000, Florian Sedlmeier charged Warren with reducing “African American identity to a legal policy category” and with offering an account that “relegates the functions of authorship and literature to the economic subsystem.” It’s a familiar version of the “reductionist” charge often leveled by “postmoderns” against Marxists—an accusation that is tiresome at best these days.

More creatively, in a symposium of responses to Warren in the Los Angeles Review of Books, Erica Edwards attempted to one-up Warren by saying that Warren fails to recognize that perhaps the true “invention” of African-American literature was not during the Jim Crow era of legalized segregation, but instead “with the post-Jim Crow creation of black literature classrooms.” Whereas Gates, in short, wishes to locate the origin of African-American literature in Africa prior to (or concurrently with) slavery itself, and Warren instead locates it in the 1890s during the invention of Jim Crow, Edwards wants to locate it in the 1970s, when African-American professors began to construct their own classes and syllabi. Edwards’ argument, at the least, has a certain empirical force: the term “African-American” itself is a product of the civil rights movement and afterwards; that is, the era of the end of Jim Crow, not its beginnings.

Edwards’ argument thereby leads nearly seamlessly into Aldon Lynn Nielsen’s objections, published as part of the same symposium. Nielsen begins by observing that Warren’s claims are not particularly new: Thomas Jefferson, he notes, “held that while Phillis Wheatley [the eighteenth-century black poet] wrote poems, she did not write literature,” while George Schuyler, the black novelist, wrote for The Nation in 1926 that “there was not and never had been an African American literature”—for the perhaps-surprising reason that there was no such thing as an African-American. Schuyler instead felt that the “Negro”—his term—“was no more than a ‘lampblacked Anglo-Saxon.’” In that sense, Schuyler’s argument was even more committed to the notion of “social construction” than Warren is: whereas Warren questions the timelessness of the category of a particular sort of literature, Schuyler questioned the existence of a particular category of person. Warren, that is, merely questions why “African-American literature” should be distinguished—or split from—“American literature”; Schuyler—an even more incorrigible lumper than Warren—questioned why “African-Americans” ought to be distinguished from “Americans.”

Yet, if even the term “African-American,” considered as a noun itself rather than as the adjective it is in the phrase “African-American literature,” can be destabilized, then surely that ought to raise the question, for these sharp-minded intellectuals, of the status of the noun “literature.” For it is precisely the catechism of many today that it is the “liberating” features of literature—that is, exactly, literature’s supposed capacity to produce the sort of argument delineated and catalogued by Hacking, the sort of argument in which it is argued that “X need not have existed”—that will produce, and has produced, whatever “social progress” we currently observe about the world.

That is the idea that “social progress” is the product of an increasing awareness of Nietzsche’s description of language as a “mobile army of metaphors, metonyms, and anthropomorphisms”—or, to use the late American philosopher Richard Rorty’s terminology, to recognize that “social progress” is a matter of redescription by what he called, following literary critic Harold Bloom, “strong poets.” Some version of such a theory is held by what Rorty, following University of Chicago professor Allan Bloom, called “‘the Nietzscheanized left’”: one that takes seriously the late Belgian literature professor Paul de Man’s odd suggestion that “‘one can approach … the problems of politics only on the basis of critical-linguistic analysis,’” or the late French historian Michel Foucault’s insistence that he would not propose a positive program, because “‘to imagine another system is to extend our participation in the present system.’” But such sentiments have hardly been limited to European scholars.

In America, for instance, former Duke University professor of literature Jane Tompkins echoed Foucault’s position in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History.” There, Tompkins approvingly cited novelist Harriet Beecher Stowe’s belief, as expressed in Uncle Tom’s Cabin, that the “political and economic measures that constitute effective action for us, she regards as superficial, mere extensions of the worldly policies that produced the slave system in the first place.” In the view of people like Tompkins, apparently, “political measures” will somehow sprout out of the ground of their own accord—or at least, by means of the transformative redescriptive powers of “literature.”

Yet if literature is simply a matter of redescription then it must be possible to redescribe “literature” itself: which in this paragraph will be done in terms of a growing scientific “literature” (!) that, since the 1930s, has examined the differences between animals and human beings in terms of what are known as “probability guessing experiment[s].” In the classic example of this research—as cited in a 2000 paper called “The Left Hemisphere’s Role in Hypothesis Formation”—if a light is flashed with a ratio of 70% red light to 30% green, animals will tend always to guess red, while human beings will attempt to anticipate which light will be flashed next: in other words, animals will “tend to maximize or always choose the option that has occurred most frequently in the past”—whereas human beings will “tend to match the frequency of previous occurrences in their guesses.” Animals will simply always guess the same answer, while human beings will attempt to divine the pattern: that is, they will make their guesses based on the assumption that the previous series of flashes was meaningful. If the previous three flashes were “red, red, green,” a human being will tend to guess whatever the supposed pattern predicts should come next, whereas an animal will simply always guess red.

That in turn implies that, since in this specific example there is in fact no pattern and merely a probabilistic ratio of green to red, animals will, on average, outperform human beings in this sort of test: as the authors of the paper write, “choosing the most frequent option all of the time, yields more correct guesses than matching as long as p ≠ 0.5.” Or, as they also note, “if the red light occurs with a frequency of 70% and a green light occurs with a frequency of 30%, overall accuracy will be highest if the subject predicts red all the time.” It’s true, in other words, that attempting to match a pattern will result in being correct 100% of the time—if the pattern is successfully matched. That result has, arguably, consequences for the liberationist claims of social constructionist arguments in general and literature in particular.
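To make the arithmetic concrete, here is a minimal simulation of the two strategies, written as my own illustration in Python rather than taken from the paper; the 70/30 split and the trial count are the only assumptions.

```python
import random

def run_trials(p_red=0.7, n=100_000, seed=42):
    """Compare 'maximizing' (always guess red) with 'matching'
    (guess red with the same 70% frequency the light shows)."""
    rng = random.Random(seed)
    flashes = ['red' if rng.random() < p_red else 'green' for _ in range(n)]

    # Maximizing: always pick the most frequent option.
    maximize_hits = sum(1 for f in flashes if f == 'red')

    # Matching: guess red 70% of the time, green 30% of the time.
    match_hits = sum(1 for f in flashes
                     if f == ('red' if rng.random() < p_red else 'green'))

    print(f"maximizing accuracy: {maximize_hits / n:.3f}")   # about 0.70
    print(f"matching accuracy:   {match_hits / n:.3f}")      # about 0.58

run_trials()
```

The expected accuracy of matching is p² + (1 − p)², or 58% when p = 0.7, while always guessing red is correct 70% of the time, which is the entire force of the finding.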

I trust that, without much in the way of detail—which I think could be elucidated at tiresome length—it can be stipulated that, more or less, the entire liberatory project of “literature” described above, as held by such luminaries as Foucault or Tompkins, can be said to be an attempt at elaborating rules for “pattern recognition.” Hence, it’s possible to understand how training in literature might be helpful towards fighting discrimination, which after all is obviously about constructing patterns: racists are not racist towards merely 65% of all black people, nor are they racist only 37% of the time. Racism—and other forms of discrimination—are not probabilistic but deterministic: they are rules used by discriminators that are directed at everyone within the class. (It’s true that the phenomenon of “passing” raises questions about classes, but the whole point of “passing” is that individual discriminators are unaware of the class’s “true” boundaries.) So it’s easy to see how pattern-recognition might be a useful skill with which to combat racial or other forms of discrimination.

Matching a pattern, however, suffers from one difficulty: it requires the existence of a pattern to be matched. Yet, in the example discussed in “The Left Hemisphere’s Role in Hypothesis Formation”—as in everything influenced by probability—there is no pattern: there is merely a larger chance of the light being red rather than green in each instance. Attempting to match a pattern in a situation ruled instead by probability is not only unhelpful, but positively harmful: because there is no pattern, “guessing” cannot perform as well as simply maintaining the same choice every time. (Which in this case would at least result in being correct 70% of the time.) In probabilistic situations, in other words, where there is merely a certain probability of a given result rather than a certain pattern, both empirical evidence and mathematics itself demonstrate that the animal procedure of always guessing the same will be more successful than the human attempt at pattern recognition.

Hence, it follows that although training in recognizing patterns—the basis of schooling in literature, it might be said—might be valuable in combatting racism, such training will not be helpful in facing other sorts of problems: as the scientific literature demonstrates, pattern recognition as a strategy only works if there is a pattern. That in turn means that literary training can only be useful in a deterministic, and not probabilistic, world—and therefore the project of “literature,” so-called, can only be “liberatory” in the sense meant by its partisans if the obstacles from which human beings need liberation are pattern-based. And that’s a conclusion, it seems to me, that is questionable at best.

Take, for example, the matter of American health care. Unlike all other industrialized nations, the United States does not have a single, government-run healthcare system, despite the fact that—as Malcolm Gladwell has noted, and as the American labor movement knew as early as the 1940s—“the safest and most efficient way to provide insurance against ill health or old age [is] to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, insurance works best by lumping, not splitting. The reason may be the same as the reason that, as the authors of “The Left Hemisphere’s Role in Hypothesis Formation” point out, it can be said that “humans choose a less optimal strategy than rats” when it comes to probabilistic situations. Contrary to the theories of those in the humanities, in other words, the reality is that human beings in general—and Americans when it comes to health care—appear to have a basic unfamiliarity with the facts of probability.

One sign of that ignorance is, after all, the growth of casino gambling in the United States even as health care remains a hodgepodge of differing systems—despite the fact that both insurance and casinos run on precisely the same principle. As statistician and trader Nassim Taleb has pointed out, casinos “never (if they do things right) lose money”—so long as they are not run by Donald Trump—because they “simply do not let one gambler make a massive bet” and instead prefer “to have plenty of gamblers make a series of bets of limited size.” In other words, it is not possible for some high roller to bet, say, a Las Vegas casino the entire worth of the casino on a single hand of blackjack, or any other game; casinos simply limit the stakes to something small enough that the continued existence of the business is not at risk on any one particular event, and then make sure that there are enough bets being made to allow the odds in every game (which are tilted toward the casino) to ensure the continued health of the business. Insurance, as Gladwell observed above, works precisely the same way: the more people paying premiums—and the more widely dispersed they are—the less likely it is that any one catastrophic event can wipe out the insurance fund. Both insurance and casinos are lumpers, not splitters: that, after all, is precisely why all other industrialized nations have put their health care systems on a national basis rather than maintaining the various subsystems that Americans—apparently inveterate splitters—still have.
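The arithmetic behind that claim can be sketched quickly. The following toy simulation is my own illustration (the 51% house edge, the bet sizes, and the trial counts are invented for the example, not drawn from Taleb or Gladwell): it compares a house that accepts one enormous wager with a house that spreads the same total stake over many small ones.

```python
import random

def house_loss_rate(n_bets, stake, edge=0.51, trials=1_000, seed=1):
    """Fraction of simulated runs in which the house ends up losing money,
    given n_bets independent even-money wagers it wins with probability `edge`."""
    rng = random.Random(seed)
    losing_runs = 0
    for _ in range(trials):
        profit = sum(stake if rng.random() < edge else -stake
                     for _ in range(n_bets))
        if profit < 0:
            losing_runs += 1
    return losing_runs / trials

# The same total money at risk, split differently:
print("one $10,000 bet:", house_loss_rate(1, 10_000))     # close to 0.49
print("10,000 $1 bets: ", house_loss_rate(10_000, 1))     # roughly 0.02
```

Splitting the risk into many small, independent bets lets the law of large numbers pull the result toward the house's (or the insurance pool's) expected edge; one big bet leaves the outcome to something close to a coin flip.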

Health care, of course, is but one of the many issues of American life that, although influenced by racial and other kinds of discrimination, ultimately have little to do with it: what matters about health care, in other words, is that too few Americans are getting it, not merely that too few African-Americans are. The same is true, for instance, about incarceration: although such works as Michelle Alexander’s The New Jim Crow have argued that the fantastically high rate of incarceration in the United States constitutes a new “racial caste system,” University of Pennsylvania professor of political science Marie Gottschalk has pointed out that “[e]ven if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” The problem with American prisons, in other words, is that there are too many Americans in them, not (just) too many African-Americans—or any other sort of American.

Viewing politics through a literary lens, in sum—as a matter of flashes of insight and redescription, instantiated by Wittgenstein’s duck-rabbit figure and so on—ultimately has costs: costs that have been witnessed again and again in recent American history, from the War on Drugs to the War on Terror. As Warren recognizes, viewing such issues as health care or prisons through a literary, or more specifically racial, lens is ultimately an attempt to fit a square peg through a round hole—or, perhaps even more appositely, to bring a knife to a gun fight. Warren, in short, may as well have cited UCLA philosophy professor Abraham Kaplan’s observation, sometimes called Kaplan’s Law of the Instrument: “Give a boy a hammer and everything he meets has to be pounded.” (Or, as Kaplan put the point more delicately, it ought not to be surprising “to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”) Much of the American “left,” in other words, views all problems as matters of redescription and so on—a belief not far from common American exhortations to “think positively” and the like. Certainly, America is far from the post-racial utopia some would like it to be. But curing the disease is not—contrary to the beliefs of many Americans today—the same as diagnosing it.

Like it—or lump it.

Baal

Just as ancient Greek and Roman propagandists insisted, the Carthaginians did kill their own infant children, burying them with sacrificed animals and ritual inscriptions in special cemeteries to give thanks for favours from the gods, according to a new study.
The Guardian, 21 January 2014.

 

Just after the last body fell, at three seconds after 9:40 on the morning of 14 December, the debate began: it was about, as it always is, whether Americans ought to follow sensible rules about guns—or whether guns ought to be easier to obtain than, say, the right to pull fish out of the nearby Housatonic River. A lot of words have been written about the Sandy Hook killings since the day that Adam Lanza—the last body to fall—killed 20 children and six adults at the elementary school he once attended, but few of them have examined the culpability of some of the very last people one might expect with regard to the killings: the denizens of the nation’s universities. After all, it’s difficult to accuse people who themselves are largely in favor of gun control of aiding and abetting the National Rifle Association—Pew Research reported, in 2011, that more than half of people with more than a college degree favored gun control. And yet, over the past several generations a doctrine has gained ground that, I think, has not only allowed academics to absolve themselves of engaging in debate on the subject of gun control, but has actively harmed the possibility of accomplishing it.

Having said that, of course, it is important to acknowledge that virtually all academics—even those who consider themselves “conservative” politically—are in favor of gun control: when, for example, Texas recently passed a law legalizing the carrying of guns on college campuses, Daniel S. Hamermesh, a University of Texas emeritus professor of economics (not exactly a discipline known for its radicalism), resigned his position, citing a fear for his own and his students’ safety. That’s not likely accidental, because not only do many academics oppose guns in their capacities as citizens, but academics have a special concern when it comes to guns: as Firmin DeBrabander, a professor of philosophy at the Maryland Institute College of Art, argued in the pages of Inside Higher Ed last year, against laws similar to Texas’, “guns stand opposed” to the “pedagogical goals of the classroom” because while in the classroom “individuals learn to talk to people of different backgrounds and perspectives,” guns “announce, and transmit, suspicion and hostility.” If anyone has a particular interest in controlling arms, in other words, it’s academics, since their work is particularly designed to foster what DeBrabander calls “open and transformative exchange” that may air “ideas [that] are offensive.” So to think that academics may in fact be an obstacle towards achieving sensible policies regarding guns might appear ridiculous on the surface.

Yet there’s actually good reason to think that academic liberals bear some responsibility for the United States’ inability to regulate guns like every other industrialized—I nearly said, “civilized”—nation on earth. That’s because changing gun laws would require specific demands for action, and as political science professor Adolph Reed, Jr. of the University of Pennsylvania put the point not long ago in Harper’s, these days the “left has no particular place it wants to go.” That is, to many on campus and off, making specific demands of the political sphere is itself a kind of concession—or in other words, as journalist Thomas Frank remarked a few years ago about the Occupy Wall Street movement, today’s academic left teaches that “demands [are] a fetish object of literal-minded media types who stupidly crave hierarchy and chains of command.” Demanding changes to gun laws is, after all, a specific demand, and to make specific demands is, from this sophisticated perspective, a kind of “sell out.”

Still, how did the idea of making specific demands become a derided form of politics? After all, the labor movement (the eight-hour day), the suffragette movement (women’s right to vote) and the civil rights movement (an end to Jim Crow) all made specific demands. How then has American politics arrived at the diffuse and essentially inarticulable argument of the Occupy movement—a movement within which, Elizabeth Jacobs claimed in a report for the Brookings Institution while the camp in Zuccotti Park still existed, “the lack of demands is a point of pride”? I’d suggest that one possible way the trick was turned was through a 1967 article written by one Robert Bellah, of Harvard: an article that described American politics, and its political system, as a “civil religion.” By describing American politics in religious rather than secular terms, Bellah opened the way towards what some have termed the “non-politics” of Occupy and other social movements—and, incidentally, allowed children like Adam Lanza’s victims to die.

In “Civil Religion in America,” Bellah—who received his bachelor’s from Harvard in 1950, and then taught at Harvard until moving to the University of California at Berkeley in 1967, where he continued until the end of his illustrious career—argued that “few have realized that there actually exists alongside of and rather clearly differentiated from the churches an elaborate and well-institutionalized civil religion in America.” This “national cult,” as Bellah terms it, has its own holidays: Thanksgiving Day, Bellah says, “serves to integrate the family into the civil religion,” while “Memorial Day has acted to integrate the local community into the national cult.” Bellah also remarks that the “public school system serves as a particularly important context for the cultic celebration of the civil rituals” (a remark that, incidentally, has perhaps played no little role in the attacks on public education over the past several decades). Bellah further argues that various speeches by American presidents like Abraham Lincoln and John F. Kennedy are examples of this “civil religion” in action: Bellah spends particular time with Lincoln’s Gettysburg Address, which, he notes, poet Robert Lowell observed is filled with Christian imagery and constitutes “a symbolic and sacramental act.” In saying so, Bellah is merely following a longstanding tradition regarding both Lincoln and the Gettysburg Address—a tradition, however, that does not have the political valence that Bellah, or his literal spiritual followers, might think it does.

“Some think, to this day,” wrote Garry Wills of Northwestern University in his magisterial Lincoln at Gettysburg: The Words that Remade America, “that Lincoln did not really have arguments for union, just a kind of mystical attachment to it.” It’s a tradition that Wills says “was the charge of Southerners” against Lincoln at the time: after the war, Wills notes, Alexander Stephens—the only vice president the Confederate States ever had—argued that the “Union, with him [Lincoln], in sentiment rose to the sublimity of a religious mysticism.” Still, it’s also true that others felt similarly: Wills points out that the poet Walt Whitman wrote that “the only thing like passion or infatuation” in Lincoln “was the passion for the Union of these states.” Nevertheless, it’s a dispute that might have fallen by the historical wayside if it weren’t for the work of literary critic Edmund Wilson, who called his essay on Lincoln (collected in the relatively famous book Patriotic Gore: Studies in the Literature of the American Civil War) “The Union as Religious Mysticism.” That book, published in 1962, seems at least to have influenced Lowell—the two were, if not friends, at least part of the same New York City literary scene—and, through Lowell, it seems plausible that it influenced Bellah.

Even if there was no direct route from Wilson to Bellah, however, it seems indisputable that the notion—taken from Southerners—concerning the religious nature of Lincoln’s arguments for the American Union became widely transmitted through American culture. Richard Nixon’s speechwriter, William Safire—later a longtime columnist for the New York Times—was familiar with Wilson’s ideas: as Mark Neely observed in his The Fate of Liberty: Abraham Lincoln and Civil Liberties, on two occasions in Safire’s novel Freedom, “characters comment on the curiously ‘mystical’ nature of Lincoln’s attachment to the Union.” In 1964, the theologian Reinhold Niebuhr published an essay entitled “The Religion of Abraham Lincoln,” while in 1963 William J. Wolfe of the Episcopal Theological School of Cambridge, Massachusetts, claimed that “Lincoln is one of the greatest theologians in America,” in the sense “of seeing the hand of God intimately in the affairs of nations.” Sometime in the early 1960s and afterwards, in other words, the idea took root among some literary intellectuals that the United States was a religious society—not one based on an entirely secular philosophy.

At least when it comes to Lincoln, at any rate, there’s good reason to doubt this story: far from being a religious person, Lincoln has often been described as non-religious or even an atheist. His longtime friend Jesse Fell—so close to Lincoln that it was he who first suggested what became the famous Lincoln-Douglas debates—for instance once remarked that Lincoln “held opinions utterly at variance with what are usually taught in the church,” and Lincoln’s law partner William Herndon—who was an early fan of Charles Darwin’s—said that the president also was “a warm advocate of the new doctrine.” Being committed to the theory of evolution—if Lincoln was—doesn’t mean, of course, that the president was therefore anti-religious, but it does mean that the notion of Lincoln as religious mystic has some accounting to do: if he was, it apparently was in no very simple way.

Still, as mentioned, the view of Lincoln as a kind of prophet did achieve at least some success within American letters—but, as Wills argues in Lincoln at Gettysburg, that success has in turn obscured what Lincoln really argued concerning the structure of American politics. As Wills remarks, for instance, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster, and few if any have considered Webster a mystic.” Webster’s views, in turn, descend from a line of American thought that goes back to the Revolution itself—though its most significant moment was at the Constitutional Convention of 1787.

Most especially, to one James Wilson, a Scottish emigrant, delegate to the Constitutional Convention of 1787, and later one of the first justices of the Supreme Court of the United States. If Lincoln got his notions of the Union from Webster, then Webster got his from Supreme Court Justice Joseph Story: as Wills notes, Theodore Parker, the Boston abolitionist minister, once remarked that “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” Story, for his part, got his notion from Wilson: as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later justice), Wilson was “a source for Joseph Story’s constitutional nationalism.” And Wilson’s arguments concerning the constitution—which he had a strong hand in making—were hardly religious.

At the constitutional convention, one of the most difficult topics to confront the delegates was the issue of representation: one of the motivations for the convention itself, after all, was the fact that under the previous terms of government, the Articles of Confederation, each state, rather than each member of the Continental Congress, possessed a vote. Wilson had already, in 1768, attacked the problem of representation as being one of the foremost reasons for the Revolution itself—the American colonists were supposed, by British law, to be fully as much British subjects as any Londoner or Mancunian, yet had no representation in Parliament: “Is British freedom,” Wilson therefore asked in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament, “denominated from the soil, or from the people, of Britain?” That question was very much the predecessor of the question Wilson would ask at the convention: “For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land.

Wilson also made an argument that would later be echoed by Lincoln: he drew attention to the disparities of population between the several states. At the time of the convention, Pennsylvania—just as it is today—was a much more populous state than New Jersey, a difference that made no difference under the Articles of Confederation, under which all states had the same number of votes: one. “Are not the citizens of Pennsylvania,” Wilson therefore asked the Convention, “equal to those of New Jersey? Does it require 150 of the former to balance 50 of the latter?” Lincoln would echo the argument when, in order to illustrate the differences between free states and slave states, he noted—in October of 1854, at Peoria, in the speech that marked his political comeback—that

South Carolina has six representatives, and so has Maine; South Carolina has eight presidential electors, and so has Maine. This is precise equality so far; and, of course they are equal in Senators, each having two. Thus in the control of the government, the two States are equals precisely. But how are they in the number of their white people? Maine has 581,813—while South Carolina has 274,567. Maine has twice as many as South Carolina, and 32,679 over. Thus each white man in South Carolina is more than the double of any man in Maine.

The point of attack for both men, in other words, was precisely the same: the matter of representation in terms of what would later be called a “one man, one vote” standard. It’s an argument that hardly appears “mystical” in nature: since the matter turns, if anything, upon ratios of numbers to each other, it seems more apposite to describe the point of view adopted here as, if anything, “scientific”—if it weren’t for the fact that even the word “scientific” seems too dramatic a word for a matter that appears to be far more elemental.
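For what it is worth, Lincoln's Peoria arithmetic checks out, and the ratio he gestures at can be made explicit. The short calculation below is my own gloss, using only the figures quoted above:

```python
maine_white_pop = 581_813
sc_white_pop = 274_567
seats_each = 6  # each state also had equal numbers of electors and senators

# Lincoln's subtraction: Maine has "twice as many ... and 32,679 over."
print(maine_white_pop - 2 * sc_white_pop)                  # 32679

# Seats per 100,000 white inhabitants in each state:
print(round(seats_each / maine_white_pop * 100_000, 2))    # about 1.03
print(round(seats_each / sc_white_pop * 100_000, 2))       # about 2.19
```

A South Carolinian's vote, in other words, carried a bit more than twice the weight of a Mainer's: an inequality of ratios, not a mystical proposition.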

Were Lincoln or Wilson alive today, then, it seems that the first point they might make about the gun control debate is that it is a matter about which Congress is greatly at variance with public opinion: as Carl Bialik reported for FiveThirtyEight this past January, whenever Americans are polled, “at least 70 percent of Americans [say] they favor background checks,” and an October 2015 poll by CBS News and the New York Times “found that 92 percent of Americans—including 87 percent of Republicans—favor background checks for all gun buyers.” Yet, as virtually all Americans are aware, it has become essentially impossible to pass any sort of sensible legislation through Congress: a fact dramatized this spring by a “sit-in” staged in Congress by congressmen and congresswomen. What Lincoln and Wilson might further say about the point is that the trouble can’t be solved by such a “religious” approach: instead, what they presumably would recommend is that what needs to change is a system that inadequately represents the people. That isn’t the answer on offer from academics and others on the American left, however. Which is to say that, soon enough, there will be another Adam Lanza to bewail—another of the sacrifices, one presumes, that the American left demands Americans must make to what one can only call their god.

Talk That Talk

Talk that talk.
“Boom Boom.”
    John Lee Hooker. 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, however, there are a lot of things that exist yet have little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for and solidarity with society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or is that idea like using a chainsaw to drive nails?

Despite current prejudices, which often these days depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand-in-hand with the powerful—as it must, given that it is dependent upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it’s depended on patronage of one sort or another. Hence, a potential intellectual weakness of basing a “left” around the idea of culture: the actual structure of the world of culture simply is the way that the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth” after all was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better to have one person with a million dollars. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie‘s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper flaw—at least if considering the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that knowing something about someone’s wealth doesn’t necessarily guarantee intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area that he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emphasis in original]; and also, “small size equals safe choices.” That is, of the collections he studied, he found that the smaller they were the more homogeneous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, just is the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie’s theory this is all to the good—on the theory that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever is a contest’s winner gets everything—and what that means is, as Taleb says about the art world, “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who might happen to have caught some small edge, in other words, tends to magnify small initial differences. What that would mean is that someone who might possess more over-all merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, that society would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their position by sheer happenstance rather than merit.
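That magnification of small, possibly accidental, early advantages can be illustrated with a standard "rich get richer" toy model. The sketch below is my own illustration (a Pólya-urn-style simulation, not anything Taleb or Moretti present): every contestant starts level, and each new unit of attention goes to a contestant with probability proportional to the attention already received.

```python
import random

def cumulative_advantage(n_contestants=100, n_rounds=10_000, seed=7):
    """Rich-get-richer sketch: everyone starts with one unit of attention;
    each new unit goes to a contestant chosen with probability proportional
    to the attention that contestant has already accumulated."""
    rng = random.Random(seed)
    attention = [1] * n_contestants
    for _ in range(n_rounds):
        winner = rng.choices(range(n_contestants), weights=attention)[0]
        attention[winner] += 1
    attention.sort(reverse=True)
    top_share = sum(attention[:5]) / sum(attention)
    # Under an equal split, the top 5 of 100 would hold exactly 5%.
    print(f"share of attention held by the top 5: {top_share:.0%}")

cumulative_advantage()
```

Even though every contestant here is identical by construction, a handful typically end up holding several times their proportional share, which is the sense in which luck compounds.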

A society, in other words, that was attempting to maximize the potential talent available to it—and it seems hardly arguable that this is the obvious goal—should not be trying to bury potential talent, but instead to expose as much of it as possible: to get it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually only emerge after years and years of training and experience, in “culture” many times such performers either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense then it’s hard to think of a worse model for a leftish kind of politics than culture, which perhaps explains why despite the fact that our universities are bulging with professors of art and literature and so on proclaiming “power to the people,” the United States is as unequal a place today as it has been since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome, which also means a potentially greater role for chance. It’s worse when it’s considered that the court is an institution that only irregularly gains new members: the appointment of new Supreme Court justices depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
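A toy calculation makes the small-numbers point concrete. The sketch below is my own illustration, with invented parameters (a pool of potential interpreters that leans 60/40 on some question, and two panel sizes); nothing in it comes from Purdy or Taleb.

```python
import random

def contrary_outcome_rate(panel_size, lean=0.6, trials=50_000, seed=3):
    """Chance that a randomly drawn panel, deciding by simple majority,
    lands on the opposite side from the 60% majority view of the
    pool it is drawn from."""
    rng = random.Random(seed)
    contrary = 0
    for _ in range(trials):
        votes_for = sum(1 for _ in range(panel_size) if rng.random() < lean)
        if votes_for <= panel_size // 2:
            contrary += 1
    return contrary / trials

print("9-member panel: ", contrary_outcome_rate(9))    # roughly 1 in 4
print("99-member panel:", contrary_outcome_rate(99))   # roughly 1 in 50
```

The fewer the deciders, the more the outcome turns on who happened to be drawn, which is exactly the variability, and the role for chance, at issue.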

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of the most important decisions of any nation potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, depends on a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet, Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.


Lest The Adversary Triumph

… God, who, though his power
Creation could repeat, yet would be loath
Us to abolish, lest the Adversary
Triumph …
Paradise Lost Book XI

… the literary chit-chat which makes the reputation of poets boom and crash in an imaginary stock exchange …
Anatomy of Criticism

A list of articles for “liberal” magazine Salon.com. The first is an attack on Darwinians like Richard Dawkins; the others ridicule creationists for being anti-Darwinian

 

“Son, let me make one thing clear,” Air Force General Curtis Le May, the first head of the Strategic Air Command, supposedly said sometime in the 1950s to a young officer who repeatedly referred to the Soviet Union as the “enemy” during a presentation about Soviet nuclear capabilities. “The Soviet Union,” the general explained, “is our adversary. The enemy is the United States Navy.” Similarly, the “sharp rise in U.S. inequality, especially at the very top of the income scale” in recent years—as Nobel Prize winner Paul Krugman called it, in 1992—might equally be the result of confusion: as Professor Walter Benn Michaels of the University of Illinois at Chicago has written, “the intellectual left has responded to the increase in economic inequality by insisting on the importance of cultural identity.” The simplest explanation for that disconnect, I’d suggest, is that while the “intellectual left” might talk a good game about “speaking truth to power” and whatnot, “power” is just their adversary. The real enemy is science, especially Darwinian biology—and, yet more specifically, a concept called “survivorship bias”—and that enmity may demonstrate that the idea of an oppositional politics based around culture, rather than science, is absurd.

Like a lot of American wars, this one is often invisible to the American public, partly because when academics like University of Chicago English professor W.J.T. Mitchell do write for the public,  they often claim their modest aim is merely to curb scientific hubris. As Mitchell piously wrote in 1998’s The Last Dinosaur Book: The Life and Times of a Cultural Icon, his purpose in that book was merely to note that “[b]iological explanations of human behavior … are notoriously easy, popular, and insidious.” As far as that goes, of course, Mitchell is correct: the history of the twentieth century is replete with failed applications of Darwinian thought to social problems. But then, the twentieth century is replete with a lot of failed intellectual applications—yet academic humanists tend to focus on blaming biology for the mistakes of the past.

Consider for example how many current academics indict a doctrine called “social Darwinism” for the social ills of a century ago. In ascending order of sophistication, here is Rutgers historian Jackson Lears asserting from the orchestra pit, in a 2011 review of books by well-known atheist Sam Harris, that the same “assumptions [that] provided the epistemological foundations for Social Darwinism” did the same “for scientific racism and imperialism,” while from the mezzanine level of middlebrow popular writing here is William Kleinknecht, in The Man Who Sold The World: Ronald Reagan and the Betrayal of Main Street America, claiming that in the late nineteenth and early twentieth centuries, “social Darwinism … had nourished a view of the lower classes as predestined by genetics and breeding to live in squalor.” Finally, a diligent online search discovers, in the upper balcony, Boston University student Evan Razdan’s bald assertion that at the end of the nineteenth century, “Darwinism became a major justification for racism and imperialism.” I could multiply the examples: suffice it to say that for a good many in academe, it is now gospel truth that Darwinism was on the side of the wealthy and powerful during the early part of the twentieth century.

In reality however Darwin was usually thought of as on the side of the poor, not the rich, in the early twentieth century. For investigative reporters like Ida Tarbell, whose The History of the Standard Oil Company is still today the foundation of muckraking journalism, “Darwin’s theory [was] a touchstone,” according to Steve Weinberg’s Taking on the Trust: The Epic Battle of Ida Tarbell and John D. Rockefeller. The literary movement of the day, naturalism, drew its characters “primarily from the lower middle class or the lower class,” as Donald Pizer wrote in Realism and Naturalism in Nineteenth-Century American Fiction, and even a scholar with a pro-religious bent like Doug Underwood must admit, as he does in From Yahweh to Yahoo: The Religious Roots of the Secular Press, that the “naturalists were particularly influenced by the theories of Charles Darwin.” Progressive philosopher John Dewey wrote in 1910’s “The Influence of Darwinism on Philosophy” that Darwin’s On the Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” (As American philosopher Richard Rorty has noted, Dewey and his pragmatists began “from a picture of human beings as chance products of evolution.”) Finally, Karl Marx—a person no one has ever thought to be on the side of the wealthy—thought so highly of Darwin that he exclaimed, in a letter to Frederick Engels, that On the Origin of Species “contains the basis in natural history for our view.” To blame Darwin for the inequality of the Gilded Age is like blaming Smokey the Bear for forest fires.

Even aside from the plain facts of history, however, you’d think the sheer absurdity of pinning the crimes of the robber barons on Darwin would be self-evident. If a thief cited Matthew 5:40—“And if any man will sue thee at the law, and take away thy coat, let him have thy cloke also”—to justify his theft, nobody would think that he had somehow thereby indicted Jesus. Logically, the idea a criminal cites to justify his crime makes no difference either to the fact of the crime or to the idea: that is why the advocates of civil disobedience, like Martin Luther King Jr., held that lawbreaking in the name of a higher law still requires the lawbreaker to be arrested, tried, and, if found guilty, sentenced. (Conversely, is it somehow worse that King was assassinated by a white supremacist? Or would it have been better had he been murdered in the course of a bank robbery that had nothing to do with his work?) Just because someone commits a crime in the name of an idea, as King sometimes did, doesn’t make the idea itself wrong, nor could it make Martin Luther King Jr. any less dead. And anyway, isn’t the notion of taking a criminal’s word about her motivations at face value dubious?

Somehow, however, the notion that Darwin is to blame for the desperate situation of the poor at the beginning of the twentieth century has been allowed to fester in the American university system: Eric Rauchway, a professor of history at the University of California, Davis, even complained in 2007 that anti-Darwinism has become so widespread among his students that it’s now a “cliche of the history paper that during the industrial era” all “misery and suffering” was due to the belief of the period’s “lords of plutocracy” in the doctrines of “‘survival of the fittest’” and “‘natural selection.’” That this makes no sense doesn’t seem to enter anyone’s calculations—despite the fact that most of these “lords,” like John D. Rockefeller and Andrew Carnegie, were “good Christian gentlemen,” just like many businessmen are today.

The whole idea of blaming Darwin, as I hope is clear, is at best exaggerated and at worst nonsense. But really to see the point, it’s necessary to ask why all those “progressive” and “radical” thinkers thought Darwin was on their side, not the rich man’s. The answer can be found by thinking clearly about what Darwin actually taught, rather than what some people supposedly used him to justify. And what the biologist taught was the doctrine of natural selection: a process that, understood correctly, is far from a doctrine that favors the wealthy and powerful. It would be closer to the truth to say that, on the contrary, what Darwin taught must always favor the poor against the wealthy.

To many in the humanities, that might sound absurd—but to those uncommitted, let’s begin by understanding Darwin as he understood himself, not by what others have claimed about him. And misconceptions of Darwin begin at the beginning: many people credit Charles Darwin with the idea of evolution, but that was not his chief contribution to human knowledge. A number of very eminent people, including his own grandfather, Erasmus Darwin, had argued for the reality of evolutionary descent long before Charles was even born: in his two-volume work of 1796, Zoonomia; or, the Laws of Organic Life, this older Darwin had for instance asserted that life had been evolving for “millions of ages before the commencement of the history of mankind.” So while the theory of evolution is at times presented as springing unbidden from Erasmus’ grandson Charles’ head, that’s simply not true.

By the time Charles published On the Origin of Species in 1859, the general outline of evolution was old hat to professionals, however shocking it may have been to the general public. On the Origin of Species had the impact it did because of the mechanism Darwin suggested to explain how the evolution of species could have proceeded—not because it presented the facts of evolutionary descent, although it did that in copious detail. Instead, as American philosopher Daniel Dennett has observed, “Darwin’s great idea” was “not the idea of evolution, but the idea of evolution by natural selection.” Or as the biologist Stephen Jay Gould has written, Darwin’s own chief aim in his work was “to advance the theory of natural selection as the most important mechanism of evolution.” Darwin’s contribution wasn’t to introduce the idea that species shared ancestors and hence were not created but evolved—but instead to explain how that could have happened.

What Darwin did was to put evolution together with a means of explaining it. In simplest terms, natural selection is what Darwin said it was in the Origin: the idea that, since “[m]ore individuals are born than can possibly survive,” something will inevitably “determine which individual shall live and which shall die.” In such circumstances, as he would later write in the Historical Sketch of the Progress of Opinion on the Origin of Species, “favourable variations would tend to be preserved, and unfavourable ones would be destroyed.” Or as Stephen Jay Gould has succinctly put it, natural selection is “the unconscious struggle among individual organisms to promote their own personal reproductive success.” The word unconscious is the key word here: the organisms don’t know why they have succeeded—nor do they need to understand. They just do—to paraphrase Yoda—or do not.

Why any of this should matter to the humanities, or to people looking to contest economic inequality, ought to be immediately apparent—and would be in any rational society. But since the American education system seems designed at the moment to obscure the point, I will now describe a scientific concept related to natural selection known as survivorship bias. Although that concept is used in every scientific discipline, it’s a particularly important one to Darwinian biology. There’s an argument, in fact, that survivorship bias is just a generalized version of natural selection, and thus that it simply is Darwinian biology.

That’s because the concept of “survivorship bias” describes how human beings are tempted to describe mindless processes as mindful ones. Here I will cite one of the concept’s most well-known contemporary advocates, a trader and professor of something called “risk engineering” at New York University named Nassim Nicholas Taleb—precisely because of his disciplinary distance both from biology and the humanities: his distance from both, as Bertolt Brecht might have described it, “exposes the device” by stripping the idea from its disciplinary contexts. As Taleb says, one example of survivorship bias is the tendency all human beings have to think that someone is “successful because they are good.” Survivorship bias, in short, is the sometimes-dangerous assumption that there’s a cause behind every success. But, as Darwin might have said, that ain’t necessarily so.

Consider for instance a hypothetical experiment Taleb constructs in his Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets, consisting of 10,000 money managers. The rules of this experiment are that “each one has a 50% probability of making $10,000 at the end of the year, and a 50% probability of losing $10,000.” If we should run this experiment five times—five runs through randomness—then at the end of those conjectural five years, by the laws of probability we can expect “313 managers who made money for five years in a row.” Is there anything especially clever about these few? No: their success has nothing to do with any quality each might possess. It’s simply due, as Taleb says, to “pure luck.” But these 313 will think of themselves as very fine fellows.
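Taleb's numbers are easy to check. The sketch below simulates the thought experiment directly; the coin-flip managers are his, while the code itself is my own illustration.

```python
import random

def surviving_managers(n_managers=10_000, n_years=5, seed=11):
    """Count managers who happen to make money every year, when each
    year is an independent 50/50 coin flip for each manager."""
    rng = random.Random(seed)
    survivors = sum(
        all(rng.random() < 0.5 for _ in range(n_years))
        for _ in range(n_managers)
    )
    expected = n_managers * 0.5 ** n_years
    print(f"expected by pure chance: {expected:.1f}")   # 312.5, Taleb's ~313
    print(f"observed in this run:    {survivors}")

surviving_managers()
```

Roughly three hundred flawless five-year records emerge from nothing but coin flips, no skill required, which is the whole force of the survivorship-bias point.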

Now, notice that, by substituting the word “zebra” for the words “money managers” and “10 offspring” for “$10,000,” Taleb has more or less described the situation of the Serengeti Plain—and, as early twentieth-century investigative reporter Ida Tarbell realized, the wilds of Cleveland, Ohio. Tarbell, in 1905’s “John D. Rockefeller: A Character Study,” actually says that by 1868, when Rockefeller was a young businessman on the make, he “must have seen clearly … that nothing but some advantage not given by nature or recognized by the laws of fair play in business could ever make him a dictator in the industry.” In other words, Rockefeller saw that if he merely allowed “nature,” as it were, to take its course, he stood a good chance of being one of the 9,000-odd failures, instead of the 300-odd success stories. Which is why he went forward with the various shady schemes Tarbell goes on to document in her studies of the man and his company. (Details that are nearly unbelievable—unless you’re familiar with the details of the 2008 housing bubble.) The Christian gentleman John D. Rockefeller, in other words, hardly believed in the “survival of the fittest.”

It should by now be clear just how necessary the concept of survivorship bias—and thus Darwin’s notion of natural selection—is to any discussion of economic inequality. Max Weber, at least, the great founder of sociology, understood it—that’s why, in The Protestant Ethic and the Spirit of Capitalism, Weber famously described the Calvinist doctrine of predestination, in which “God’s grace is, since His decrees cannot change, as impossible for those to whom He has granted it to lose as it is unattainable for those to whom He has denied it.” As Weber knew, if the Chosen of God are known by their worldly success, then there is no room for debate: the successful simply deserve their success, in a fashion not dissimilar to the notion of the divine right of kings.

If, however, there’s a possibility that worldly success is due to chance, which is to say luck, then the road is open to argue about the outcomes of the economic system. Since John D. Rockefeller, at least according to Tarbell, certainly acted as though worldly success were due far more to “chance” than to the fair outcome of a square game, one could, I suppose, argue that he was a believer in Darwinism, as the “social Darwinist” camp would have it. But that seems to stretch the point.

Still, what has this to do with the humanities? The answer is that you could do worse than define the humanities by saying they are the disciplines of the university that ignore survivorship bias—although, if so, that might mean that “business,” at least as Taleb describes it, ought to be classified alongside comparative literature in the course catalogue.

Examine economist Gary Smith’s Standard Deviations: Flawed Assumptions, Tortured Data, And Other Ways To Lie With Statistics. As Michael Shermer of Pomona College notes in a review of Smith’s book, Smith shows how business books like Jim Collins’ Good to Great “culled 11 companies out of 1,435 whose stock beat the market average over a 40-year time span and then searched for shared characteristics among them,” or how the 1982 best-seller In Search of Excellence “identified eight common attributes of 43 ‘excellent’ companies.” As Taleb says in his The Black Swan: The Impact of the Highly Improbable, such studies “take a population of hotshots, those with big titles and big jobs, and study their attributes”—they “look at what those big guns have in common: courage, risk taking, optimism and so on, and infer that these traits, most notably risk taking, help you to become successful.” But as Taleb observes, the “graveyard of failed persons [or companies] will be full of people who shared the following traits: courage, risk taking, optimism, et cetera.” The problem with “studies” like these is that they begin with Taleb’s 313, instead of the 10,000.

Another way to describe “survivorship bias,” then, is to say that any real investigation into anything must consider what Taleb calls the “silent evidence”: in the case of the 10,000 money managers, it’s necessary to think of the 9,000-odd managers who started the game and failed, and not just the 300-odd managers who succeeded. Studies that omit the silent evidence will surely always find “commonalities” among the “winners,” just as Taleb’s 313 will surely always discover some common trait among themselves—and in the same way that a psychic can always “miraculously” know that somebody just died.
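To make the point concrete, here is a hypothetical simulation in the same spirit (my sketch, not Taleb’s or Smith’s): every manager is assigned traits at random, survival is decided by coin flips, and yet a winners-only study would still report that most survivors were courageous, risk-taking optimists.

```python
# A hypothetical illustration of the "silent evidence" (my sketch, not
# Taleb's or Smith's): traits are handed out at random and have no effect
# on survival, which is decided by five fair coin flips.
import random

rng = random.Random(1)
TRAITS = ["courage", "risk taking", "optimism"]

population = [
    {
        # Each trait shows up in roughly 70% of everyone, winner or loser alike.
        "traits": {trait: rng.random() < 0.7 for trait in TRAITS},
        # Survival is pure luck: five winning coin flips in a row.
        "survived": all(rng.random() < 0.5 for _ in range(5)),
    }
    for _ in range(10_000)
]

winners = [p for p in population if p["survived"]]
graveyard = [p for p in population if not p["survived"]]

for trait in TRAITS:
    in_winners = sum(p["traits"][trait] for p in winners) / len(winners)
    in_graveyard = sum(p["traits"][trait] for p in graveyard) / len(graveyard)
    print(f"{trait}: {in_winners:.0%} of winners, {in_graveyard:.0%} of the graveyard")
```

A study that sampled only the winners would announce that roughly seventy percent of them were risk takers, which is true, and useless: the graveyard was full of risk takers at about the same rate.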

Yet why should the intellectual shallowness of business writers matter to scholars in the humanities, who write not for popular consumption but for peer review? Well, because as Taleb points out, the threat posed by survivorship bias is not particular to shoddy studies and shoddy scholars; it is endemic to entire species of writing. Take for instance Shermer’s discussion of Walter Isaacson’s 2011 biography of Apple Computer’s Steve Jobs … which I’d go into if it were necessary.

But it isn’t, according to Taleb: the “entire notion of biography,” Taleb says in The Black Swan, “is grounded in the arbitrary ascription of a causal relation between specified traits and subsequent events.” Biography by definition takes a number of already-successful entities and then tries to explain their success, instead of starting with equally unknown entities and watching them either succeed or fail. Nobody finds Beethoven before birth, and even Jesus Christ didn’t pick up disciples before adulthood. Biographies, then, might be entertaining, but they can’t possibly have any real intellectual substance. Biographies could only really be valuable if their authors predicted a future success—and nobody could possibly write a predictive biography. Biography, then, simply is an exercise in survivorship bias.

And if biography, then what about history? About the only historians who even discuss survivorship bias are those who write what’s known as “counterfactual” history. A genre largely kicked off by journalist MacKinlay Kantor’s fictitious 1960 speculation, If the South Had Won the Civil War, it’s been defined by former Regius Professor of History at Cambridge University Richard J. Evans as “alternative versions of the past in which one alteration in the timeline leads to a different outcome from the one we know actually occurred.” Or as David Frum, thinking in The Atlantic about what might have happened had the United States not entered World War I in 1917, says about his enterprise: “Like George Bailey in It’s a Wonderful Life, I contemplate these might-have-beens to gain a better appreciation for what actually happened.” In statements like these, historians confront the fact that their discipline is inevitably subject to the problem of survivorship bias.

Maybe that’s why counterfactual history is also a genre with a poor reputation among historians: Evans himself has condemned the genre, in The Guardian, by writing that it “threatens to overwhelm our perceptions of what really happened in the past.” “The problem with counterfactuals,” Evans says, “is that they almost always treat individual human actors … as completely unfettered,” when in fact historical actors are nearly always constrained by larger forces. FDR could, hypothetically, have called for war in 1939—it’s just that he probably wouldn’t have been elected in 1940, and someone else would have been in office on that Sunday in Oahu. Which, sure, is true, and responsible historians have always, as Evans says, tried “to balance out the elements of chance on the one hand, and larger historical forces (economic, cultural, social, international) on the other, and come to some kind of explanation that makes sense.” That, to be sure, is more or less the historian’s job. But the man on the wire doesn’t like to be reminded of the absence of a net either.

The threat posed by survivorship bias extends even into genres that might appear immune to it: surely the study of literature, which isn’t about “reality” in any strict sense, is safe from the acid bath of survivorship bias. But look at Taleb’s example of how a consideration of survivorship bias affects how we think about literature, in the form of a discussion of the reputation of the famous nineteenth-century French novelist Honoré de Balzac.

Let’s say, Taleb proposes, someone asks you why Balzac deserves to be preserved as a great writer, and in reply “you attribute the success of the nineteenth-century novelist … to his superior ‘realism,’ ‘insights,’ ‘sensitivity,’ ‘treatment of characters,’ ‘ability to keep the reader riveted,’ and so on.” As Taleb points out, those characteristics only work as a justification for preserving Balzac “if, and only if, those who lack what we call talent also lack these qualities.” If, on the other hand, there are actually “dozens of comparable literary masterpieces that happened to perish” merely by chance, then “your idol Balzac was just the beneficiary of disproportionate luck compared to his peers.” Without knowing who Balzac’s competitors were, in other words, we are not in a position to know with certainty whether Balzac’s success is due to something internal to his work, or whether his survival is simply the result of dumb luck. So even literature is threatened by survivorship bias.

If you wanted to define the humanities, you could do worse than to say they are the disciplines that pay little to no attention to survivorship bias. Which, one might say, is fine: “In my father’s house are many mansions,” to cite John 14:2. But the trouble may be that, as Taleb and Smith point out—and the examples could be multiplied—the work of the humanities shares the same “scholarly” standards as that of many “business writers,” so it does not really matter how “radical,” or even merely reformist, their claims are. The similarities of method may simply overwhelm the message.

In that sense, then, despite the efforts of many academics to center a leftist politics on the classrooms of the English department rather than the scientific lab, such a project may simply not be possible: the humanities will always be centered on fending off survivorship bias in the guise of biology’s threat to “reduce the complexities of human culture to patterns in animal behavior,” as W.J.T. Mitchell says—and in so doing, the disciplines of culture will inevitably end up arguing, as Walter Benn Michaels says, “that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” The humanities are antagonistic to biology because the central concept of Darwinian biology, natural selection, is a version of the principle of survivorship bias, while survivorship bias poses a real and constant intellectual threat to the humanities—and finally, to complete the circle, survivorship bias is the only argument against allowing the rich to run the world to their liking. It may then be no wonder that, as the tide has gone out on the American dream, the American academy has essentially responded by saying “let’s talk about something else.” To the gentlemen and ladies of the American disciplines of the humanities, the wealthy are just the adversary.

A Part of the Main

… every man is a peece of the Continent, a part of the maine
—John Donne, Devotions Upon Emergent Occasions

The “natural selection pressures that drive evolution can flip-flop faster than previously thought,” reported the Kansas City Star, six years ago, on a study of Bahamian lizards. The details are, as always, not nearly as interesting as the newspaper writers make them appear: they involve percentages of as little as two and three percent. But the scientists found them significant, and the larger point remains: Darwin “thought that evolution must occur slowly and gradually,” but nature as actually observed doesn’t demonstrate that. Which is to say that change, when it comes, can come suddenly and unexpectedly—something that may hold as well for sports, say, as for lizards. Like golf, perhaps.

If I were to tell you, for instance, that while seven percent of all white people earning less than $50,000 a year participated in a particular something in 2009, nineteen percent of all white people earning more than $125,000 a year did, one plausible suspect for the role of that particular something might be the Republican Party. After all, Mitt Romney’s strategy to win the presidency this November involved capturing 61 percent of the white vote, according to an unnamed source quoted in the National Journal this past August. But that guess would be wrong: the “particular something” is a round of golf.

Surely it takes no great seer to tell us that if one partner in this twosome is in trouble, the other ought to be looking for a lawyer. Golf has found its numbers to be relatively static: back in 2008, the New York Times ran a story on the “disappearance of golfers.” One expert quoted in the story said that while the “man on the street will tell you that golf is booming because he sees Tiger Woods on TV … the reality is, while we haven’t exactly tanked, the numbers have been disappointing for some time.” Golfers are overwhelmingly whiter and wealthier than their fellow Americans, just as Republican voters are, which is to say that golf, like the Republican Party, needs to ask whether being whiter and wealthier (and, though I haven’t mentioned it, older) is a necessary—or a contingent—part of its identity.

The answer to that question will likely determine the survival of each. “If demographics is destiny, the Republican party has a rendezvous with irrelevance” coming, as one journalist has put the point—and golf, one assumes, faces much the same issue. Still, it seems likely that golf has at least as good a chance of survival as the Republican Party, if not a better one: golf was already long in existence when the party was born.

I’m actually being facetious there—obviously, anything so important as golf will outlive a mere political party, that transient accumulation of various interests. The question, then, isn’t so much the end as the means: the path whereby golf might succeed. And there, it may be, lies a tale.

The roots of that tale might lie with the work of a doctor named Ann McKee. She works at the Veterans Hospital in Bedford, Massachusetts, and over the past decade it has become part of her job to examine the brains of dead football players and other people who may have been exposed to repeated concussions over the course of their lives. She has become expert in diagnosing—after death, which is the only time it can be diagnosed—a condition known as chronic traumatic encephalopathy, or C.T.E. What she’s found, however, is that there are more dangerous things than concussions.

What Dr. McKee’s work has shown, that is, is that while concussions are horrible injuries, it’s really the repeated, low-level jarring of the brain that an activity like football produces that seems to cause C.T.E., a disease that mimics Alzheimer’s in many ways, including a final descent into dementia. And what it’s meant, at least for the doctor, is that she’s found an answer to a question Malcolm Gladwell of the New Yorker put to her: if her son “had a chance to join the NFL,” what would she advise him? And here is what the doctor said: “Don’t. Not if you want to have a life after football.”

“And therefore never send to know for whom the bell tolls,” wrote John Donne four centuries ago: “It tolls for thee.” Dr. McKee’s reply to Gladwell’s question may be just such a church bell tolling in the night: at the least, it is the link between the NFL and those lizards sunning themselves in the Bahamas. For when the mothers of America begin to hear it, and what it might mean for their sons (and possibly their daughters), it may provoke something of a sea change in the behavior of Americans. Like the change in the lizards, it may come suddenly, and not gradually. One day, there just won’t be anybody at the stadium any more.

If that does happen, it seems absurd to think that Americans will abandon sport entirely. Baseball, one expects, would see a surge in popularity that would overtake even the wave it rode during the steroid era. Basketball, obviously, would become even more popular than it already is. And, perhaps, just a bit of that interest would spill over golf’s way. Golf, in other words, unlike the Republican Party, may be on the cusp of a new boom. What seems improbable, in short, can quickly come to seem inevitable.

And so, just as entire societies can, at times, be swept by vast tides that completely overcome what came before, so too can obscure blog posts in the wilderness called the Internet be swung suddenly away from what might appear to be their ostensible subjects. Which might be of some comfort to those who observe completely evitable tragedies like the one last week in Connecticut, and wonder if the United States will ever decide to do something about its ridiculous gun laws.