Water to the Sea

Yet lives our pilot still. Is’t meet that he
Should leave the helm and like a fearful lad
With tearful eyes add water to the sea
And give more strength to that which hath too much,
Whiles, in his moan, the ship splits on the rock,
Which industry and courage might have saved?
—William Shakespeare. Henry VI, Part III. Act V, scene iv.

“Those who make many species are the ‘splitters,’ and those who make few are the ‘lumpers,’” remarked Charles Darwin in an 1857 letter to botanist J.D. Hooker; the title of University of Chicago professor Kenneth Warren’s most recent book, What Was African-American Literature?, announces him as a “lumper.” The chief argument of Warren’s book is that the claim that something called “African-American literature” is “different from the rest of American lit[erature]”—a claim that many of Warren’s colleagues, perhaps no one more so than Harvard’s Henry Louis Gates, Jr., have based their careers upon—is, in reality, a claim that, historically, many writers with large amounts of melanin would have rejected. Take the fact, Warren says, that “literary societies … among free blacks in the antebellum north were not workshops for the production of a distinct black literature but salons for producing works of literary distinction”: these were not people looking to split off—or secede—from the state of literature. Warren’s work is, thereby, aimed against those who, like so many Lears, have divided and subdivided literature by attaching so many different adjectives to literature’s noun—an attack Warren says he makes because “a literature insisting that the problem of the 21st century remains the problem of the color line paradoxically obscures the economic and political problems facing many black Americans, unless those problems can be attributed to racial discrimination.” What Warren sees, I think, is that far too much attention is being paid to the adjective in “African-American literature”—though what he may not see is that the real issue concerns the noun.

The noun being, of course, the word “literature”: Warren’s account worries the “African-American” part of “African-American literature” instead of the “literature” part. Specifically, in Warren’s view what links the adjective to the noun—or “what made African American literature a literature”—was the regime of “constitutionally-sanctioned state-enforced segregation” known as Jim Crow, which made “black literary achievement … count, almost automatically, as an effort on behalf of the ‘race’ as a whole.” Without that institutional circumstance there are writers who are black—but no “black writers.” To Warren, it’s the distinct social structure of Jim Crow, hardening in the 1890s, that creates “black literature,” instead of merely examples of writing produced by people whose skin is darker-colored than that of other writers.

Warren’s argument thereby takes the familiar form of the typical “social construction” argument, as outlined by Ian Hacking in his book, The Social Construction of What? Such arguments begin, Hacking says, when “X is taken for granted,” and “appears to be inevitable”; in the present moment, African-American literature can certainly be said—for some people—to appear to be inevitable: Harvard’s Gates, for instance, has long claimed that “calls for the creation of a [specifically “black”] tradition occurred long before the Jim Crow era.” But it’s just at such moments, Hacking says, that someone will observe that in fact the said X is “the contingent product of the social world.” Which is just what Warren does.

Warren points out that although those who argue for an ahistorical vision of an African-American literature claim that all black writers were attempting to produce a specifically black literature, the historical evidence points, merely, to an attempt to produce literature: i.e., a member of the noun class without a modifying adjective. At least, until the advent of the Jim Crow system at the end of the nineteenth century: it’s only after that time, Warren says, that “literary work by black writers came to be discussed in terms of how well it served (or failed to serve) as an instrument in the fight against Jim Crow.” In the familiar terms of the hallowed social-constructionist argument, Warren is claiming that the adjective is added to the noun later, as a result of specific social forces.

Warren’s is an argument, of course, with a number of detractors, and not simply Gates. In The Postethnic Literary: Reading Paratexts and Transpositions Around 2000, Florian Sedlmeier charged Warren with reducing “African American identity to a legal policy category,” and with offering an account that “relegates the functions of authorship and literature to the economic subsystem.” It’s a familiar version of the “reductionist” charge often leveled by “postmoderns” against Marxists—an accusation that is, these days, tiresome at best.

More creatively, in a symposium of responses to Warren in the Los Angeles Review of Books, Erica Edwards attempted to one-up Warren by saying that Warren fails to recognize that perhaps the true “invention” of African-American literature was not during the Jim Crow era of legalized segregation, but instead “with the post-Jim Crow creation of black literature classrooms.” Whereas Gates, in short, wishes to locate the origin of African-American literature in Africa prior to (or concurrently with) slavery itself, and Warren instead locates it in the 1890s during the invention of Jim Crow, Edwards wants to locate it in the 1970s, when African-American professors began to construct their own classes and syllabi. Edwards’ argument, at the least, has a certain empirical force: the term “African-American” itself is a product of the civil rights movement and afterwards; that is, the era of the end of Jim Crow, not its beginnings.

Edwards’ argument thereby leads nearly seamlessly into Aldon Lynn Nielsen’s objections, published as part of the same symposium. Nielsen begins by observing that Warren’s claims are not particularly new: Thomas Jefferson, he notes, “held that while Phillis Wheatley [the eighteenth-century black poet] wrote poems, she did not write literature,” while George Schuyler, the black novelist, wrote for The Nation in 1926 that “there was not and never had been an African American literature”—for the perhaps-surprising reason that there was no such thing as an African-American. Schuyler instead felt that the “Negro”—his term—“was no more than a ‘lampblacked Anglo-Saxon.’” In that sense, Schuyler’s argument was even more committed to the notion of “social construction” than Warren is: whereas Warren questions the timelessness of the category of a particular sort of literature, Schuyler questioned the existence of a particular category of person. Warren, that is, merely questions why “African-American literature” should be distinguished—or split from—“American literature”; Schuyler—an even more incorrigible lumper than Warren—questioned why “African-Americans” ought to be distinguished from “Americans.”

Yet, if even the term “African-American,” considered as a noun itself rather than as the adjective it is in the phrase “African-American literature,” can be destabilized, then surely that ought to raise the question, for these sharp-minded intellectuals, of the status of the noun “literature.” For it is precisely the catechism of many today that it is the “liberating” features of literature—that is, exactly, literature’s supposed capacity to produce the sort of argument delineated and catalogued by Hacking, the sort of argument in which it is argued that “X need not have existed”—that will produce, and has produced, whatever “social progress” we currently observe about the world.

That is the idea that “social progress” is the product of an increasing awareness of Nietzsche’s description of language as a “mobile army of metaphors, metonyms, and anthropomorphisms”—or, to use the late American philosopher Richard Rorty’s terminology, to recognize that “social progress” is a matter of redescription by what he called, following literary critic Harold Bloom, “strong poets.” Some version of such a theory is held by what Rorty, following University of Chicago professor Allan Bloom, called “‘the Nietzscheanized left’”: one that takes seriously the late Belgian literature professor Paul de Man’s odd suggestion that “‘one can approach … the problems of politics only on the basis of critical-linguistic analysis,’” or the late French historian Michel Foucault’s insistence that he would not propose a positive program, because “‘to imagine another system is to extend our participation in the present system.’” But such sentiments have hardly been limited to European scholars.

In America, for instance, former Duke University professor of literature Jane Tompkins echoed Foucault’s position in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History.” There, Tompkins approvingly cited novelist Harriet Beecher Stowe’s belief, as expressed in Uncle Tom’s Cabin, that the “political and economic measures that constitute effective action for us, she regards as superficial, mere extensions of the worldly policies that produced the slave system in the first place.” In the view of people like Tompkins, apparently, “political measures” will somehow sprout out of the ground of their own accord—or at least, by means of the transformative redescriptive powers of “literature.”

Yet if literature is simply a matter of redescription then it must be possible to redescribe “literature” itself: which in this paragraph will be in terms of a growing scientific “literature” (!) that, since the 1930s, has examined the differences between animals and human beings in terms of what are known as “probability guessing experiment[s].” In the classic example of this research—as cited in a 2000 paper called “The Left Hemisphere’s Role in Hypothesis Formation”—if a light is flashed with a ratio of 70% red light to 30% green, animals will tend always to guess red, while human beings will attempt to anticipate which light will be flashed next: in other words, animals will “tend to maximize or always choose the option that has occurred most frequently in the past”—whereas human beings will “tend to match the frequency of previous occurrences in their guesses.” Animals will simply always guess the same answer, while human beings will attempt to divine the pattern: that is, they will make their guesses based on the assumption that the previous series of flashes were meaningful. If the previous three flashes were “red, red, green,” a human being will tend to guess that the next flash will be red, whereas an animal will simply always guess red.

That in turn implies that, since in this specific example there is in fact no pattern and merely a probabilistic ratio of green to red, animals will always outperform human beings in this sort of test: as the authors of the paper write, “choosing the most frequent option all of the time, yields more correct guesses than matching as long as p ≠ 0.5.” Or, as they also note, “if the red light occurs with a frequency of 70% and a green light occurs with a frequency of 30%, overall accuracy will be highest if the subject predicts red all the time.” It’s true, in other words, that attempting to match a pattern will result in being correct 100% of the time—if the pattern is successfully matched. That result has, arguably, consequences for the liberationist claims of social constructionist arguments in general and literature in particular.
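The gap between the two strategies can be checked with a quick simulation. The sketch below is mine, not the paper’s; the 70/30 split is the illustrative ratio used above, and the trial count is an arbitrary choice:

```python
import random

random.seed(0)

P_RED = 0.7      # probability of a red flash: no pattern, pure chance
N = 100_000      # number of flashes

flashes = ["red" if random.random() < P_RED else "green" for _ in range(N)]

# "Animal" strategy (maximizing): always guess the most frequent option.
maximize_correct = sum(f == "red" for f in flashes)

# "Human" strategy (frequency matching): guess red about 70% of the time,
# mirroring the frequency of past occurrences.
matching_correct = sum(
    f == ("red" if random.random() < P_RED else "green") for f in flashes
)

print(f"maximizing: {maximize_correct / N:.2f}")  # ≈ 0.70
print(f"matching:   {matching_correct / N:.2f}")  # ≈ 0.7² + 0.3² = 0.58
```

Maximizing wins because, absent a real pattern, a matched guess is correct only when two independent 70/30 draws happen to agree.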

I trust that, without much in the way of detail—which I think could be elucidated at tiresome length—it can be stipulated that, more or less, the entire liberatory project of “literature” described above, as held by such luminaries as Foucault or Tompkins, can be said to be an attempt at elaborating rules for “pattern recognition.” Hence, it’s possible to understand how training in literature might be helpful towards fighting discrimination, which after all is obviously about constructing patterns: racists are not racist towards merely 65% of all black people, or racist only 37% of the time. Racism—like other forms of discrimination—is not probabilistic but deterministic: it consists of rules used by discriminators that are directed at everyone within the class. (It’s true that the phenomenon of “passing” raises questions about classes, but the whole point of “passing” is that individual discriminators are unaware of the class’ “true” boundaries.) So it’s easy to see how pattern recognition might be a useful skill with which to combat racial or other forms of discrimination.

Matching a pattern, however, suffers from one difficulty: it requires the existence of a pattern to be matched. Yet, in the example discussed in “The Left Hemisphere’s Role in Hypothesis Formation”—as in everything influenced by probability—there is no pattern: there is merely a larger chance of the light being red rather than green in each instance. Attempting, then, to match a pattern in a situation ruled instead by probability is not only unhelpful but positively harmful: because there is no pattern, “guessing” cannot perform as well as simply maintaining the same choice every time. (Which in this case would at least result in being correct 70% of the time.) In probabilistic situations, in other words, where there is merely a certain probability of a given result rather than a certain pattern, both empirical evidence and mathematics itself demonstrate that the animal procedure of always guessing the same will be more successful than the human attempt at pattern recognition.

Hence, it follows that although training in recognizing patterns—the basis of schooling in literature, it might be said—might be valuable in combating racism, such training will not be helpful in facing other sorts of problems: as the scientific literature demonstrates, pattern recognition as a strategy only works if there is a pattern. That in turn means that literary training can only be useful in a deterministic, and not probabilistic, world—and therefore the project of “literature,” so-called, can only be “liberatory” in the sense meant by its partisans if the obstacles from which human beings need liberation are pattern-based. And that’s a conclusion, it seems to me, that is questionable at best.

Take, for example, the matter of American health care. Unlike every other industrialized nation, the United States does not have a single, government-run health care system, despite the fact that—as Malcolm Gladwell has noted, and as the American labor movement knew as early as the 1940s—“the safest and most efficient way to provide insurance against ill health or old age [is] to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, insurance works best by lumping, not splitting. The reason why may be the same as the reason that, as the authors of “The Left Hemisphere’s Role in Hypothesis Formation” point out, “humans choose a less optimal strategy than rats” when it comes to probabilistic situations. Contrary to the theories of those in the humanities, in other words, the reality is that human beings in general—and Americans when it comes to health care—appear to have a basic unfamiliarity with the facts of probability.

One sign of that ignorance is, after all, the growth of casino gambling in the United States even as health care remains a hodgepodge of differing systems—despite the fact that both insurance and casinos run on precisely the same principle. As statistician and trader Nassim Taleb has pointed out, casinos “never (if they do things right) lose money”—so long as they are not run by Donald Trump—because they “simply do not let one gambler make a massive bet” and instead prefer “to have plenty of gamblers make a series of bets of limited size.” In other words, it is not possible for some high roller to bet a Las Vegas casino, say, its entire worth on a single hand of blackjack, or any other game; casinos simply limit the stakes to something small enough that the continued existence of the business is not at risk on any one particular event, and then make sure that there are enough bets being made to allow the laws of probability in every game (which are tilted toward the casino) to ensure the continued health of the business. Insurance, as Gladwell observed above, works precisely the same way: the more people paying premiums—and the more widely dispersed they are—the less likely it is that any one catastrophic event can wipe out the insurance fund. Both insurance and casinos are lumpers, not splitters: that, after all, is precisely why all other industrialized nations have put their health care systems on a national basis rather than maintaining the various subsystems that Americans—apparently inveterate splitters—still have.
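Taleb’s point about bet sizing can be illustrated with a toy model. The 51% house edge, the unit stake, and the bet counts below are assumptions of mine for the sketch, not figures from any actual casino:

```python
import random

random.seed(1)

def casino_profit(n_bets, stake=1, p_house=0.51):
    """Casino's net result after n_bets even-money wagers it wins 51% of the time."""
    return sum(stake if random.random() < p_house else -stake
               for _ in range(n_bets))

# How often does the casino end a "night" in the red, as a function of
# how many fixed-size bets it has accepted?
losing_freq = {}
for n in (1, 100, 10_000):
    trials = [casino_profit(n) for _ in range(1000)]
    losing_freq[n] = sum(t < 0 for t in trials) / len(trials)
    print(n, losing_freq[n])
```

One bet is nearly a coin flip; ten thousand small ones make a losing night rare. That is lumping: the same tilt toward the house, spread over enough wagers for the Law of Large Numbers to do its work—and it is exactly how a large, dispersed premium pool protects an insurance fund.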

Health care, of course, is but one of the many issues of American life that, although influenced by, ultimately have little to do with, racial or other kinds of discrimination: what matters about health care, in other words, is that too few Americans are getting it, not merely that too few African-Americans are. The same is true, for instance, about incarceration: although such works as Michelle Alexander’s The New Jim Crow have argued that the fantastically high rate of incarceration in the United States constitutes a new “racial caste system,” University of Pennsylvania professor of political science Marie Gottschalk has pointed out that “[e]ven if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” The problem with American prisons, in other words, is that there are too many Americans in them, not (just) too many African-Americans—or any other sort of American.

Viewing politics through a literary lens, in sum—as a matter of flashes of insight and redescription, instantiated by Wittgenstein’s duck-rabbit figure and so on—ultimately has costs: costs that have been witnessed again and again in recent American history, from the War on Drugs to the War on Terror. As Warren recognizes, viewing such issues as health care or prisons through a literary, or more specifically racial, lens is ultimately an attempt to fit a square peg into a round hole—or, perhaps even more appositely, to bring a knife to a gun fight. Warren, in short, may as well have cited UCLA philosophy professor Abraham Kaplan’s observation, sometimes called Kaplan’s Law of the Instrument: “Give a boy a hammer and everything he meets has to be pounded.” (Or, as Kaplan put the point more delicately, it ought not to be surprising “to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”) Much of the American “left,” in other words, views all problems as matters of redescription and so on—a belief not far from common American exhortations to “think positively” and the like. Certainly, America is far from the post-racial utopia some would like it to be. But curing the disease is not—contrary to the beliefs of many Americans today—the same as diagnosing it.

Like it—or lump it.

Comedy Bang Bang

In other words, the longer a game of chance continues the larger are the spells and runs of luck in themselves, but the less their relative proportions to the whole amounts involved.
—John Venn. The Logic of Chance. (1888). 


“A probability that is very small for a single operation,” reads the RAND Corporation paper mentioned in journalist Sharon McGrayne’s The Theory That Would Not Die, “say one in a million, can become significant if this operation will occur 10,000 times in the next five years.” The paper, “On the Risk of an Accidental or Unauthorized Nuclear Detonation,” was just what it says on the label: a description of the chances of an unplanned atomic explosion. Previously, American military planners had assumed “that an accident involving an H-bomb could never occur,” but the insight of this paper was that overall risk changes depending upon volume—an insight that ultimately depended upon a discovery first described by mathematician Jacob Bernoulli in 1713. Now called the “Law of Large Numbers,” Bernoulli’s thought was that “it is not enough to take one or another observation … but that a large number of them are needed”—it’s what allows us to conclude, Bernoulli wrote, that “someone who intends to throw at once three sixes with three dice, should be considered reckless even if winning by chance.” Yet while recognition of the law—which predicted that even low-probability events become likely if there are many of them—considerably changed how the United States handled nuclear weapons, it has had essentially no impact on how the United States handles certain conventional weapons: the estimated 300 million guns held by its citizens. One possible reason why, suggests the work of Vox.com founder Ezra Klein, is that arguments advanced by departments of literature, women’s studies, African-American studies and other such academic “disciplines” more or less openly collude with the National Rifle Association to prevent sensible gun control laws.
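The RAND arithmetic, and Bernoulli’s dice example, are each a one-line computation; the sketch below simply reproduces the figures quoted above:

```python
# RAND's illustration: a one-in-a-million chance per operation,
# repeated 10,000 times.
p_single = 1e-6
n_ops = 10_000
p_at_least_one = 1 - (1 - p_single) ** n_ops
print(f"{p_at_least_one:.6f}")  # ≈ 0.009950: small, but no longer negligible

# Bernoulli's reckless gambler: three sixes on one throw of three dice.
p_three_sixes = (1 / 6) ** 3
print(f"1 in {1 / p_three_sixes:.0f}")  # 1 in 216
```

A roughly one-percent chance of an accidental detonation over five years is exactly the kind of risk that looks invisible at the scale of a single operation.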

The inaugural “issue” of Vox contained Klein’s article “How Politics Makes Us Stupid”—an article that asked the question, “why isn’t good evidence more effective in resolving political debates?” According to the consensus wisdom, Klein says, “many of our most bitter political battles are mere misunderstandings” caused by a lack of information—in this view, all that’s required to resolve disputes is more and better data. But, Klein also writes, current research shows that “the more information partisans get, the deeper their disagreements become”—because there are some disagreements “where people don’t want to find the right answer so much as they want to win the argument.” In other words, while some disagreements can be resolved by considering new evidence—as the Strategic Air Command changed how it handled nuclear weapons in light of a statistician’s recall of Bernoulli’s work—some disagreements, like gun control, cannot.

The work Klein cites was conducted by Yale Law School professor Daniel Kahan, along with several co-authors, and it began—Klein says—by collecting 1,000 Americans and surveying both their political views and their mathematical skills. At that point, Kahan’s group gave participants a puzzle, which asked them to judge an experiment designed to show whether a new skin cream made a skin condition better or worse, based on the data presented. The puzzle, however, was jiggered: although, in raw numbers, many more people got better using the skin cream than got worse, the proportion who improved was actually higher among those who did not use it. In other words, attending merely to the raw numbers made the data appear to indicate one thing, while a calculation of percentages showed something else. As it turns out, most people relied on the raw numbers—and were wrong; meanwhile, people with higher mathematical skill were able to work through the problem to the right answer.
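Since the study’s actual table isn’t reproduced here, the counts below are hypothetical, chosen only to recreate the trap: raw counts that favor the cream while the rates favor the group that skipped it:

```python
# Hypothetical 2x2 outcome table for the skin-cream puzzle.
cream_better, cream_worse = 223, 75
no_cream_better, no_cream_worse = 107, 21

# Raw counts seem to vindicate the cream: far more users improved than worsened.
print(cream_better > cream_worse)  # True

# But the improvement *rates* point the other way.
cream_rate = cream_better / (cream_better + cream_worse)
no_cream_rate = no_cream_better / (no_cream_better + no_cream_worse)
print(f"improved with cream:    {cream_rate:.1%}")     # 74.8%
print(f"improved without cream: {no_cream_rate:.1%}")  # 83.6%
```

Stopping at the first comparison is the error most participants made; carrying the division through reverses the verdict.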

Interestingly, however, the results of this study did not demonstrate to Kahan that perhaps it is necessary to increase scientific and mathematical education. Instead, Kahan argues that the attempt by “economists and other empirical social scientists” to shear the “emotional trappings” from the debate about gun control in order to make it “a straightforward question of fact: do guns make society less safe or more” is misguided. Rather, because guns are “not just ‘weapons or pieces of sporting equipment,’” but “are also symbols,” the proper terrain to contest is not the grounds of empirical fact, but the symbolic: “academics and others who want to help resolve the gun controversy should dedicate themselves to identifying with as much precision as possible the cultural visions that animate this dispute.” In other words, what ought to structure this debate is not science, but culture.

To many on what’s known as the “cultural left,” of course, this must be welcome news: it amounts to a recognition of “academic” disciplines like “cultural studies” and the like that have argued for decades that cultural meanings trump scientific understanding. As Canadian philosopher Ian Hacking put it some years ago in The Social Construction of What?, a great deal of work in those fields of “study” has made claims that approach saying “that scientific results, even in fundamental physics, are social constructs.” Yet though the point has, as I can attest from personal experience, become virtual commonsense in departments of the humanities, there are several means of understanding the phrase “social construct.”

As English professor Michael Bérubé has remarked, much of that work can be described as “following the argument Heidegger develops at the end of the first section of Being and Time,” where the German philosopher (and member of the Nazi Party) argued that “we could also say that the discovery of Neptune in 1846 could plausibly be described, from a strictly human vantage point, as the ‘invention’ of Neptune.” In more general terms, New York University professor Andrew Ross—the same Ross later burned in what’s become known as the “Sokal Affair”—described one fashion in which such an argument could go: by tracing how a “scientific theory was advanced through power, authority, persuasion and responsiveness to commercial interests.” Of course, as a journalistic piece by Joy Pullmann in the conservative Federalist recently described, such views have filtered throughout the academy and have led at least one doctoral student to claim, in her dissertation at the education department of the University of North Dakota, that the “language used in the syllabi” of eight science classes she reviewed

reflects institutionalized STEM teaching practices and views about knowledge that are inherently discriminatory to women and minorities by promoting a view of knowledge as static and unchanging, a view of teaching that promotes the idea of a passive student, and by promoting a chilly climate that marginalizes women.

The language of this description, interestingly, equivocates between the claim that some, or most, scientists are discriminatory (a relatively safe claim) and the notion that there is something inherent about science itself (the radical claim)—which itself indicates something of the “cultural” view. Yet although, as in this latter example, claims regarding the status of science are often advanced on the grounds of discrimination, it seems to escape those making such claims just what sort of ground is conceded politically by taking science as one’s adversary.

For example, here is the problem with Kahan’s argument over gun control: by agreeing to contest on cultural grounds, pro-gun-control advocates would be conceding their very strongest argument: the Law of Large Numbers is not an incidental feature of science, but one of its very foundations. (It could perhaps even be the foundation, because science proceeds on the basis of replicability.) Kahan’s recommendation, in other words, might appear not so much a change in tactics as an outright surrender: it’s only in the light of the Law of Large Numbers that the pro-gun-control argument is even conceivable. Hence, it is very difficult to understand how an argument can be won if one’s best weapon is, so to speak, controlled. In effect, conceding the argument made in the RAND paper quoted above is more or less to give up on the very idea of reducing the number of firearms, so that American streets could perhaps be safer—and American lives protected.

Another, even larger-scale, problem with taking the so-called “cultural turn” Kahan advises is that abandoning the tools of the Law of Large Numbers does not concede ground on the gun control issue alone. It also does so on a host of other issues—perhaps foremost among them matters of political representation itself. For example, it prevents an examination of the Electoral College from a scientific, mathematically knowledgeable point of view—as I attempted to do in my piece, “Size Matters,” from last month. It may help to explain what Congressman Steve Israel of New York meant when journalist David Daley, author of a recent book on gerrymandering, interviewed him on the practical effects of gerrymandering in the House of Representatives (a subject that requires strong mathematical knowledge to understand): “‘The Republicans have always been better than Democrats at playing the long game.’” And there are other issues besides—all of which is to say that, by attacking science itself, the “cultural left” may literally be preventing government from interceding on behalf of the very people for whom it claims to speak.

Some academics involved in such fields have, in fact, begun to recognize this very point: all the way back in 2004, one of the chiefs of this type of specialist, Bruno Latour, dared to ask himself “Was I wrong to participate in the invention of this field known as science studies?” The very idea of questioning the institution of that field can, however, seem preposterous: even now, as Latour also wrote then, there are

entire Ph.D. programs … still running to make sure that good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on, while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives.

Indeed. It’s to the point, in fact, that it would be pretty easy to think that the supposed “left” doesn’t really want to win these arguments at all—that, perhaps, its partisans just wish to go out …

With a bang.