Stayin’ Alive

And the sun stood still, and the moon stayed,
until the people had avenged themselves upon their enemies.
—Joshua 10:13.

 

“A Sinatra with a cold,” wrote Gay Talese for Esquire in 1966, “can, in a small way, send vibrations through the entertainment industry and beyond as surely as a President of the United States, suddenly sick, can shake the national economy”; in 1994, Nobel laureate economist Paul Krugman mused that a “commitment to a particular … doctrine” can eventually set “the tone for policy-making on all issues, even those which may seem to have nothing to do with that doctrine.” Like a world leader—or a celebrity—the health of an idea can have unforeseen consequences; for example, it is entirely possible that the legal profession’s intellectual bias against mathematics has determined the nation’s racial policy. These days, after all, as literary scholar Walter Benn Michaels observed recently, racial justice in the United States is held to what Michaels calls “the ideal of proportional inequality”—an ideal whose nobility, as Nobel Prize-winner Daniel Kahneman and his colleague Amos Tversky have demonstrated, is matched only by its mathematical futility. The law, in short, has what Oliver Roeder of FiveThirtyEight recently called an “allergy” to mathematics; what I will argue is that, as a consequence, minority policy in the United States has a cold.

“The concept that mathematics can be relevant to the study of law,” law professor Michael I. Meyerson observed in 2002’s Political Numeracy: Mathematical Perspectives on Our Chaotic Constitution, “seems foreign to many modern legal minds.” In fact, he continued, to many lawyers “the absence of mathematics is one of law’s greatest appeals.” The strength of that appeal was on display recently in Gill v. Whitford, the Wisconsin gerrymandering case (arising from the state’s 2011 legislative map) discussed by Oliver Roeder—a case that, as Roeder says, “hinges on math” because it involves the invention of a mathematical standard to measure “when a gerrymandered [legislative] map infringes on voters’ rights.” In oral arguments in Gill, Roeder observed, Chief Justice John Roberts said, about the mathematical techniques that are at the heart of the case, that it “may be simply my educational background, but I can only describe [them] as sociological gobbledygook”—a derisory slight that recalls 19th-century Supreme Court Justice Joseph Story’s sneer concerning what he called “men of speculative ingenuity, and recluse habits.” Such statements are hardly foreign in the annals of the Supreme Court: “Personal liberties,” Justice Potter Stewart wrote in a 1975 opinion, “are not rooted in the law of averages.” (Stewart’s sentence, perhaps incidentally, uses a phrase—“law of averages”—found nowhere in the actual study of mathematics.) Throughout the history of American law, in short, there is strong evidence of bias against the study and application of mathematics to jurisprudence.

Yet without the ability to impose that bias on others, even conclusive demonstrations of the law’s skew would not matter—but of course lawyers, as Nick Robinson remarked just this past summer in the Buffalo Law Review, have “dominated the political leadership of the United States.” As Robinson went on to note, “more than half of all presidents, vice presidents, and members of Congress have come from a law background.” This lawyer-heavy structure has had an effect, Robinson says: for instance, he claims “that lawyer-members of Congress have helped foster the centrality of lawyers and courts in the United States.” Robinson’s research then, which aggregates many studies on the subject, demonstrates that the legal profession is in a position to have effects on the future of the country—and if lawyers can affect the future of the country in one fashion, it stands to reason that they may have affected it in others. Not only then may the law have an anti-mathematical bias, but it is clearly positioned to impose that bias on others.

That bias in turn is what I suspect has led Americans to what Michaels calls the theory of “proportional representation” when it comes to justice for minority populations. This theory holds, according to Michaels, that a truly just society would be a “society in which white people were proportionately represented in the bottom quintile [of income] (and black people proportionately represented in the top quintile)”—or, as one commenter on Michaels’ work has put it, it’s the idea that “social justice is … served if the top classes at Ivy League colleges contain a percentage of women, black people, and Latinos proportionate to the population.” Within the legal profession, the theory appears to be growing: as Michaels has also observed, the plaintiffs in “the recent suit alleging discrimination against women at Goldman Sachs” complained of the “‘stark’ underrepresentation” of women in management because women represented “just 29 percent of vice presidents, 17 percent of managing directors, and 14 percent of partners”—percentages that, of course, vary greatly from the roughly 50% of the American population who are women. But while the idea of a world in which the population of every institution mirrors the population as a whole may appear plausible to lawyers, it’s absurd to any mathematician.

People without mathematical training, that is, have wildly inaccurate ideas about probability—precisely the point of the work of social scientists Daniel Kahneman and Amos Tversky. “When subjects are instructed to generate a random sequence of hypothetical tosses of a fair coin,” wrote the two psychologists in 1971 (citing an earlier study), “they produce sequences where the proportion of heads in any short segment stays far closer to .50 than the laws of chance would predict.” In other words, when people are asked to write down the possible results of tossing a coin many times, they invariably give answers that are (nearly) half heads and half tails despite the fact that—as Brian Everitt observed in his 1999 book Chance Rules: An Informal Guide to Probability, Risk, and Statistics—in reality “in, say, 20 tosses of a fair coin, the number of heads is unlikely to be exactly 10.” (Everitt goes on to note that “an exact fifty-fifty split of heads and tails has a probability of a little less than 1 in 5.”) Hence, a small sample of 20 tosses has less than a twenty percent chance of being ten heads and ten tails—a fact that may appear yet more significant when it is noted that the chance of getting exactly 500 heads when flipping a coin 1000 times is less than 3%. Approximating the ideal of proportionality, then, is something that mathematics tells us is not simple or easy to do even once, and yet, in the case of college admissions, advocates of proportional representation suggest that colleges, and other American institutions, ought to be required to do something like what baseball player Joe DiMaggio did in the summer of 1941.
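
(A brief aside for the numerically inclined: the figures Everitt cites are ordinary binomial arithmetic, and can be checked in a few lines of code. The sketch below is my own illustration of that check, not Everitt’s calculation.)

```python
# A minimal check of the binomial figures cited above (an illustration of my
# own, not Everitt's calculation): the chance of an exactly even split of
# heads and tails from a fair coin.
from math import comb

def prob_exact_heads(n_tosses: int, n_heads: int) -> float:
    """Probability of getting exactly n_heads in n_tosses of a fair coin."""
    return comb(n_tosses, n_heads) / 2 ** n_tosses

print(prob_exact_heads(20, 10))     # roughly 0.176, "a little less than 1 in 5"
print(prob_exact_heads(1000, 500))  # roughly 0.025, the "less than 3%" figure
```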

In that year in which “the Blitzkrieg raged” (as the Rolling Stones would write later), the baseball player Joe DiMaggio achieved what the paleontologist and essayist Stephen Jay Gould called “the greatest and most unattainable dream of all humanity, the hope and chimera of all sages and shaman”: the New York Yankee outfielder hit safely in 56 games. Gould doesn’t mean, of course, that all human history has been devoted to hitting a fist-sized sphere, but rather that while many baseball fans are aware of DiMaggio’s feat, what few are aware of is that the mathematics of DiMaggio’s streak shows that it was “so many standard deviations above the expected distribution that it should not have occurred at all.” To make that case, Gould cites Nobel laureate Ed Purcell’s research on the matter.

What that research shows is that, to make it a better-than-even money proposition “that a run of even fifty games will occur once in the history of baseball,” then “baseball’s rosters would have to include either four lifetime .400 batters or fifty-two lifetime .350 batters over careers of one thousand games.” There are, of course, only three men who ever hit more than .350 lifetime (Cobb, Hornsby, and, tragically, Joe Jackson), which is to say that DiMaggio’s streak is, Gould wrote, “the most extraordinary thing that ever happened in American sports.” That in turn is why Gould can say that Joe DiMaggio, even as the Panzers drove a thousand miles of Russian wheatfields, actually attained a state chased by saints for millennia: by holding back, from 15 May to 17 July, 1941, the inevitable march of time like some contemporary Joshua, DiMaggio “cheated death, at least for a while.” To paraphrase Paul Simon, Joe DiMaggio fought a duel that, in every way that can be looked at, he was bound to lose—which is to say, as Gould correctly does, that his victory was in postponing that loss all of us are bound to one day suffer.

Woo woo woo.

What appears to be a simple baseball story, then, actually has a lesson for us here today: it tells us that advocates of proportional representation are thereby suggesting that colleges ought to be more or less required not merely to reproduce Joe DiMaggio’s hitting streak from the summer of 1941, but to do it every single season—a quest that in a practical sense is impossible. The question then must be how such an idea could ever have taken root in the first place—a question that Paul Krugman’s earlier comment about how a commitment to bad thinking about one issue can lead to bad thinking about others may help to answer. Krugman suggested in that essay that one reason why people who ought to know better might tolerate “a largely meaningless concept” was “precisely because they believe[d] they [could] harness it in the service of good policies”—and quite clearly, proponents of the proportional ideal have good intentions, which may be just why it has held on so long despite its manifest absurdity. But good intentions are not enough to ensure the staying power of a bad idea.

“Long streaks always are, and must be,” Gould wrote about DiMaggio’s feat of survival, “a matter of extraordinary luck imposed upon great skill”—which perhaps could be translated, in this instance, by saying that if an idea survives for some considerable length of time it must be because it serves some interest or another. In this case, it seems entirely plausible to think that the notion of “proportional representation” in relation to minority populations survives not because it is just, but instead because it allows the law, in the words of literary scholar Stanley Fish, “to have a formal existence”—that is, “to be distinct, not something else.” Without such a distinction, as Fish notes, the law would be in danger of being “declared subordinate to some other—non-legal—structure of concern,” and if so then “that discourse would be in the business of specifying what the law is.” But the legal desire Fish dresses up in a dinner jacket, attorney David Post of The Volokh Conspiracy website suggests, may merely be the quest to continue to wear a backwards baseball cap.

Apropos of Oliver Roeder’s article about the Supreme Court’s allergy to mathematics, in other words, Post says that not only is there “a rather substantial library of academic commentary on ‘innumeracy’ at the court,” but that “it is unfortunately well within the norms of our legal culture … to treat mathematics and related disciplines as kinds of communicable diseases with which we want no part.” What’s driving the theory of proportional representation, then, may not be the quest for racial justice, or even the wish to maintain the law’s autonomy, but instead the desire of would-be lawyers to avoid mathematics classes. But if so, then by seeking social justice through the prism of the law—which rules out of court at the outset any consideration of mathematics as a possible tool for thinking about human problems, and hence forbids (or at least, as in Gill v. Whitford, obstructs) certain possible courses of action to remedy social issues—advocates for African-Americans and others may be unnecessarily limiting their available options, which may be far wider, and wilder, than anyone viewing the problems of race through the law’s current framework can now see.

Yet—as any consideration of streaks and runs must, eventually, conclude—just because that is how things are at the moment is no reason to suspect that things will remain that way forever: as Gould says, the “gambler must go bust” when playing an opponent, like history itself, with near-infinite resources. Hence, Paul Simon to the contrary, the impressive thing about the Yankee Clipper’s feat in that last summer before the United States plunged into global war is not that after “Ken Keltner made two great plays at third base and lost DiMaggio the prospect of a lifetime advertising contract with the Heinz ketchup company” Joe DiMaggio left and went away. Instead, it is that the great outfielder lasted as long as he did; just so, in Oliver Roeder’s article he mentions that Sanford Levinson, a professor of law at the University of Texas at Austin and one of the best-known American legal scholars, has diagnosed “the problem [as] a lack of rigorous empirical training at most elite law schools”—which is to say that “the long-term solution would be a change in curriculum.” The law’s streak of avoiding mathematics, in other words, may be like all streaks. In the words of the poet of the subway walls,

Koo-koo …

Ka-choo.

Shut Out

But cloud instead, and ever-during dark
Surrounds me, from the cheerful ways of men
Cut off, and for the book of knowledge fair
Presented with a universal blank
Of nature’s works to me expunged and razed,
And wisdom at one entrance quite shut out
Paradise Lost. Book III, 45-50

“Hey everybody, let’s go out to the baseball game,” the legendary 1960s Chicago disc jockey Dick Biondi said in the joke that (according to the myth) got him fired. “The boys,” Biondi is alleged to have said, “kiss the girls on the strikes, and …” In the story, of course, Biondi never finished the sentence—but you see where he was going, which is what makes the story interesting to a specific type of philosopher: the epistemologist. Epistemology is the study of how people know things: the question the epistemologist might ask about Biondi’s joke is, how do you know the ending to that story? For many academics today, the answer can be found in another baseball story, this time told by the literary critic Stanley Fish—a story that, oddly enough, also illustrates the political problems with that wildly popular contemporary concept: “diversity.”

As virtually everyone literate knows, “diversity” is one of the great honorifics of the present: something that has it is, ipso facto, usually held to be better than something that doesn’t. As a virtue, “diversity” has tremendous range, because it applies both in natural contexts—“biodiversity” is all the rage among environmentalists—and in social ones: in the 2003 case of Grutter v. Bollinger, for example, the Supreme Court held that the “educational benefits of diversity” were a “compelling state interest.” Yet, what often goes unnoticed about arguments in favor of “diversity” is that they themselves are dependent upon a rather monoglot account of how people know things—which is how we get back to epistemology.

Take, for instance, Stanley Fish’s story about the late, great baseball umpire Bill Klem. “It ain’t nothin’ til I call it,” Klem supposedly once said in response to a batter’s question about whether the previous pitch was a ball or a strike. (It’s a story I’ve retailed before: cf. “Striking Out”). Fish has used that story, in turn, to illustrate what he views as the central lesson of what is sometimes called “postmodernism”: according to The New Yorker, Fish’s (and Klem’s) point is that “balls and strikes come into being only on the call of an umpire,” instead of being “facts in the world.” Klem’s remark, in other words—Fish thinks—illustrates just how knowledge is what is sometimes called “socially constructed.”

The notion of “social construction” is the idea—as City College of New York professor Massimo Pigliucci recently put the point—that “no human being, or organized group of human beings, has access to a god’s eye [view] of the world.” The idea, in other words, is that meaning is—as Canadian philosopher Ian Hacking described the concept in The Social Construction of What?—“the product of historical events, social forces, and ideology.” Or, to put it another way, that we know things because of our culture, or social group: not by means of our own senses and judgment, but through the people around us.

For Pigliucci, this view of how human beings access reality suggests that we ought therefore to rely on a particular epistemic model: rather than one in which each person ought to judge evidence for herself, we would instead rely on one in which “many individually biased points of view enter into dialogue with each other, yielding a less (but still) biased outcome.” In other words, we should rely upon diverse points of view, which is one reason why Pigliucci says, for instance, that because of the cognitive limitations displayed by individuals, we ought “to work toward increasing diversity in the sciences.” Pigliucci’s reasoning is, of course, also what forms the basis of Grutter: “When universities are granted the freedom to assemble student bodies featuring multiple types of diversity,” wrote defendant Lee Bollinger (then president of the University of Michigan) in an editorial for the Washington Post about the case, “the result is a highly sought-after learning environment that attracts the best students.” “Diversity,” in sum, is a tool to combat our epistemic weaknesses.

“Diversity” is thereby justified by means of a particular vision of epistemology: a particular theory of how people know things. On this theory, we are dependent upon other people in order to know anything. Yet, the very basis of Dick Biondi’s “joke” is that you, yourself, can “fill in” the punchline: it doesn’t take a committee to realize what the missing word at the end of the story is. And what that reality—your ability to furnish the missing word—perhaps illustrates is an epistemic distinction the English economist John Maynard Keynes made in his magisterial 1921 work, A Treatise on Probability: a distinction that troubles the epistemology that underlies the concept of “diversity.”

“Now our knowledge,” Keynes writes in chapter two of that work, “seems to be obtained in two ways: directly, as the result of contemplating the objects of acquaintance; and indirectly, by argument” (italics in original). What Keynes is proposing, in other words, is an epistemic division between two ways of knowing—one of them being much like the epistemic model described by Fish or Pigliucci or Bollinger. As Keynes says, “it is usually agreed that we do not have direct knowledge” of such things as “the law of gravity … the cure for phthisis … [or] the contents of Bradshaw”—things like these, in other words, are only known through chains of reasoning, rather than direct experience. To know items like these, we have to have undergone a kind of socialization, otherwise known as education. We are dependent on other people to know those things.

Yet, as Keynes also recognizes, there is another means of knowing: “From an acquaintance with a sensation of yellow,” the English economist wrote, “I can pass directly to a knowledge of the proposition ‘I have a sensation of yellow.’” In this epistemic model, human beings can know things by immediate apprehension—the chief example of this form of knowing being, as Keynes describes, our own senses. What Keynes says, in short, is that people can know things in more than one way: one way through other people, yes, as Fish et al. say—but also through our own experience.

Or—to put the point differently—Keynes has a “diverse” epistemology. That would, at least superficially, seem to make Keynes’ argument a support for the theory of “diversity”: after all, he is showing how people can know things differently, which would appear to assist Lee Bollinger and Massimo Pigliucci’s argument for diversity in education. If people can know things in different ways, it would then appear necessary to gather more, and different, kinds of people in order to know anything. But just saying so exposes the weakness at the heart of Bollinger and Pigliucci’s ideal of “diversity.”

Whereas Keynes has a “diverse” epistemology, in short, Bollinger and Pigliucci do not: in their conception, human beings can only know things in one way. That is the way Keynes called “indirect”: through argumentation and persuasion—or, as it’s sometimes put, “social construction.” In other words, the defenders of “diversity” have a rather monolithic epistemology, which is why Fish, for instance, once attacked the view that it is possible to “survey the world in a manner free of assumptions about what it is like and then, from that … disinterested position, pick out the set of reasons that will be adequate to its description.” If such a thing were possible, after all, it would be possible to experience a direct encounter with the world—which “diversity” enthusiasts like Fish deny is possible: Fish says, for instance, that “the rhetoric of disinterested inquiry … is in fact”—just how he knows this is unclear—“a very interested assertion of the superiority of one set of beliefs.” In other words, any other epistemological view than their own is merely a deception.

Perhaps, though, this is all just one of the purest cases of an “academic” dispute: eggheads arguing, as the phrase goes, about how many angels can dance on a pin. At least, until one realizes that the nearly-undisputed triumph of the epistemology retailed by Fish and company also has certain quite-real consequences. For example, as the case of Bollinger demonstrates, although the “socially-constructed” epistemology is an excellent means, as has been demonstrated over the past several decades, of—in the words of Fish’s fellow literary critic Walter Benn Michaels—“battling over what skin color the rich kids should have,” it isn’t so great for, say, dividing up legislative districts: a question that, as Elizabeth Kolbert noted last year in The New Yorker, “may simply be mathematical.” But if so, that presents a problem for those who think of their epistemological views as serving a political cause.

Mathematics, after all, is famously not something that can be understood “culturally”; it is, as Keynes—and before him, a silly fellow named Plato—knew, perhaps the foremost example of the sort of knowing demonstrated by Dick Biondi’s joke. Mathematics, in other words, is the chief example of something known directly: when you understand something in mathematics, you understand it either immediately—or not at all. Which, after all, is the significance of Kolbert’s remarks: to say that re-districting—perhaps the most political act of all in a democracy—is primarily a mathematical operation is to say that to understand redistricting, you have to understand directly the mathematics of the operation. Yet if the “diversity” promoters are correct, then only their epistemology has any legitimacy: an epistemology that a priori prevents anyone from sensibly discussing redistricting. In other words, it’s precisely the epistemological blindspots promoted by the often-ostensibly “politically progressive” promoters of “diversity” that allow the current American establishment to ignore the actual interests of actual people.

Which, one supposes, may be the real joke.

The Color of Water

No one gets lucky til luck comes along.
Eric Clapton
     “It’s In The Way That You Use It”
     Theme Song for The Color of Money (1986).

 

 

The greenish tint to the Olympic pool wasn’t the only thing fishy about the water in Rio last month: a “series of recent reports,” Patrick Redford of Deadspin reported recently, “assert that there was a current in the pool at the Rio Olympics’ Aquatic Stadium that might have skewed the results.” Or—to make the point clear in a way the pool wasn’t—the water in the pool flowed in such a way that it gave the advantage to swimmers starting in certain lanes: as Redford writes, “swimmers in lanes 5 through 8 had a marked advantage over racers in lanes 1 through 4.” According, however, to ESPN’s Michael Wilbon—a noted African-American sportswriter—such results shouldn’t be of concern to people of color: “Advanced analytics,” Wilbon wrote this past May, “and black folks hardly ever mix.” To Wilbon, the rise of statistical analysis poses a threat to African-Americans. But Wilbon is wrong: in reality, the “hidden current” in American life holding back both black Americans and all Americans is not analytics—it’s the suspicions of supposedly “progressive” people like Michael Wilbon.

The thesis of Wilbon’s piece, “Mission Impossible: African-Americans and Analytics”—published on ESPN’s race-themed website, The Undefeated—was that black people have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” Whereas, in an earlier age, white people like Thomas Jefferson questioned black people’s literacy, nowadays, it seems, it’s ok to question their ability to understand mathematics—a “ridiculous” (according to The Guardian’s Dave Schilling, another black journalist) stereotype that Wilbon attempts to paint as, somehow, politically progressive: Wilbon, that is, excuses his absurd beliefs on the basis that analytics “seems to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” Yet, while Wilbon isn’t alone in his distrust of analytics, it’s actually just that “philosophy” that may hold the most promise for political progress—not only for African-Americans, but every American.

Wilbon’s argument, after all, depends on a common thesis heard in the classrooms of American humanities departments: when Wilbon says the “greater the dependence on the numbers, the more challenged people are to tell (or understand) the narrative without them,” he is echoing a common argument deployed every semester in university seminar rooms throughout the United States. Wilbon is, in other words, merely repeating the familiar contention, by now essentially an article of faith within the halls of the humanities, that without a framework—or (as it’s sometimes called), “paradigm”—raw statistics are meaningless: the doctrine sometimes known as “social constructionism.”

That argument is, as nearly everyone who has taken a class in the departments of the humanities in the past several generations knows, that “evidence” only points in a certain direction once certain baseline axioms are assumed. (An argument first put about, by the way, by the physician Galen in the second century AD.) As American literary critic Stanley Fish once rehearsed the argument in the pages of the New York Times, according to its terms investigators “do not survey the world in a manner free of assumptions about what it is like and then, from that (impossible) disinterested position, pick out the set of reasons that will be adequate to its description.” Instead, Fish went on, researchers “begin with the assumption (an act of faith) that the world is an object capable of being described … and they then develop procedures … that yield results, and they call those results reasons for concluding this or that.” According to both Wilbon and Fish, in other words, the answers people find depend not on the structure of reality itself, but on the baseline assumptions the researcher begins with: what matters is not the raw numbers, but the contexts within which the numbers are interpreted.

What’s important, Wilbon is saying, is the “narrative,” not the numbers: “Imagine,” Wilbon says, “something as pedestrian as home runs and runs batted in adequately explaining [Babe] Ruth’s overall impact” on the sport of baseball. Wilbon’s point is that a knowledge of Ruth’s statistics won’t tell you about the hot dogs the great baseball player ate during games, or the famous “called shot” during the 1932 World Series—what he is arguing is that statistics only point toward reality: they aren’t reality itself. Numbers, by themselves, don’t say anything about reality; they are only a tool with which to access reality, and by no means the only tool available: in one of Wilbon’s examples, Steph Curry, the great guard for the NBA’s Golden State Warriors, knew he shot better from the corners—an intuition that later statistical analysis bore out. His point is that both Curry’s intuition and the statistical analysis told the same story, implying that there’s no fundamental reason to favor one road to truth over the other.

In a sense, to be sure, Wilbon is right: statistical analysis is merely a tool for getting at reality, not reality itself, and certainly other tools are available. Yet, it’s also true that, as statistician and science fiction author Michael F. Flynn has pointed out, astronomy—now accounted one of the “hardest” of physical sciences, because it deals with obviously real physical objects in space—was once not an observational science, but instead a mathematical one: in ancient times, Chinese astronomers were called “calendar-makers,” and a European astronomer was called a mathematicus. As Flynn says, “astronomy was not about making physical discoveries about physical bodies in the sky”—it was instead “a specialized branch of mathematics for making predictions about sky events.” Without telescopes, in other words, astronomers did not know what, exactly, say, the planet Mars was: all they could do was make predictions, based on mathematical analysis, about what part of the sky it might appear in next—predictions that, over the centuries, became perhaps-startlingly accurate. But as a proto-Wilbon might have said in (for instance) the year 1500, such astronomers had no more direct knowledge of what Mars is than a kindergartner has of the workings of the Federal Reserve.

In the same fashion, Wilbon might point out about the swimming events in Rio, there is no direct evidence of a current in the Olympic pool: the researchers who assert that there was such a current base their arguments on statistical evidence of the races, not examination of the conditions of the pool. Yet the evidence for the existence of a current is pretty persuasive: as the Wall Street Journal reported, fifteen of the sixteen swimmers, both men and women, who swam in the 50-meter freestyle event finals—the one event most susceptible to the influence of a current, because swimmers only swim one length of the pool in a single direction—swam in lanes 4 through 8, and swimmers who swam in outside lanes in early heats and inside lanes in later heats actually got slower. (A phenomenon virtually unheard of in top-level events like the Olympics.) Barry Revzin, of the website Swim Swam, found that a given Olympic swimmer picked up “a 0.2 percent advantage for each lane … closer to [lane] 8,” Deadspin’s Redford reported, and while that could easily seem “inconsequentially small,” Redford remarked, “it’s worth pointing out that the winner in the women’s 50 meter freestyle only beat the sixth-place finisher by 0.12 seconds.” It’s a very small advantage, in other words, which is to say that it’s very difficult to detect—except by means of the very same statistical analysis distrusted by Wilbon. But although it is a seemingly-small advantage, it is enough to determine the winner of the gold medal. Wilbon, in other words, is quite right to say that statistical evidence is not a direct transcript of reality—he’s wrong, however, if he is arguing that statistical analysis ought to be ignored.
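
(For a rough sense of the scale involved, here is some back-of-the-envelope arithmetic of my own; the roughly 24-second race time is an assumption, not a figure from Redford’s article.)

```python
# Rough scale of Revzin's 0.2-percent-per-lane edge, converted into seconds.
# The ~24-second race time is an assumption of mine, not a figure from the article.
PER_LANE_EDGE = 0.002   # 0.2 percent advantage per lane closer to lane 8
RACE_TIME_S = 24.0      # assumed: approximate 50m freestyle final time, in seconds

for lanes_apart in range(1, 8):
    edge_seconds = PER_LANE_EDGE * lanes_apart * RACE_TIME_S
    print(f"{lanes_apart} lane(s) apart: ~{edge_seconds:.2f} s")
# A separation of three or four lanes (~0.14 to 0.19 s) already exceeds the
# 0.12 s that separated gold from sixth place in the women's 50-meter freestyle.
```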

To be fair, Wilbon is not arguing exactly that: “an entire group of people,” he says, “can’t simply refuse to participate in something as important as this new phenomenon.” Yet Wilbon is worried about the growth of statistical analysis because he views it as a possible means for excluding black people. If, as Wilbon writes, it’s “the emotional appeal,” rather than the “intellect[ual]” appeal, that “resonates with black people”—a statement that, if it were written by a white journalist, would immediately cause a protest—then Wilbon worries that, in a sports future run “by white, analytics-driven executives,” black people will be even further on the outside looking in than they already are. (And that’s pretty far outside: as Wilbon notes, “Nate McMillan, an old-school, pre-analytics player/coach, who was handpicked by old-school, pre-analytics player/coach Larry Bird in Indiana, is the only black coach hired this offseason.”) Wilbon’s implied stance, in other words—implied because he nowhere explicitly says so—is that since statistical evidence cannot be taken at face value, but only through screens and filters that owe more to culture than to the nature of reality itself, therefore the promise (and premise) of statistical analysis could be seen as a kind of ruse designed to perpetuate white dominance at the highest levels of the sport.

Yet there are at least two objections to make about Wilbon’s argument: the first being the empirical observation that in U.S. Supreme Court cases like McCleskey v. Kemp (in which the petitioner argued that, according to statistical analysis, murderers of white people in Georgia were far more likely to receive the death penalty than murderers of black people), or Teamsters v. United States (in which—according to Encyclopedia.com—the Court ruled, on the basis of statistical evidence, that the Teamsters union had “engaged in a systemwide practice of minority discrimination”), statistical analysis has been advanced to demonstrate the reality of racial bias. (A demonstration against which, by the way, time and again conservatives have countered with arguments against the reality of statistical analysis that essentially mirror Wilbon’s.) To think, then, that statistical analysis could be inherently biased against black people, as Wilbon appears to imply, is empirically nonsense: it’s arguable, in fact, that statistical analysis of the sort pioneered by people like sociologist Gunnar Myrdal has done at least as much as, if not more than, (say) classes on African-American literature to combat racial discrimination.

The more serious issue, however, is a logical objection: Wilbon’s two assertions are in conflict with each other. To reach his conclusions, Wilbon ignores (like others who make similar arguments) the implications of his own reasoning: statistics ought to be ignored, he says, because only “narrative” can grant meaning to otherwise meaningless numbers—but, if it is so that numbers themselves cannot “mean” without a framework to grant them meaning, then they cannot pose the threat that Wilbon says they might. In other words, if Wilbon is right that statistical analysis is biased against black people, then it means that numbers do have meaning in themselves, while conversely if numbers can only be interpreted within a framework, then they cannot be inherently biased against black people. By Wilbon’s own account, in other words, nothing about statistical analysis implies that such analysis can only be pursued by white people, nor could the numbers themselves demand only a single (oppressive) use—because if that were so, then numbers would be capable of providing their own interpretive framework. Wilbon cannot logically advance both propositions simultaneously.

That doesn’t mean, however, that Wilbon’s argument—the argument, it ought to be noted, of many who think of themselves as politically “progressive”—is not having an effect: it’s possible, I think, that the relative success of this argument is precisely what is causing Americans to ignore a “hidden current” in American life. That current could be described by an “analytical” observation made by professors Sven Steinmo and Jon Watts some two decades ago: “No other democratic system in the world requires support of 60% of legislators to pass government policy”—an observation that, in turn, may be linked to the observable reality that, as political scientists Frances E. Lee and Bruce Oppenheimer have noted, “less populous states consistently receive more federal funding than states with more people.” Understanding the impact of these two observations, and their effects on each other, would, I suspect, throw a great deal of light on the reality of American lives, white and black—yet it’s precisely the sort of reflection that the “social construction” dogma advanced by Wilbon and company appears specifically designed to avoid. While to many, even now, the arguments for “social construction” and such might appear utterly liberatory, it’s possible to tell a tale in which it is just such doctrines that are the tools of oppression today.

Such an account would be, however—I suppose Michael Wilbon or Stanley Fish might tell us—simply a story about the one that got away.

Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.

 

There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view of American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that, many have thought, has long been due in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state elects a number of “electors” equal in number to that state’s representation in Congress; i.e., the number of congresspeople each state is entitled to by population, plus two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives passed a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president who has lost the popular vote. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (perhaps the most exciting game in Series history—the only seventh game ever to have ended with a ninth-inning, walk-off home run) and thus won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million votes. But Harrison not only won more states than Cleveland, but also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues that Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that the total numbers of popular votes do not matter, but rather how those votes are spread around the country.

In this argument, then, runs are to games as votes are to states. It’s an analogy that has an easy appeal to it: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire analogy) and so that understanding appears to transfer easily to the matter of presidential elections. Yet, clever as the analogy is, most people do not understand the purpose of the World Series: although people think the task of the Series is to identify the best baseball team in the major leagues, that is not what it is designed to do. Its purpose is not to discover the best team in baseball, but instead to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams had to play each other. And the Giants saw no reason to do so.

As the legendary Giants manager John McGraw said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance to win a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” But just how much luck is involved is something that the average fan hasn’t considered—though former Caltech physicist Leonard Mlodinow has. In Mlodinow’s book, The Drunkard’s Walk: How Randomness Rules Our Lives, the scientist writes that—just by virtue of doing the math—it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: let’s say that, for every game, we roll a hundred-sided die to determine whether the team with the 55 percent edge wins or not. If we do that four times, there’s a good chance that the inferior team is still in the series: that is, that the superior team has not won all the games. In fact, there’s a real possibility that the inferior team might turn the tables, and instead sweep the superior team. Seven games, in short, is just not enough games to demonstrate conclusively that one team is better than another.
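
(Mlodinow’s figures can also be checked exactly rather than with dice. The sketch below is my own rendering of the standard arithmetic, not Mlodinow’s code; it computes the chance that the weaker team takes a best-of-seven series in the 55 percent and two-thirds cases he describes.)

```python
# The chance that the weaker team wins a best-of-7 series, computed exactly.
# (A sketch of the standard arithmetic, not Mlodinow's own code.)
from math import comb

def series_win_prob(p_game: float, n_games: int = 7) -> float:
    """Probability that a team winning each game with probability p_game
    takes a best-of-n_games series (first to n_games // 2 + 1 wins)."""
    needed = n_games // 2 + 1
    # Sum over k, the losses the eventual winner absorbs before clinching;
    # the clinching victory is always the final game played.
    return sum(comb(needed - 1 + k, k) * p_game ** needed * (1 - p_game) ** k
               for k in range(needed))

for p in (0.55, 2 / 3):
    weaker = 1 - series_win_prob(p)
    print(f"stronger team wins each game with p = {p:.3f}: "
          f"weaker team still takes the series {weaker:.0%} of the time")
# Prints roughly 39% and 17%: Mlodinow's "about 4 times out of 10" and
# "about once every 5 matchups."
```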

In fact, in order to eliminate randomness as much as possible—that is, make it as likely as possible for the better team to win—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even in a case where one team has a two-thirds likelihood of winning a game, it would still take 23 games to make the chance of the weaker team winning the series less than 5 percent—and even then, there would remain a chance that the weaker team could win the series. Mathematically, then, winning a seven-game series is meaningless—there have been just too few games to eliminate the potential for a lesser team to beat a better team.

Just how mathematically meaningless a seven-game series is can be demonstrated by the case of a team that wins only 55 percent of its games against its opponent: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on to say, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emp. added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which, as a matter of fact about the history of the World Series, is simply a point that true baseball professionals have always acknowledged: the World Series is not a competition, but an exhibition.
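
(The 23- and 269-game figures can be checked the same way. The sketch below, again mine rather than Mlodinow’s, searches for the shortest odd-length series in which the weaker team comes out on top less than 5 percent of the time.)

```python
# Searching for the shortest "statistically significant" series: the smallest
# odd n for which the weaker team wins a best-of-n series less than 5 percent
# of the time. (My own sketch, not Mlodinow's code.)
from math import comb

def weaker_wins_series(p_strong: float, n_games: int) -> float:
    """Chance the weaker team wins a best-of-n_games series, computed as the
    chance it would win a majority if all n_games were played out."""
    p_weak = 1 - p_strong
    needed = n_games // 2 + 1
    return sum(comb(n_games, k) * p_weak ** k * p_strong ** (n_games - k)
               for k in range(needed, n_games + 1))

def shortest_significant_series(p_strong: float, alpha: float = 0.05) -> int:
    n = 1
    while weaker_wins_series(p_strong, n) >= alpha:
        n += 2  # only odd lengths guarantee a winner
    return n

print(shortest_significant_series(2 / 3))  # should land near Mlodinow's 23
print(shortest_significant_series(0.55))   # should land near Mlodinow's 269
```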

What the New York Giants were saying in 1904, then—and Mlodinow more recently—is that establishing the real worth of something requires a lot of trials: many, many different repetitions. That’s something that all of us ought to know from experience: to learn anything, for instance, requires a lot of practice. (Even if the famous “10,000 hour rule” New Yorker writer Malcolm Gladwell concocted for his book, Outliers: The Story of Success, has been complicated by the researchers whose original work Gladwell drew upon.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, to use the more natural language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite of the reason Natapoff brought it up. Namely, if the Electoral College is like the World Series, and the World Series is not designed to find the best team in baseball but is instead merely an exhibition, then that implies that the Electoral College is not a serious attempt to find the best president—because what the Law would appear to advise is that, in order to obtain a better result, it is better to gather more voters.
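
(A minimal illustration of the law, in code of my own rather than anything drawn from either source: the running frequency of heads in repeated flips of a fair coin drifts toward the underlying probability of one half as the trials pile up.)

```python
# The Law of Large Numbers in miniature: the observed frequency of heads
# creeps toward 0.5 as the number of coin flips grows. (Illustrative only.)
import random

random.seed(0)
flips, heads = 0, 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while flips < checkpoint:
        heads += random.random() < 0.5   # True counts as 1
        flips += 1
    print(f"{flips:>7} flips: frequency of heads = {heads / flips:.4f}")
```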

Yet the currently-fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or just things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) a vote is cast, rather than whether it was cast? To many, if not most, literary-type intellectuals, the answer, in favor of the former at the expense of the latter, is clear—but as any statistician will tell you, it’s possible for any run of luck to continue for quite a bit longer than the average person might expect. (That’s one reason why, in Mlodinow’s example, it takes a best-of-269 series to sort out two closely-matched baseball teams.) Even so, it remains difficult to believe—as it would seem that many today, both within and without the academy, do—that the umpire can continue to call every pitch a strike.

 

Beams of Enlightenment

And why thou beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?
Matthew 7:3

 

“Do you know what Pied Piper’s product is?” the CEO of the company, Jack Barker, asks his CTO, Richard, during a scene in HBO’s series Silicon Valley—while two horses do, in the background, what Jack is (metaphorically) doing to Richard in the foreground. Jack is the experienced hand brought in to run the company Richard founded as a young programmer; on the other hand, Richard is so ingenuous that Jack has to explain to him the real point of everything they are doing: “The product isn’t the platform, and the product isn’t your algorithm either, and it’s not even the software. … Pied Piper’s product is its stock. Whatever makes the value of that stock go up, that is what we’re going to make.” With that, the television show effectively dramatizes the case many on the liberal left have been trying to make for decades: that the United States is in trouble because of something called “financialization”—or what Kevin Phillips (author of 1969’s The Emerging Republican Majority) has called, in one of the first uses of the term, “a prolonged split between the divergent real and financial economies.” Yet few on that side of the political aisle have considered how their own arguments about an entirely different subject are, more or less, the same as those powering “financialization”—how, in other words, the argument that has enhanced Wall Street at the expense of Main Street—Eugene Fama’s “efficient market hypothesis”—is precisely the same as the liberal left’s argument against the SAT.

That the United States has turned from an economy largely centered around manufacturing to one that centers on services, especially financial ones, can be measured by such data as the fact that the total fraction of America’s Gross Domestic Product consumed by the financial industry is now, according to economist Thomas Philippon of New York University, “around 9%,” while just more than a century ago it was under two percent. Most appear to agree that this is a bad thing: “Our economic illness has a name: financialization,” Time magazine columnist Rana Foroohar argues in her Makers and Takers: The Rise of Finance and the Fall of American Business, while Bruce Bartlett, who worked in both the Reagan and George H.W. Bush Administrations (which is to say that he is not exactly the stereotypical lefty), claimed in the New York Times in 2013 that “[f]inancialization is also an important factor in the growth of income inequality.” In a 2007 Bloomberg News article, Lawrence E. Mitchell—a professor of law at George Washington Law School—denounced how “stock market considerations” have come “to trump those that improve the actual workings of a business.” The consensus view appears to be that it is bad for a business to be, as Jack is on Silicon Valley, more concerned with its stock price than with what it actually does.

Still, if it is such a bad idea, why do companies do it? One possible answer might be found in the timing: the shift seems to have begun some time in the late 1960s. As John Bellamy Foster put it in a 2007 Monthly Review article entitled “The Financialization of Capitalism,” the “fundamental issue of a gravitational shift toward finance in capitalism as a whole … has been around since the late 1960s.” Undoubtedly, that turn was conditioned by numerous historical forces, but it’s also true that it was during the 1960s that the “efficient market hypothesis,” pioneered above all by the research of Eugene Fama of the University of Chicago, became the dominant intellectual force in the study of economics and in business schools—the incubators of the corporate leaders of today. And Fama’s argument was—and is—an intellectual cruise missile aimed at the very idea that the value of a company might be separate from its stock price.

As I have discussed previously (“Lions For Lambs”), Eugene Fama’s 1965 paper “The Behavior of Stock Market Prices” demonstrated that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers”—or in other words, that there was no rational way to beat the stock market. Also known as the “efficient market hypothesis,” the idea is largely that—as Fama’s intellectual comrade Burton Malkiel observed in his book A Random Walk Down Wall Street (which has gone through more than five editions since its first publication in 1973)—“the evidence points mainly toward the efficiency of the market in adjusting so rapidly to new information that it is impossible to devise successful trading strategies on the basis of such news announcements.” Translated, that means that it’s essentially impossible to do better than the market by paying close attention to what investors call a company’s “fundamental value.”
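
(For the curious, here is a toy rendering of what “a series of cumulated random numbers” looks like; it is my own illustration, not a model from Fama or Malkiel: a price path built by adding one memoryless random step per day.)

```python
# A toy "random walk" price path of the kind Fama's phrase describes: each
# day's move is random noise with no memory of previous moves.
# (Illustrative only; not a model taken from Fama or Malkiel.)
import random

random.seed(1)
price = 100.0
path = [price]
for _ in range(250):                 # roughly one trading year of daily steps
    price += random.gauss(0, 1)      # today's move: mean zero, independent of the past
    path.append(price)

print(f"start {path[0]:.2f}, end {path[-1]:.2f}, "
      f"high {max(path):.2f}, low {min(path):.2f}")
```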

Yet, if there’s never a divergence between a company’s real worth and the price of its stock, that means that there’s no means of measuring a company’s real worth other than its stock price. From Fama’s or Malkiel’s perspective, “stock market considerations” simply are “the actual workings of a business.” They argued against the very idea that there even could be such a distinction: that there could be something about a company that is not already reflected in its price.

To a lot of educated people on the liberal-left, of course, such an argument will affirm many of their prejudices: against the evils of usury, and the like. At the same time, however, many of them might be taken aback if it’s pointed out that Eugene Fama’s case against fundamental economic analysis is the same as the case many educators make, when it comes to college admissions, against the SAT. Take, for example, a 1993 argument made in The Atlantic by Stanley Fish, former chairman of the English Department at Duke University and dean of the humanities at the University of Illinois at Chicago.

In “Reverse Racism, or, How the Pot Got to Call the Kettle Black,” the Miltonist argued against noted conservative Dinesh D’Souza’s contention, in 1991’s Illiberal Education, that affirmative action in college admissions tends “‘to depreciate the importance of merit criteria.’” The evidence that D’Souza used to advance that thesis is, Fish tells us, the “many examples of white or Asian students denied admission to colleges and universities even though their SAT scores were higher than the scores of some others—often African-Americans—who were admitted to the same institution.” But, Fish says, the SAT has been attacked as a means of college admissions for decades.

Fish cites David Owen’s None of the Above: Behind the Myth of Scholastic Aptitude as an example. There, Owen says that the

correlation between SAT scores and college grades … is lower than the correlation between height and weight; in other words, you would have a better chance of predicting a person’s height by looking at his weight than you would of predicting his freshman grades by looking only at his SAT scores.

As Fish intimates, most educational professionals these days would agree that the proper way to judge a student is not by the SAT, but by GPA—grade point average.

To judge students by grade point average, however, is just what the SAT was designed to avoid: as Nicholas Lemann describes in copious detail in The Big Test: The Secret History of the American Meritocracy, the whole purpose of the SAT was to discover students whose talents couldn’t be discerned by any other method. The premise of the test’s designers, in short, was that students possessed, as Lemann says, “innate abilities”—and that the SAT could suss those abilities out. What the SAT was designed to do, then, was to find those students stuck in, say, some lethargic, claustrophobic small town whose public schools could not, perhaps, do enough for them intellectually and who stagnated as a result—and put those previously-unknown abilities to work in the service of the nation.

Now, as Lemann remarked in an interview with PBS’ Frontline,  James Conant (president of Harvard and chief proponent of the SAT at the time it became prominent in American life, in the early 1950s) “believed that you would look out across America and you would find just out in the middle of nowhere, springing up from the good American soil, these very intelligent, talented people”—if, that is, America adopted the SAT to do the “looking out.” The SAT would enable American universities to find students that grade point averages did not—a premise that, necessarily, entails believing that a student’s worth could be more than (and thus distinguishable from) her GPA. That’s what, after all, “aptitude” means: “potential ability,” not “proven ability.” That’s why Conant sometimes asked those constructing the test, “Are you sure this is a pure aptitude test, pure intelligence? That’s what I want to measure, because that is the way I think we can give poor boys the best chance and take away the advantage of rich boys.” The Educational Testing Service (the company that administered the SAT), in sum, believed that there could be something about a student that was not reflected in her grades.

To use an intellectual’s term, that means that the argument against the SAT is isomorphic with the “efficient market hypothesis.” In biology, two structures are isomorphic with each other if they share a form or structure: a human eye is isomorphic with an insect’s eye because they both take in visual information and transmit it to the brain, even though they have different origins. Hence, as biologist Stephen Jay Gould once remarked, two arguments are isomorphic if they are “structurally similar point for point, even though the subject matter differs.” Just as Eugene Fama argued that a company could not be valued other than by its stock price—which has had the practical consequence that a company’s product is now not whatever business it is ostensibly in, but its stock price—educational professionals have argued that the only way to measure a student’s value is to look at her grades.

Now, does that mean that the “financialization” of the United States’ economy is the fault of the liberal left, instead of the usual conservative suspects? Or, to put it more provocatively, is the rise of the 1% at the expense of the 99% the fault of affirmative action? The short answer, obviously, is that I don’t have the slightest idea. (But then, neither do you.) What it does mean, I think, is that at least some of what’s happened to the United States in the past several decades is due to patterns of thought common to both sides of the American political congregation: most perniciously, the related notions that all value is always and everywhere visible, and that value requires no time or patience to manifest itself—and that at least some of the erosion of the contrary ideas is due to the efforts of people who meant well. Granted, it’s always hardest to admit wrongdoing when not only were your intentions pure but the immediate effects were good as well—yet such an admission is also very much more powerful. The point, anyway, is that if you are trying to persuade, it’s probably best to avoid that other four-lettered word associated with horses.

 

 

Double Vision

Ill deeds are doubled with an evil word.
The Comedy of Errors. III, ii

The century just past had been both one of the most violent ever recorded—and also perhaps the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, putting in question truths that had been long established. Under these circumstances a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly-astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically-efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making, or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that allowed new pieces of writing and new ideas to enter and flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of writers who probably would not have been studied—along with a lot of people who might not have done the studying—entered the curriculum, neither of which would have happened had the change of mind Fish’s remark signified not become standard in American classrooms.

I put things in the somewhat cumbersome way I do in the last sentence because of course Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist, James Baldwin, similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, sometime between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion then that there are two distinct “realms” of truth is a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as during Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in his other studies through the rest of the curriculum—an experience that, as the case of Averroes before him demonstrated, he was not alone in having. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” And even more: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth.” Or, in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold to such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commentator, Vincent P. Benitez, has put it, either way Siger’s work highlighted just how the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not for F. Scott Fitzgerald later—posed a problem for the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it’s one that Aristotle himself said was the most essential to any rational thought whatever. That rule is usually known as the Law of Non-Contradiction, traditionally placed as the second of the three classical rules of logic in the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—either A or not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commentators, Avicenna (Ibn-Sina), put it in one of its most famous formulations—that rule goes like this: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.
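
In modern propositional notation (a standard textbook rendering, not anything Aristotle or Avicenna themselves wrote), the three classical laws read:

\begin{align*}
\text{Identity:} \qquad & A \rightarrow A \\
\text{Non-Contradiction:} \qquad & \neg (A \wedge \neg A) \\
\text{Excluded Middle:} \qquad & A \vee \neg A
\end{align*}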

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not, aside from members of the Bush Administration legal team (and, for that matter, nearly every regime known to history) and viewers of the television show 24, against torture? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially if one is unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing it on the sly: each car’s software could detect when the car’s engine was being tested by government monitors, and if so could reduce the pollutants that engine was emitting. Just more than six months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me, however, about this story in relation to this talk about academics and monks was a curious article put out by The New Yorker in October of 2015. Entitled “An Engineering Theory of the Volkswagen Scandal,” the piece—by Paul Kedrosky, perhaps significantly “a venture investor and a former equity analyst”—explains these events as perhaps not the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s idea of shades of gray.

Kedrosky takes his theory from a book by sociologist Diane Vaughan, about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched Challenger at colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the shuttle blew into thousands of pieces moments after liftoff. Kedrosky’s attempt at an analogy is that maybe the Volkswagen scandal developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s thrust is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story; this one, a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely-acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara, it’s entirely plausible to think of the entire financial meltdown of 2007-8, which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) and two arms of the United States government (Fannie Mae and Freddie Mac), and is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments, as essentially analogous to Diane Vaughan’s view of the Challenger disaster—or Kedrosky’s view of Volkswagen’s cavalier thoughts about environmental regulation. To put it in another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions or Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes these men modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—like the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck, or the “Schroedinger’s Cat” thought experiment, whereby the cat is neither dead nor alive until the box is opened, or the fact that light is both a wave and a particle—designed to show how out-of-date the Law of Non-Contradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence in the same article of Arthur Little’s I previously cited. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remarks about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of this article of Little’s, written thirty years before his own.

But then again, if I have no obligation to be right, what does it matter?

This Doubtful Strife

Let me be umpire in this doubtful strife.
Henry VI. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another milestone on the path to computerized refereeing—a march that, she says, took another step just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it’s more than questionable, in other words, whether the humanities’ opposition to the sciences hasn’t had pernicious rather than beneficial effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But anyone taking a real look around campus shouldn’t find it that confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it so that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” it’s a point that cuts across national boundaries; Pinsker also reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been thought of as radical political movements, they could also be thought of as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also of the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. Whatever the underlying substrate, it is simply the case that those at the top of a pyramid will spend less than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Even were Romney to become more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, by dividing the money more equally, more economic activity is generated—and hence the more equal society is also the more prosperous society.
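
Stiglitz’s arithmetic can be made concrete with a back-of-the-envelope sketch. The spending rates below (how much of each dollar a very rich household spends versus a $43,400 earner) are illustrative assumptions of mine, not Stiglitz’s figures; only the $21.7 million and the 500 jobs come from his example.

# Back-of-the-envelope version of Stiglitz's point. The spending rates
# (marginal propensities to consume) are illustrative assumptions only.
TOTAL = 21_700_000            # Romney's reported 2010 income, per Stiglitz

def spending(total, recipients, spend_rate):
    # total consumption if `total` is split evenly among `recipients`,
    # each spending `spend_rate` of what they receive
    return recipients * (total / recipients) * spend_rate

concentrated = spending(TOTAL, 1, 0.10)     # assume a very rich household spends ~10%
dispersed    = spending(TOTAL, 500, 0.90)   # assume a $43,400 earner spends ~90%

print(f"one recipient:  ${concentrated:,.0f} spent")
print(f"500 recipients: ${dispersed:,.0f} spent")
# Under these assumed rates the same $21.7 million generates roughly nine
# times as much consumption when divided into 500 salaries, which is the
# point: dispersion, not concentration, drives spending.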

Still, to understand Stiglitz’s point requires understanding a sequence of connected ideas—among them a basic understanding of mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on a rather different cast than the one it is often given. By training their students to ignore the evidence—and more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st century America. Even worse, they fail to give students a means of combating that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

Art Will Not Save You—And Neither Will Stanley

 

But I was lucky, and that, I believe, made all the difference.
—Stanley Fish. “My Life Report” 31 October 2011, New York Times. 

 

Pfc. Bowe Bergdahl, United States Army, is the subject of the new season of Serial, the public-radio podcast that tells “One story. Week by week.” as the advertising tagline has it. The show is about Bergdahl because of what he chose to do on the night of 30 June 2009: as Serial reports, that night he walked off his “small outpost in eastern Afghanistan and into hostile territory,” where he was captured by Taliban guerrillas and held prisoner for nearly five years. Bergdahl’s actions have led some to call him a deserter and a traitor; as a result of leaving his unit Bergdahl faces a life sentence from a military court. But the line Bergdahl crossed when he stepped beyond the concertina wire and into the desert of Paktika Province was far greater than the line between a loyal soldier and a criminal. When Bowe Bergdahl wandered into the wilderness, he also crossed the line between the sciences and the humanities—and demonstrated why the political hopes some people place in the humanities are not only illogical, but arguably an obstacle to actual political progress.

Bergdahl can be said to have crossed that line because what happens to him when he is tried by a military court will, likely, turn on what the intent behind his act was: in legal terms, this is known as mens rea, which is Latin for “guilty mind.” Intent is one of the necessary components prosecutors must prove to convict Bergdahl for desertion: according to Article 85 of the Uniform Code of Military Justice, to be convicted of desertion Bergdahl must be shown to have had the “intent to remain away” from his unit “permanently.” It’s this matter of intent that demonstrates the difference between the humanities and the sciences.

The old devil, Stanley Fish, once demonstrated that border in an essay in the New York Times designed to explain what it is that literary critics, and other people who engage in interpretation, do, and how it differs from other lines of work:

Suppose you’re looking at a rock formation and see in it what seems to be the word ‘help.’ You look more closely and decide that, no, what you’re seeing is an effect of erosion, random marks that just happen to resemble an English word. The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

To put it another way, matters of interpretation concern agents who possess intent: any other kind of discussion is of no concern to the humanities. Conversely, the sciences can be said to concern all those things not produced by an agent, or more specifically an agent who intended to convey something to some other agent.

It’s a line that seems clear enough, even in what might be marginal cases: when a beaver builds a dam, surely he intends to build that dam, but it also seems inarguable that the beaver intends nothing more to be conveyed to other beavers than, “here is my dam.” More questionable cases might be when, say, a bird or some other animal performs a “mating dance”: surely the bird intends his beloved to respond, but still it would seem ludicrous to put a scholar of, say, Jane Austen’s novels to the task of recovering the bird’s message. That would certainly be overkill.

Yes yes, you will impatiently say, but what has that to do with Bergdahl? The answer, I think, might be this: if Bergdahl’s lawyer had a scientific, instead of a humanistic, sort of mind, he might ask how many soldiers were stationed in Afghanistan during Bergdahl’s time there, and how many overall. The reason a scientist would ask that question about, say, a flock of birds he was studying is because, to a scientist, the overall numbers matter. The reason why they matter demonstrates just what the difference between science and the humanities is, but also why the faith some place in the political utility of the humanities is ridiculous.

The reason the overall numbers of the flock would matter to a scientist is that sample size matters: a behavior exhibited by one bird in a flock of millions is probably not as significant as the same behavior exhibited by one bird in a flock of twelve. As Nassim Taleb put it in his book, Fooled By Randomness, how impressive it is if a monkey has managed to type a verbatim copy of the Iliad “Depends On The Number of Monkeys.” “If there are five monkeys in the game,” Taleb elaborates, “I would be rather impressed with the Iliad writer”—but if, on the other hand, “there are a billion to the power one billion monkeys I would be less impressed.” Or to put it in another context, the “greater the number of businessmen, the greater the likelihood of one of them performing in a stellar manner just by luck.” What matters to a scientist, in other words, isn’t just what a given bird does—it’s how big the flock was in the first place.
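
Taleb’s point is easy to simulate. In the sketch below, the pool sizes, the coin-flip model of yearly performance, and the ten-year-streak criterion are all arbitrary assumptions chosen for illustration; the only thing it is meant to show is how the chance that somebody compiles a flawless record by pure luck depends on how many people are playing.

# How impressive is a perfect ten-year record? It "Depends On The Number
# of Monkeys": the pool sizes and the coin-flip model of yearly success
# are assumptions made purely for illustration.
import random

def chance_of_a_lucky_star(pool_size, years=10, trials=1000, seed=7):
    # estimated probability that at least one member of a pool of
    # coin-flipping "traders" has a winning year every year for `years`
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        if any(all(random.random() < 0.5 for _ in range(years))
               for _ in range(pool_size)):
            hits += 1
    return hits / trials

for pool in (5, 1_000, 100_000):
    print(f"{pool:>7} traders -> P(at least one perfect streak) ~ {chance_of_a_lucky_star(pool):.3f}")
# With five traders, a ten-year streak would be astonishing; with a
# hundred thousand, somebody is nearly certain to have one by luck alone.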

To a lawyer, of course, none of that would be significant: the court that tries Bergdahl will not view that question as a relevant one in determining whether he is guilty of the crime of desertion. That is because the law is a discipline concerned with interpretation, and so such a question will have been ruled out of court, as we say, before the court has even met. To consider how many birds in the flock there were when one of them behaved strangely, in other words, is to have a priori ceased to consider that bird as an agent: when one asks how many other birds there are, the implication is that what matters is simply the role of chance rather than any intent on the part of the bird. Any lawyer that brought up the fact that Bergdahl was the only one out of so many thousands of soldiers to have done what he did, without taking up the matter of Bergdahl’s intent, would not be acting as a lawyer.

By the way, in case you’re wondering, roughly 65,000 soldiers were in Afghanistan by early October of 2009, behind the “surge” ordered by President Barack Obama shortly after taking office. The number, according to a contemporary story by The Washington Post, would be “more than double the number there when Bush left office,” which is to say that when Bergdahl left his tiny outpost at the end of June that year, the military was in the midst of a massive buildup of troops. The sample size, in Taleb’s terms, was growing rapidly at that time—with what effect, if any, on Bergdahl’s situation I await enlightenment.

Whether that matters or not in terms of Bergdahl’s story—in Serial or anywhere else—remains to be seen; as a legal matter it would be very surprising if any military lawyer brought it up. What that, in turn, suggests is that the caution with which Stanley Fish has greeted the hopes of many in the profession of literary study for applying such work to actual political change is thoroughly justified: “when you get to the end” of the road many of those within the humanities have been traveling at least since the 1960s or 70s, Fish has remarked for instance, “nothing will have changed except the answers you might give to some traditional questions in philosophy and literary theory.” It’s a warning whose force may only now be reaching its peak as the nation realizes that, after all, the great political story of our time has not been about the minor league struggles within academia, but rather the story of how a small number of monkeys have managed to seize huge proportions of the planet’s total wealth: as Bernie Sanders, the political candidate, tweeted recently in a claim rated “True” by Politifact, “the Walton family of Walmart own more wealth than the bottom 40 percent of America.”

In that story, the intent of the monkeys hardly matters.

Talk That Talk

Talk that talk.
“Boom Boom.”
    John Lee Hooker. 1961.

 

Is the “cultural left” possible? What I mean by “cultural left” is those who, in historian Todd Gitlin’s phrase, “marched on the English department while the Right took the White House”—and in that sense a “cultural left” is surely possible, because we have one. Then again, however, there are a lot of things that exist yet have little rational ground for doing so, such as the Tea Party or the concept of race. So, did the strategy of leftists invading the nation’s humanities departments ever really make any sense? In other words, is it even possible to conjoin a sympathy for and solidarity with society’s downtrodden with a belief that the means to further their interests is to write, teach, and produce art and other “cultural” products? Or, is that idea like using a chainsaw to drive nails?

Despite current prejudices, which often these days depict “culture” as on the side of the oppressed, history suggests the answer is the latter, not the former: in reality, “culture” has usually acted hand-in-hand with the powerful—as it must, given that it is dependent upon some people having sufficient leisure and goods to produce it. Throughout history, art’s medium has simply been too much for its ostensible message—it’s depended on patronage of one sort or another. Hence, a potential intellectual weakness of basing a “left” around the idea of culture: the actual structure of the world of culture simply is the way that the fabulously rich Andrew Carnegie argued society ought to be in his famous 1889 essay, “The Gospel of Wealth.”

Carnegie’s thesis in “The Gospel of Wealth” after all was that the “superior wisdom [and] experience” of the “man of wealth” ought to determine how to spend society’s surplus. To that end, the industrialist wrote, wealth ought to be concentrated: “wealth, passing through the hands of the few, can be made a much more potent force … than if it had been distributed in small sums to the people themselves.” If it’s better for ten people to have $100,000 each than for a hundred to have $10,000, then it ought to be that much better to have one person with a million dollars. Instead of allowing that money to wander around aimlessly, the wealthiest—for Carnegie, a category interchangeable with “smartest”—ought to have charge of it.

Most people today, I think, would easily spot the logical flaw in Carnegie’s prescription: just because somebody has money doesn’t make them wise, or even that intelligent. Yet while that is certainly true, the obvious flaw in the argument obscures a deeper flaw—at least if considering the arguments of the trader and writer Nassim Taleb, author of Fooled by Randomness and The Black Swan. According to Taleb, the problem with giving power to the wealthy isn’t just that knowing something about someone’s wealth doesn’t necessarily guarantee intelligence—it’s that, over time, the leaders of such a society are likely to become less, rather than more, intelligent.

Taleb illustrates his case by, perhaps coincidentally, reference to “culture”: an area that he correctly characterizes as at least as unequal as, if not more unequal than, any other aspect of human life. “It’s a sad fact,” Taleb wrote not long ago, “that among a large cohort of artists and writers, almost all will struggle (say, work for Starbucks) while a small number will derive a disproportionate share of fame and attention.” Only a vanishingly small number of such cultural workers are successful—a reality that is even more pronounced when it comes to cultural works themselves, according to Stanford professor of literature Franco Moretti.

Investigating early lending libraries, Moretti found that the “smaller a collection is, the more canonical it is” [emp. original]; and also, “small size equals safe choices.” That is, of the collections he studied, he found that the smaller they were the more homogenous they were: nearly every library is going to have a copy of the Bible, for instance, while only a very large library is likely to have, say, copies of the Dead Sea Scrolls. The world of “culture,” then, just is the way Carnegie wished the rest of the world to be: a world ruled by what economists call a “winner-take-all” effect, in which increasing amounts of a society’s spoils go to fewer and fewer contestants.

Yet, whereas according to Carnegie’s theory this is all to the good—on the theory that the “winners” deserve their wins—according to Taleb what actually results is something quite different. A “winner-take-all” effect, he says, “implies that those who, for some reason, start getting some attention can quickly reach more minds than others, and displace the competitors from the bookshelves.” So even though two competitors might be quite close in quality, whoever is a contest’s winner gets everything—and what that means is, as Taleb says about the art world, “that a large share of the success of the winner of such attention can be attributable to matters that lie outside the piece of art itself, namely luck.” In other words, it’s entirely possible that “the failures also have the same ‘qualities’ attributable to the winner”: the differences between them might not be much, but who now knows about Ben Jonson, William Shakespeare’s playwriting contemporary?

Further, consider what that means over time. Over-rewarding those who might happen to have caught some small edge, in other words, tends to magnify small initial differences. What that would mean is that someone who might possess more over-all merit, but who happened to have been overlooked for some reason, would tend to be buried by anyone who just happened to have had an advantage—deserved or not, small or not. And while, considered from the point of view of society as a whole, that’s bad enough—because then the world isn’t using all the talent it has available—think about what happens to such a society over time: contrary to Andrew Carnegie’s theory, that society would tend to produce less capable, not more capable, leaders, because it would be more—not less—likely that they reached their position by sheer happenstance rather than merit.
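
A minimal cumulative-advantage simulation makes that dynamic visible. Everything in the sketch below is invented for illustration: two equally talented artists, one of whom starts with a single accidental extra fan, and newcomers who pay attention to whoever is already more visible.

# Minimal "rich get richer" sketch: two equally talented artists, one of
# whom starts with one extra early fan by pure accident. All numbers are
# invented for illustration.
import random

def final_share_of_A(newcomers=5_000, seed=None):
    rng = random.Random(seed)
    attention = {"A": 2, "B": 1}   # A's entire head start: one extra early fan
    for _ in range(newcomers):
        total = attention["A"] + attention["B"]
        # each newcomer gravitates toward whoever is already more visible
        pick = "A" if rng.random() < attention["A"] / total else "B"
        attention[pick] += 1
    return attention["A"] / (attention["A"] + attention["B"])

shares = [final_share_of_A(seed=s) for s in range(200)]
lopsided = sum(1 for s in shares if s > 0.7 or s < 0.3) / len(shares)
print(f"average final share of attention for A: {sum(shares) / len(shares):.2f}")
print(f"runs ending lopsided (one side above 70%): {lopsided:.0%}")
# One accidental early fan pushes A's expected final share to roughly
# two-thirds, and in roughly six out of ten runs one side ends up with
# more than 70% of the attention: small initial luck, large final gap.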

A society, in other words, that was attempting to maximize the potential talent available to it—and it seems little arguable that such is the obvious goal—should not be trying to bury potential talent, but instead to expose as much of it as possible: to get it working, doing the most good. But whatever the intentions of those involved in it, the “culture industry” as a whole is at least as regressive and unequal as any other: whereas in other industries “star” performers usually only emerge after years and years of training and experience, in “culture” many times such performers either emerge in youth or not at all. Of all parts of human life, in fact, it’s difficult to think of one more like Andrew Carnegie’s dream of inequality than culture.

In that sense, then, it’s hard to think of a worse model for a leftish kind of politics than culture, which perhaps explains why, despite the fact that our universities are bulging with professors of art and literature and so on proclaiming “power to the people,” the United States is as unequal a place today as it has been at any time since the 1920s. For one thing, such a model stands in the way of critiques of American institutions that are built according to the opposite, “Carnegian,” theory—and many American institutions are built according to such a theory.

Take the U.S. Supreme Court, where—as Duke University professor of law Jedediah Purdy has written—the “country puts questions of basic principle into the hands of just a few interpreters.” That, in Taleb’s terms, is bad enough: the fewer the people doing the deciding, the greater the variability in outcome—which also means a potentially greater role for chance. It’s worse when one considers that the court is an institution that only irregularly gains new members: the appointment of a new Supreme Court justice depends on whoever happens to be president and on the lifespan of somebody else, just for starters. All of these facts, Taleb’s work suggests, imply that the selection of Supreme Court justices is prone to chance—and thus that Supreme Court verdicts are too.
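
That variability claim is itself a statistical one, and a small sketch can illustrate it. The “position” scores and panel sizes below are invented purely for illustration and describe no actual court; the only point is the arithmetic of small samples.

# Why fewer deciders means more variable outcomes: the spread of a
# panel's average position shrinks roughly as 1/sqrt(panel size).
# The scores and panel sizes are invented for illustration only.
import random
import statistics

def spread_of_panel_average(panel_size, trials=2000, seed=11):
    rng = random.Random(seed)
    averages = [
        statistics.fmean(rng.gauss(0, 1) for _ in range(panel_size))
        for _ in range(trials)
    ]
    return statistics.stdev(averages)

for size in (9, 99, 999):
    print(f"panel of {size:>3}: spread of the panel average ~ {spread_of_panel_average(size):.3f}")
# A nine-member panel drawn from the same population swings about ten
# times as widely as a 999-member one, so who happens to be selected
# matters far more when the panel is small.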

None of these things are, I think any reasonable person would say, desirable outcomes for a society. To leave some of the most important decisions of any nation potentially exposed to chance, as the structure of the United States Supreme Court does, seems particularly egregious. To argue against such a structure, however, depends on a knowledge of probability, a background in logic and science and mathematics—not a knowledge of the history of the sonnet form or the films of Jean-Luc Godard. And yet, Americans today are told that “the left” is primarily a matter of “culture”—which is to say that, though a “cultural left” is apparently possible, it may not be all that desirable.

 

 

 

His Dark Materials

But all these in their pregnant causes mixed
Confusedly, and which thus must ever fight,
Unless the Almighty Maker them ordain
His dark materials to create more worlds
—Paradise Lost II, 913-16

One of the theses of what’s known as the “academic Left” in America is that “nothing is natural,” or, as the literary critic (and “tenured radical”) Stanley Fish more properly puts it, “the thesis that the things we see and the categories we place them in … have their source in culture rather than nature.” It’s a thesis, however, that seems to be obviously wrong in the case of professional golf. Without taking the time to do a full study of the PGA Tour’s website, which does list place of birth, it seems undeniable that most of today’s American tour players originate south of the Mason-Dixon line: either in the former Confederacy or in other Sun Belt states. Thus it seems difficult to argue that there’s something about “Southern culture” that gives Southerners a leg up toward the professional ranks, rather than just the opportunity to play golf more times a year.

Let’s just look, in order to keep things manageable, at the current top ten: Jordan Spieth, this year’s Masters winner, is from Texas, while Jimmy Walker, in second place, is just from up the road in Oklahoma. Rory McIlroy doesn’t count (though he is from Northern Ireland, for what that’s worth), while J.B. Holmes is from Kentucky. Patrick Reed is also from Texas, and Bubba Watson is from Florida. Dustin Johnson is from South Carolina, while Charley Hoffman is from southern California. Hideki Matsuyama is from Ehime, Japan, which is located on the southern island of Shikoku in the archipelago, while Robert Streb rounds out the top ten and keeps the score even between Texas and Oklahoma.

Not until we reach Ryan Moore, at the fifteenth spot, do we find a golfer from an indisputably Northern state: Moore is from Tacoma, Washington. Washington, however, was not admitted to the Union until 1889; not until the seventeenth spot do we find a golfer from a Civil War-era Union state besides California. Gary Woodland, as it happens one of the longest drivers on tour, is from Kansas.

This geographic division has largely been stable in the history of American golf. It’s true of course that many great American golfers were Northerners, particularly at the beginnings of the game (like Francis Ouimet, “Chick” Evans, or Walter Hagen—from Massachusetts, Illinois, and New York respectively), and arguably the greatest of all time was from Ohio: Jack Nicklaus. But Byron Nelson and Ben Hogan were Texans, and of course Bobby Jones, one of the top three golfers ever, was a Georgian.

Yet while it might be true that nearly all of the great players are Southern, the division of labor in American golf is that nearly all of the great courses are Northern. In the latest Golf Digest ranking for instance, out of the top twenty courses only three—Augusta National, which is #1, Seminole in Florida, and Kiawah in South Carolina—are in the South. New York (home to Winged Foot and Shinnecock, among others) and Pennsylvania (home to Merion and Oakmont) had the most courses in the top twenty; other Northern states included Michigan, Illinois, and Ohio. If it were access to great courses that made great golfers, in other words—a thesis that would appear to have a greater affinity with the notion that “culture,” rather than “nature,” is what produces great golfers—then we’d expect the PGA Tour to be dominated by Northerners.

That of course is not so, which perhaps makes it all the stranger that, if looked at by region, it is usually “the South” that champions “culture” and “the North” that champions “nature”—at least if you consider, as a proxy, how evolutionary biology is taught. Consider for instance a 2002 map generated by Lawrence S. Lerner of California State University at Long Beach:

[Map: Lawrence S. Lerner’s 2002 state-by-state grading of how evolution is treated in public-school science standards]

(Link here: http://bigthink.com/strange-maps/97-nil-where-and-how-evolution-is-taught-in-the-us). I realize that the map may be dated now, but still—although with some exceptions—the map generally shows that evolutionary biology is at least a controversial idea in the states of the former Confederacy, while Union states like Connecticut, New Jersey, and Pennsylvania are ranked by Professor Lerner as “Very good/excellent” in the matter of teaching Darwinian biology. In other words, it might be said that the states that are producing the best golfers are both the ones with the best weather and a belief that nature has little to do with anything.

Yet, as Professor Fish’s remarks above demonstrate, it’s the “radical” humanities professors of the nation’s top universities that are the foremost proponents of the notion that “culture” trumps “nature”—a fact that the cleverest creationists have not let slide. An article entitled “The Postmodern Sin of Intelligent Design Creationism” in a 2010 issue of Science and Education, for instance, lays out how “Intelligent Design Creationists” “try to advance their premodern view by adopting (if only tactically) a radical postmodern perspective.” In Darwinism and the Divine: Evolutionary Thought and Natural Theology, Alister McGrath argues not only “that it cannot be maintained that Darwin’s theory caused the ‘abandonment of natural theology,’” but also approvingly cites Fish: “Stanley Fish has rightly argued that the notion of ‘evidence’ is often tautologically determined by … interpretive assumptions.” So there really is a sense in which the deepest part of the Bible Belt fully agrees with the most radical scholars at Berkeley and other top schools.

In Surprised By Sin: The Reader in Paradise Lost, Stanley Fish’s most famous work of scholarship, Fish argues that Satan is evil because he is “the poem’s true materialist”—and while Fish might say that he is merely reporting John Milton’s view, not revealing his own, still it’s difficult not to take away the conclusion that there’s something inherently wrong with the philosophical doctrine of materialism. (Not to be confused with the vulgar notion that life consists merely in piling up stuff, the philosophic version says that all existence is composed only of matter.) Or with the related doctrine of empiricism: “always an experimental scientist,” Fish has said more recently in the Preface to Surprised By Sin’s Second Edition, Satan busies himself “by mining the trails and entrails of empirical evidence.” Fish of course would be careful to distance himself from more vulgar thinkers regarding these matters—a distance that is there, sure—but it’s difficult to see why creationists shouldn’t mine him for their own views.

Now, one way to explain that might be that both Fish and his creationist “frenemies” are drinking from the Pure Light of the Well of Truth. But there’s a possible materialistic candidate to explain just why humanities professors might end up with views similar to those of the most fundamentalist Christians: a similar mode of production. The political scientist Anne Norton remarks, in a book about the conservative scholar Leo Strauss, that the pedagogical technique pursued by Strauss—reading “a passage in a text” and asking questions about it—is also one pursued in “the shul and the madrasa, in seminaries and in Bible study groups.” At the time of Strauss’ arrival in the United States as a refugee from a 1930s Europe about to be engulfed in war, “this way of reading had fallen out of favor in the universities,” but as a result of Strauss’ career at the University of Chicago, along with those of the philosopher Mortimer Adler (who founded the Great Books Program) and Robert Hutchins, it’s become at least a not-untypical pedagogical method in the humanities since.

At the least, that mode of humanistic study would explain what the philosopher Richard Rorty meant when he repeated Irving Howe’s “much-quoted jibe—‘These people don’t want to take over the government; they just want to take over the English Department.’” It explains, in other words, just how the American left might have “become an object of contempt,” as Rorty says—because it is a left that no longer believes that “the vast inequalities within American society could be corrected by using the institutions of a constitutional democracy.” How could it, after all, given a commitment against empiricism or materialism? Taking a practical perspective on the American political machinery would require taking on just the beliefs that are suicidal if your goal is to achieve tenure in the humanities at Stanford or Yale.

If you happen to think that most things aren’t due to the meddling of supernatural creatures, and you’ve given up on thoughts of tenure because you dislike both creationist nut-jobs and that “largely academic crowd cynical about America, disengaged from practice, and producing ever-more-abstract, jargon-ridden interpretations of cultural phenomena,” while at the same time you think that putting something in the place of God called “the free market”—which is what, exactly?—isn’t the answer either, why, then the answer is perfectly natural.

You are writing about golf.