Shut Out

But cloud instead, and ever-during dark
Surrounds me, from the cheerful ways of men
Cut off, and for the book of knowledge fair

And wisdom at one entrance quite shut out
Paradise Lost. Book III, 45-50

“Hey everybody, let’s go out to the baseball game,” the legendary 1960s Chicago disc jockey Dick Biondi said in the joke that (according to the myth) got him fired. “The boys,” Biondi is alleged to have said, “kiss the girls on the strikes, and …” In the story, of course, Biondi never finished the sentence—but you see where he was going, which is what makes the story interesting to a specific type of philosopher: the epistemologist. Epistemology is the study of how people know things; the question the epistemologist might ask about Biondi’s joke is, how do you know the ending to that story? For many academics today, the answer can be found in another baseball story, this time told by the literary critic Stanley Fish—a story that, oddly enough, also illustrates the political problems with that wildly popular contemporary concept: “diversity.”

As virtually everyone literate knows, “diversity” is one of the great honorific terms of the present: something that has it is, ipso facto, usually held to be better than something that doesn’t. As a virtue, “diversity” has tremendous range, because it applies both in natural contexts—“biodiversity” is all the rage among environmentalists—and in social ones: in the 2003 case of Grutter v. Bollinger, for example, the Supreme Court held that the “educational benefits of diversity” were a “compelling state interest.” Yet, what often goes unnoticed about arguments in favor of “diversity” is that they themselves depend upon a rather monolithic account of how people know things—which is how we get back to epistemology.

Take, for instance, Stanley Fish’s story about the late, great baseball umpire Bill Klem. “It ain’t nothin’ til I call it,” Klem supposedly once said in response to a batter’s question about whether the previous pitch was a ball or a strike. (It’s a story I’ve retailed before: cf. “Striking Out”). Fish has used that story, in turn, to illustrate what he views as the central lesson of what is sometimes called “postmodernism”: according to The New Yorker, Fish’s (and Klem’s) point is that “balls and strikes come into being only on the call of an umpire,” instead of being “facts in the world.” Klem’s remark, in other words—Fish thinks—illustrates just how knowledge is what is sometimes called “socially constructed.”

The notion of “social construction” is the idea—as City College of New York professor Massimo Pigliucci recently put the point—that “no human being, or organized group of human beings, has access to a god’s eye view of the world.” The idea, in other words, is that meaning is—as Canadian philosopher Ian Hacking described the concept in The Social Construction of What?—“the product of historical events, social forces, and ideology.” Or, to put it another way, that we know things because of our culture, or social group: not by means of our own senses and judgment, but through the people around us.

For Pigliucci, this view of how human beings access reality suggests that we ought therefore to rely on a particular epistemic model: rather than one in which each person judges evidence for herself, one in which “many individually biased points of view enter into dialogue with each other, yielding a less (but still) biased outcome.” In other words, we should rely upon diverse points of view, which is one reason why Pigliucci says, for instance, that because of the cognitive limits displayed by individuals, we ought “to work toward increasing diversity in the sciences.” Pigliucci’s reasoning is, of course, also the reasoning behind Grutter: “When universities are granted the freedom to assemble student bodies featuring multiple types of diversity,” wrote defendant Lee Bollinger (then dean of the University of Michigan law school) in an editorial for the Washington Post about the case, “the result is a highly sought-after learning environment that attracts the best students.” “Diversity,” in sum, is a tool to combat our epistemic weaknesses.

“Diversity” is thereby justified by means of a particular vision of epistemology: a particular theory of how people know things. On this theory, we are dependent upon other people in order to know anything. Yet, the very basis of Dick Biondi’s “joke” is that you, yourself, can “fill in” the punchline: it doesn’t take a committee to realize what the missing word at the end of the story is. And what that reality—your ability to furnish the missing word—perhaps illustrates is an epistemic distinction John Maynard Keynes made in his magisterial 1921 work, A Treatise on Probability: a distinction that troubles the epistemology that underlies the concept of “diversity.”

“Now our knowledge,” Keynes writes in chapter two of that work, “seems to be obtained in two ways: directly, as the result of contemplating the objects of acquaintance; and indirectly, by argument” (italics in original). What Keynes is proposing, in other words, is an epistemic division between two ways of knowing—one of them being much like the epistemic model described by Fish or Pigliucci or Bollinger. As Keynes says, “it is usually agreed that we do not have direct knowledge” of such things as “the law of gravity … the cure for phthisis … [or] the contents of Bradshaw”—things like these are known only through chains of reasoning, rather than direct experience. To know items like these, that is, we have to have undergone a kind of socialization, otherwise known as education. We are dependent on other people to know those things.

Yet, as Keynes also recognizes, there is another means of knowing: “From an acquaintance with a sensation of yellow,” the British economist and thinker wrote, “I can pass directly to a knowledge of the proposition ‘I have a sensation of yellow.’” In this epistemic model, human beings can know things by immediate apprehension—the chief example of this form of knowing being, as Keynes describes, our own senses. What Keynes says, in short, is that people can know things in more than one way: one way through other people, yes, as Fish et al. say—but also through our own experience.

Or—to put the point differently—Keynes has a “diverse” epistemology. That would, at least superficially, seem to make Keynes’ argument a support for the theory of “diversity”: after all, he is showing how people can know things differently, which would appear to assist Lee Bollinger and Massimo Pigliucci’s argument for diversity in education. If people can know things in different ways, it would then appear necessary to gather more, and different, kinds of people in order to know anything. But just saying so exposes the weakness at the heart of Bollinger and Pigliucci’s ideal of “diversity.”

Whereas Keynes has a “diverse” epistemology, in short, Bollinger and Pigliucci do not: in their conception, human beings can only know things in one way. That is the way that Keynes called “indirect”: through argumentation and persuasion—or, as it’s sometimes put, “social construction.” In other words, the defenders of “diversity” have a rather monolithic epistemology, which is why Fish, for instance, once attacked the view that it is possible to “survey the world in a manner free of assumptions about what it is like and then, from that … disinterested position, pick out the set of reasons that will be adequate to its description.” If such a thing were possible, after all, it would be possible to experience a direct encounter with the world—which “diversity” enthusiasts like Fish deny is possible: Fish says, for instance, that “the rhetoric of disinterested inquiry … is in fact”—just how he knows this is unclear—“a very interested assertion of the superiority of one set of beliefs.” In other words, any epistemological view other than their own is merely a deception.

Perhaps though this is all just one of the purest cases of an “academic” dispute: eggheads arguing, as the phrase goes, about how many angels can dance on the head of a pin. At least, until one realizes that the nearly-undisputed triumph of the epistemology retailed by Fish and company also has certain quite-real consequences. For example, as the case of Bollinger demonstrates, although the “socially-constructed” epistemology is an excellent means, as has been demonstrated over the past several decades, of—in the words of Fish’s fellow literary critic Walter Benn Michaels—“battling over what skin color the rich kids should have,” it isn’t so great for, say, dividing up legislative districts: a question that, as Elizabeth Kolbert noted last year in The New Yorker, “may simply be mathematical.” But if so, that presents a problem for those who think of their epistemological views as serving a political cause.

Mathematics, after all, is famously not something that can be understood “culturally”; it is, as Keynes—and before him, a silly fellow named Plato—knew, perhaps the foremost example of the sort of knowing demonstrated by Dick Biondi’s joke. Mathematics, in other words, is the chief example of something known directly: when you understand something in mathematics, you understand it either immediately—or not at all. Which, after all, is the significance of Kolbert’s remarks: to say that redistricting—perhaps the most political act of all in a democracy—is primarily a mathematical operation is to say that to understand redistricting, you have to understand directly the mathematics of the operation. Yet if the “diversity” promoters are correct, then only their epistemology has any legitimacy: an epistemology that a priori prevents anyone from sensibly discussing redistricting. In other words, it’s precisely the epistemological blind spots promoted by the ostensibly “politically progressive” promoters of “diversity” that allow the current American establishment to ignore the actual interests of actual people.

Which, one supposes, may be the real joke.


The Color of Water

No one gets lucky til luck comes along.
Eric Clapton
     “It’s In The Way That You Use It”
     Theme Song for The Color of Money (1986).



The greenish tint to the Olympic pool wasn’t the only thing fishy about the water in Rio last month: a “series of recent reports,” Patrick Redford of Deadspin noted, “assert that there was a current in the pool at the Rio Olympics’ Aquatic Stadium that might have skewed the results.” Or—to make the point clear in a way the pool wasn’t—the water in the pool flowed in such a way that it gave an advantage to swimmers starting in certain lanes: as Redford writes, “swimmers in lanes 5 through 8 had a marked advantage over racers in lanes 1 through 4.” According, however, to ESPN’s Michael Wilbon—a noted African-American sportswriter—such results shouldn’t be of concern to people of color: “Advanced analytics,” Wilbon wrote this past May, “and black folks hardly ever mix.” To Wilbon, the rise of statistical analysis poses a threat to African-Americans. But Wilbon is wrong: in reality, the “hidden current” in American life holding back black Americans, and indeed all Americans, is not analytics—it’s the suspicions of supposedly “progressive” people like Michael Wilbon.

The thesis of Wilbon’s piece, “Mission Impossible: African-Americans and Analytics”—published on ESPN’s race-themed website, The Undefeated—was that black people have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” Whereas, in an earlier age, white people like Thomas Jefferson questioned black people’s literacy, nowadays, it seems, it’s ok to question their ability to understand mathematics—a “ridiculous” (according to The Guardian’s Dave Schilling, another black journalist) stereotype that Wilbon attempts to paint as, somehow, politically progressive: Wilbon, that is, excuses his absurd beliefs on the basis that analytics “seems to be a new safe haven for a new ‘Old Boy Network’ of Ivy Leaguers who can hire each other and justify passing on people not given to their analytic philosophies.” Yet, while Wilbon isn’t alone in his distrust of analytics, it’s actually just that “philosophy” that may hold the most promise for political progress—not only for African-Americans, but every American.

Wilbon’s argument, after all, depends on a common thesis heard in the classrooms of American humanities departments: when Wilbon says the “greater the dependence on the numbers, the more challenged people are to tell (or understand) the narrative without them,” he is echoing a common argument deployed every semester in university seminar rooms throughout the United States. Wilbon is, in other words, merely repeating the familiar contention, by now essentially an article of faith within the halls of the humanities, that without a framework—or (as it’s sometimes called), “paradigm”—raw statistics are meaningless: the doctrine sometimes known as “social constructionism.”

That argument is, as nearly everyone who has taken a class in the departments of the humanities in the past several generations knows, that “evidence” only points in a certain direction once certain baseline axioms are assumed. (An argument first put about, by the way, by the physician Galen in the second century AD.) As American literary critic Stanley Fish once rehearsed the argument in the pages of the New York Times, according to its terms investigators “do not survey the world in a manner free of assumptions about what it is like and then, from that (impossible) disinterested position, pick out the set of reasons that will be adequate to its description.” Instead, Fish went on, researchers “begin with the assumption (an act of faith) that the world is an object capable of being described … and they then develop procedures … that yield results, and they call those results reasons for concluding this or that.” According to both Wilbon and Fish, in other words, the answers people find depend not on the structure of reality itself, but on the baseline assumptions the researcher begins with: what matters is not the raw numbers, but the contexts within which the numbers are interpreted.

What’s important, Wilbon is saying, is the “narrative,” not the numbers: “Imagine,” Wilbon says, “something as pedestrian as home runs and runs batted in adequately explaining [Babe] Ruth’s overall impact” on the sport of baseball. Wilbon’s point is that a knowledge of Ruth’s statistics won’t tell you about the hot dogs the great baseball player ate during games, or the famous “called shot” during the 1932 World Series—what he is arguing is that statistics only point toward reality: they aren’t reality itself. Numbers, by themselves, don’t say anything about reality; they are only a tool with which to access reality, and by no means the only tool available: in one of Wilbon’s examples, Steph Curry, the great guard for the NBA’s Golden State Warriors, knew he shot better from the corners—an intuition that later statistical analysis bore out. Wilbon’s point is that both Curry’s intuition and statistical analysis told the same story, implying that there’s no fundamental reason to favor one road to truth over the other.

In a sense, to be sure, Wilbon is right: statistical analysis is merely a tool for getting at reality, not reality itself, and certainly other tools are available. Yet, it’s also true that, as statistician and science fiction author Michael F. Flynn has pointed out, astronomy—now accounted one of the “hardest” of physical sciences, because it deals with obviously real physical objects in space—was once not an observational science, but instead a mathematical one: in ancient times, Chinese astronomers were called “calendar-makers,” and a European astronomer was called a mathematicus. As Flynn says, “astronomy was not about making physical discoveries about physical bodies in the sky”—it was instead “a specialized branch of mathematics for making predictions about sky events.” Without telescopes, in other words, astronomers did not know what, exactly, say, the planet Mars was: all they could do was make predictions, based on mathematical analysis, about what part of the sky it might appear in next—predictions that, over the centuries, became perhaps-startlingly accurate. But as a proto-Wilbon might have said in (for instance) the year 1500, such astronomers had no more direct knowledge of what Mars is than a kindergartner has of the workings of the Federal Reserve.

In the same fashion, Wilbon might point out about the swimming events in Rio, there is no direct evidence of a current in the Olympic pool: the researchers who assert that there was such a current base their arguments on statistical evidence of the races, not examination of the conditions of the pool. Yet the evidence for the existence of a current is pretty persuasive: as the Wall Street Journal reported, fifteen of the sixteen swimmers, both men and women, who swam in the 50-meter freestyle event finals—the one event most susceptible to the influence of a current, because swimmers only swim one length of the pool in a single direction—swam in lanes 4 through 8, and swimmers who swam in outside lanes in early heats and inside lanes in later heats actually got slower. (A phenomenon virtually unheard of in top-level events like the Olympics.) Barry Revzin, of the website Swim Swam, found that a given Olympic swimmer picked up “a 0.2 percent advantage for each lane … closer to [lane] 8,” Deadspin’s Redford reported, and while that could easily seem “inconsequentially small,” Redford remarked, “it’s worth pointing out that the winner in the women’s 50 meter freestyle only beat the sixth-place finisher by 0.12 seconds.” It’s a very small advantage, in other words, which is to say that it’s very difficult to detect—except by means of the very same statistical analysis distrusted by Wilbon. But although it is a seemingly small advantage, it is enough to determine the winner of the gold medal. Wilbon, in other words, is quite right to say that statistical evidence is not a direct transcript of reality—he’s wrong, however, if he is arguing that statistical analysis ought to be ignored.
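The arithmetic behind Redford's point is easy to check. Here is a minimal sketch; the 24-second base time is my own illustrative assumption (roughly a world-class women's 50-meter freestyle time), not a figure from the report:

```python
# Rough check of Revzin's estimate: a 0.2% advantage per lane closer to
# lane 8, compared against the 0.12-second winning margin Redford cites.

def lane_edge(base_time_s, pct_per_lane, lanes_closer):
    """Seconds saved by a swimmer `lanes_closer` lanes nearer to lane 8."""
    return base_time_s * (pct_per_lane / 100.0) * lanes_closer

base = 24.0                      # assumed 50m freestyle time, in seconds
edge = lane_edge(base, 0.2, 4)   # lane 8 versus lane 4

print(f"lane 8 vs lane 4 edge: {edge:.3f} s")      # ~0.192 s
print(f"exceeds 0.12 s winning margin: {edge > 0.12}")
```

On these assumptions, a swimmer four lanes closer to lane 8 gains about 0.19 seconds: larger than the entire gap between the gold medalist and the sixth-place finisher.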

To be fair, Wilbon is not arguing exactly that: “an entire group of people,” he says, “can’t simply refuse to participate in something as important as this new phenomenon.” Yet Wilbon is worried about the growth of statistical analysis because he views it as a possible means for excluding black people. Because, as Wilbon writes, it’s “the emotional appeal,” rather than the “intellect[ual]” appeal, that “resonates with black people”—a statement that, had a white journalist written it, would immediately cause a protest—Wilbon worries that, in a sports future run “by white, analytics-driven executives,” black people will be even further on the outside looking in than they already are. (And that’s pretty far outside: as Wilbon notes, “Nate McMillan, an old-school, pre-analytics player/coach, who was handpicked by old-school, pre-analytics player/coach Larry Bird in Indiana, is the only black coach hired this offseason.”) Wilbon’s implied stance, in other words—implied because he nowhere explicitly says so—is that since statistical evidence cannot be taken at face value, but only through screens and filters that owe more to culture than to the nature of reality itself, the promise (and premise) of statistical analysis could be seen as a kind of ruse designed to perpetuate white dominance at the highest levels of the sport.

Yet there are at least two objections to make to Wilbon’s argument: the first being the empirical observation that in U.S. Supreme Court cases like McCleskey v. Kemp, for instance (in which the petitioner argued that, according to statistical analysis, murderers of white people in Georgia were far more likely to receive the death penalty than murderers of black people), or Teamsters v. United States (in which the Court ruled, on the basis of statistical evidence, that the Teamsters union had “engaged in a systemwide practice of minority discrimination”), statistical analysis has been advanced to demonstrate the reality of racial bias. (A demonstration against which, by the way, conservatives have time and again countered with arguments against the reality of statistical analysis that essentially mirror Wilbon’s.) To think, then, that statistical analysis could be inherently biased against black people, as Wilbon appears to imply, is empirically nonsense: it’s arguable, in fact, that statistical analysis of the sort pioneered by sociologists like Gunnar Myrdal has done at least as much as (say) classes on African-American literature, if not more, to combat racial discrimination.

The more serious issue, however, is a logical objection: Wilbon’s two assertions are in conflict with each other. To reach his conclusions, Wilbon ignores (like others who make similar arguments) the implications of his own reasoning: statistics ought to be ignored, he says, because only “narrative” can grant meaning to otherwise meaningless numbers—but, if it is so that numbers themselves cannot “mean” without a framework to grant them meaning, then they cannot pose the threat that Wilbon says they might. In other words, if Wilbon is right that statistical analysis is biased against black people, then it means that numbers do have meaning in themselves, while conversely if numbers can only be interpreted within a framework, then they cannot be inherently biased against black people. By Wilbon’s own account, in other words, nothing about statistical analysis implies that such analysis can only be pursued by white people, nor could the numbers themselves demand only a single (oppressive) use—because if that were so, then numbers would be capable of providing their own interpretive framework. Wilbon cannot logically advance both propositions simultaneously.

That doesn’t mean, however, that Wilbon’s argument—the argument, it ought to be noted, of many who think of themselves as politically “progressive”—is not having an effect: it’s possible, I think, that the relative success of this argument is precisely what is causing Americans to ignore a “hidden current” in American life. That current could be described by an “analytical” observation made by professors Sven Steinmo and Jon Watts some two decades ago: “No other democratic system in the world requires support of 60% of legislators to pass government policy”—an observation that, in turn, may be linked to the observable reality that, as political scientists Frances E. Lee and Bruce Oppenheimer have noted, “less populous states consistently receive more federal funding than states with more people.” Understanding the impact of these two observations, and their effects on each other, would, I suspect, throw a great deal of light on the reality of American lives, white and black—yet it’s precisely the sort of reflection that the “social construction” dogma advanced by Wilbon and company appears specifically designed to avoid. While to many, even now, the arguments for “social construction” might appear utterly liberatory, it’s possible to tell a tale in which just such doctrines are the tools of oppression today.

Such an account would be, however—I suppose Michael Wilbon or Stanley Fish might tell us—simply a story about the one that got away.

Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.


There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view of American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that, many have thought, has long been due in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state elects a number of “electors” equal to that state’s representation in Congress: the number of representatives each state is entitled to by population, plus its two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives introduced a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president without a majority of the votes. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (in perhaps the most exciting game in Series history—the only one that has ever ended with a ninth-inning, walk-off home run) and thus won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million votes. But Harrison not only won more states than Cleveland; he also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues, Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that what matters is not the total number of popular votes, but rather how those votes are spread around the country.

In this argument, then, games are to states just as runs are to votes. It’s an analogy that has an easy appeal to it: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire analogy) and so that understanding appears to transfer easily to the matter of presidential elections. Yet, while clever, in fact most people do not understand the purpose of the World Series: although people think it is the task of the Series to identify the best baseball team in the major leagues, that is not what it is designed to do. It is not the purpose of the World Series to discover the best team in baseball, but instead to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams had to play each other. And the Giants saw no reason to do so.

As legendary Giants manager, John McGraw, said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance to win a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” But just how much luck is involved is something the average fan has never considered—though former Caltech physicist Leonard Mlodinow has. In his book The Drunkard’s Walk: How Randomness Rules Our Lives, Mlodinow writes that, just by doing the math, it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: suppose that, for every game, we roll a one-hundred-sided die to determine whether the team with the 55 percent edge wins. If we do that four times, there is still a good chance that the inferior team remains in the series: that is, that the superior team has not yet won four games. In fact, there is a real possibility that the inferior team turns the tables and sweeps the superior one instead. Seven games, in short, is simply not enough to demonstrate conclusively that one team is better than another.
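Mlodinow’s die-rolling picture is easy to turn into a quick simulation. The sketch below (my own illustration; the function name is invented, not Mlodinow’s) plays a large number of best-of-seven series between a team that wins each game 55 percent of the time and its weaker opponent, and counts how often the weaker team takes the series:

```python
import random

def simulate_series(p_strong, n_games=7, trials=100_000, seed=42):
    """Estimate how often the weaker team wins a best-of-n series."""
    rng = random.Random(seed)
    need = n_games // 2 + 1          # wins required to take the series
    weaker_titles = 0
    for _ in range(trials):
        strong = weak = 0
        while strong < need and weak < need:
            if rng.random() < p_strong:   # the "die roll" for one game
                strong += 1
            else:
                weak += 1
        if weak == need:
            weaker_titles += 1
    return weaker_titles / trials

# With a 55-45 edge, the weaker team wins roughly 39 percent of series.
print(simulate_series(0.55))
```

Run with a two-thirds edge instead, the same simulation lands near Mlodinow’s other figure: the weaker team still wins about one series in five.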

In fact, in order to eliminate randomness as much as possible—that is, to make it as likely as possible that the better team wins—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even when one team has a two-thirds chance of winning any given game, it takes 23 games to push the weaker team’s chance of taking the series below 5 percent—and even then, a chance would remain. Mathematically, then, winning a seven-game series is close to meaningless: too few games have been played to rule out the possibility that a lesser team has beaten a better one.

Just how mathematically meaningless a seven-game series is can be seen when the two teams are more evenly matched: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on to say, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emphasis added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which is a point that true baseball professionals have always acknowledged: the World Series is not a competition, but an exhibition.
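Mlodinow’s figures can also be checked exactly, without simulation: the winner of a best-of-n series is simply whichever team would win a majority of all n games if all were played, so the weaker team’s chance is a binomial tail sum. A minimal sketch (the function names are mine, not Mlodinow’s):

```python
from math import comb

def weaker_wins_series(p_strong, n_games):
    """Exact probability that the weaker team wins a best-of-n series.

    Equivalent to the weaker team winning a majority of n games, since
    a best-of-n series has the same winner as the full n games would.
    """
    need = n_games // 2 + 1
    q = 1 - p_strong                     # weaker team's per-game chance
    return sum(comb(n_games, k) * q**k * p_strong**(n_games - k)
               for k in range(need, n_games + 1))

def shortest_significant_series(p_strong, alpha=0.05):
    """Smallest odd series length where the weaker team wins < alpha."""
    n = 1
    while weaker_wins_series(p_strong, n) >= alpha:
        n += 2
    return n

print(weaker_wins_series(0.55, 7))       # about 0.39, as Mlodinow says
print(shortest_significant_series(2/3))  # 23
```

Running `shortest_significant_series(0.55)` performs the same search for the 55-45 case, where Mlodinow reports a best-of-269 series.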

What the New York Giants were saying in 1904, then—and Mlodinow more recently—is that establishing the real worth of something requires a lot of trials: many, many repetitions. That is something all of us ought to know from experience: to learn anything, for instance, requires a lot of practice. (Even if the famous “10,000 hour rule” that New Yorker writer Malcolm Gladwell concocted for his book Outliers: The Story of Success has been complicated by the scholars who did the original research on which Gladwell based his claims.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, in the more natural language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite of the reason Natapoff brought it up. If the Electoral College is like the World Series, and the World Series is designed not to find the best team in baseball but merely to stage an exhibition, then the Electoral College is not a serious attempt to find the best president—because what the Law advises is that, to obtain a more reliable result, it is better to gather more voters.
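The law can be watched in action with a few lines of code. In this sketch (an illustration of the principle only, not of any particular election), we estimate the frequency of an event that truly occurs 55 percent of the time, using larger and larger samples:

```python
import random

def observed_frequency(p, n, seed=0):
    """Fraction of n independent trials, each succeeding with probability p."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# The estimate tightens toward the true value 0.55 as n grows.
for n in (10, 1_000, 100_000):
    print(n, observed_frequency(0.55, n))
```

Ten trials can easily miss by ten points or more; a hundred thousand will typically land within a few tenths of a percent. That is the sense in which more voters, like more games, yield a more reliable verdict.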

Yet the currently fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) a vote is cast, rather than whether it was cast? To many, if not most, literary intellectuals, the answer plainly favors the former over the latter—but as any statistician will tell you, any run of luck can continue for quite a bit longer than the average person might expect. (That is one reason why it takes at least 23 games to wring the randomness out of a series even when one team is markedly superior.) Even so, it remains difficult to believe—as many today, both within and without the academy, apparently do—that the umpire can go on calling every pitch a strike.


Beams of Enlightenment

And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?
Matthew 7:3


“Do you know what Pied Piper’s product is?” the CEO of the company, Jack Barker, asks his CTO, Richard, during a scene in HBO’s series Silicon Valley—while two horses do, in the background, what Jack is (metaphorically) doing to Richard in the foreground. Jack is the experienced hand brought in to run the company Richard founded as a young programmer; Richard, for his part, is so ingenuous that Jack has to explain to him the real point of everything they are doing: “The product isn’t the platform, and the product isn’t your algorithm either, and it’s not even the software. … Pied Piper’s product is its stock. Whatever makes the value of that stock go up, that is what we’re going to make.” With that, the television show effectively dramatizes the case many on the liberal left have been trying to make for decades: that the United States is in trouble because of something called “financialization”—or what Kevin Phillips (author of 1969’s The Emerging Republican Majority) has called, in one of the first uses of the term, “a prolonged split between the divergent real and financial economies.” Yet few on that side of the political aisle have considered how their own arguments about an entirely different subject are, more or less, the same as those powering “financialization”—how, in other words, the argument that has enhanced Wall Street at the expense of Main Street—Eugene Fama’s “efficient market hypothesis”—is precisely the same as the liberal left’s argument against the SAT.

That the United States has turned from an economy largely centered on manufacturing to one centered on services, especially financial ones, can be measured by such data as the fact that the fraction of America’s Gross Domestic Product consumed by the financial industry is now, according to economist Thomas Philippon of New York University, “around 9%,” while just more than a century ago it was under two percent. Most appear to agree that this is a bad thing: “Our economic illness has a name: financialization,” Time magazine columnist Rana Foroohar argues in her Makers and Takers: The Rise of Finance and the Fall of American Business, while Bruce Bartlett, who worked in both the Reagan and George H.W. Bush Administrations (which is to say that he is not exactly the stereotypical lefty), claimed in the New York Times in 2013 that “[f]inancialization is also an important factor in the growth of income inequality.” In a 2007 Bloomberg News article, Lawrence E. Mitchell—a professor of law at George Washington Law School—denounced how “stock market considerations” have come “to trump those that improve the actual workings of a business.” The consensus view appears to be that it is bad for a business to be, as Jack is on Silicon Valley, more concerned with its stock price than with what it actually does.

Still, if it is such a bad idea, why do companies do it? One possible answer might be found in the timing, which seems to have happened some time after the 1960s: as John Bellamy Foster put it in a 2007 Monthly Review article entitled “The Financialization of Capitalism,” the “fundamental issue of a gravitational shift toward finance in capitalism as a whole … has been around since the late 1960s.” Undoubtedly, that turn was conditioned by numerous historical forces, but it’s also true that it was during the 1960s that the “efficient market hypothesis,” pioneered above all by the research of Eugene Fama of the University of Chicago, became the dominant intellectual force in the study of economics and in business schools—the incubators of the corporate leaders of today. And Fama’s argument was—and is—an intellectual cruise missile aimed at the very idea that the value of a company might be separate from its stock price.

As I have discussed previously (“Lions For Lambs”), Eugene Fama’s 1965 paper “The Behavior of Stock Market Prices” demonstrated that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers”—or in other words, that there was no rational way to beat the stock market. Also known as the “efficient market hypothesis,” the idea is largely that—as Fama’s intellectual comrade Burton Malkiel observed in his book, A Random Walk Down Wall Street (which has gone through more than five editions since its first publication in 1973), “the evidence points mainly toward the efficiency of the market in adjusting so rapidly to new information that it is impossible to devise successful trading strategies on the basis of such news announcements.” Translated, that means that it’s essentially impossible to do better than the market by paying close attention to what investors call a company’s “fundamental value.”
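Fama’s image of “a series of cumulated random numbers” is easy to make concrete. The sketch below (my own illustration, with invented names, not anything drawn from Fama’s paper) builds a toy “price” series by summing independent random shocks; by construction, nothing in the path’s history helps predict its next move:

```python
import random

def random_walk_price(n_steps, start=100.0, vol=1.0, seed=7):
    """A toy price series: the running sum of independent random shocks."""
    rng = random.Random(seed)
    price = start
    path = [price]
    for _ in range(n_steps):
        # Each day's move is pure noise, independent of everything prior.
        price += rng.gauss(0, vol)
        path.append(price)
    return path

# One simulated "year" of 250 trading days.
series = random_walk_price(250)
```

Charts of such walks are notoriously hard to tell apart from real stock charts, which is Malkiel’s point: if prices already impound all available information, the residual movement looks like noise.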

Yet, if there is never a divergence between a company’s real worth and the price of its stock, then there is no way to measure a company’s real worth other than by its stock price. From Fama’s or Malkiel’s perspective, “stock market considerations” simply are “the actual workings of a business.” They argued against the very idea that there could even be such a distinction: that there could be something about a company that is not already reflected in its price.

To a lot of educated people on the liberal-left, of course, such an argument will affirm many of their prejudices about the evils of usury and the like. At the same time, however, many of them might be taken aback if it were pointed out that Eugene Fama’s case against fundamental economic analysis is the same as the case many educators make, when it comes to college admissions, against the SAT. Take, for example, a 1993 argument made in The Atlantic by Stanley Fish, former chairman of the English Department at Duke University and dean of the humanities at the University of Illinois at Chicago.

In “Reverse Racism, or, How the Pot Got to Call the Kettle Black,” the Miltonist argued against noted conservative Dinesh D’Souza’s contention, in 1991’s Illiberal Education, that affirmative-action in college admissions tends “‘to depreciate the importance of merit criteria.’” The evidence that D’Souza used to advance that thesis is, Fish tells us, the “many examples of white or Asian students denied admission to colleges and universities even though their SAT scores were higher than the scores of some others—often African-Americans—who were admitted to the same institution.” But, Fish says, the SAT has been attacked as a means of college admissions for decades.

Fish cites David Owen’s None of the Above: Behind the Myth of Scholastic Aptitude as an example. There, Owen says that the

correlation between SAT scores and college grades … is lower than the correlation between height and weight; in other words, you would have a better chance of predicting a person’s height by looking at his weight than you would of predicting his freshman grades by looking only at his SAT scores.

As Fish intimates, most educational professionals today would agree that the way to judge a student is not by SAT score, but by GPA—grade point average.

To judge students by grade point average, however, is just what the SAT was designed to avoid: as Nicholas Lemann describes in copious detail in The Big Test: The Secret History of the American Meritocracy, the whole purpose of the SAT was to discover students whose talents couldn’t be discerned by any other method. The premise of the test’s designers, in short, was that students possessed, as Lemann says, “innate abilities”—and that the SAT could suss those abilities out. What the SAT was designed to do, then, was to find those students stuck in, say, some lethargic, claustrophobic small town whose public schools could not, perhaps, do enough for them intellectually and who stagnated as a result—and put those previously-unknown abilities to work in the service of the nation.

Now, as Lemann remarked in an interview with PBS’ Frontline, James Conant (president of Harvard and chief proponent of the SAT at the time it became prominent in American life, in the early 1950s) “believed that you would look out across America and you would find just out in the middle of nowhere, springing up from the good American soil, these very intelligent, talented people”—if, that is, America adopted the SAT to do the “looking out.” The SAT would enable American universities to find students that grade point averages did not—a premise that, necessarily, entails believing that a student’s worth could be more than (and thus distinguishable from) her GPA. That’s what, after all, “aptitude” means: “potential ability,” not “proven ability.” That’s why Conant sometimes asked those constructing the test, “Are you sure this is a pure aptitude test, pure intelligence? That’s what I want to measure, because that is the way I think we can give poor boys the best chance and take away the advantage of rich boys.” The Educational Testing Service (the company that administered the SAT), in sum, believed that there could be something about a student that was not reflected in her grades.

To use an intellectual’s term, that means that the argument against the SAT is isomorphic with the “efficient market hypothesis.” In biology, two structures are isomorphic with each other if they share a form or structure: a human eye is isomorphic with an insect’s eye because they both take in visual information and transmit it to the brain, even though they have different origins. Hence, as biologist Stephen Jay Gould once remarked, two arguments are isomorphic if they are “structurally similar point for point, even though the subject matter differs.” Just as Eugene Fama argued that a company could not be valued other than by its stock price—which has had the effective consequence of meaning that a company’s product is now not whatever superficial business it is supposedly in, but its stock price—educational professionals have argued that the only way to measure a student’s value is to look at her grades.

Now, does that mean that the “financialization” of the United States’ economy is the fault of the liberal left, instead of the usual conservative suspects? Or, to put it more provocatively, is the rise of the 1% at the expense of the 99% the fault of affirmative action? The short answer, obviously, is that I don’t have the slightest idea. (But then, neither do you.) What it does mean, I think, is that at least some of what has happened to the United States in the past several decades is due to patterns of thought common to both sides of the American political congregation: most perniciously, the related notions that all value is always and everywhere visible, and that value requires no time or patience to manifest itself. And at least some of the damage done by those notions is owed to the efforts of people who meant well. Granted, it is always hardest to admit wrongdoing when not only were your intentions pure but even the immediate effects were good—yet such an admission is also very much more powerful. The point, anyway, is that if you are trying to persuade, it is probably best to avoid that other four-lettered word associated with horses.



Double Vision

Ill deeds are doubled with an evil word.
The Comedy of Errors. III, ii

The century just past had been both one of the most violent ever recorded—and also perhaps the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, putting in question truths that had long been established. Under these circumstances, a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically-efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously-neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that thus allowed new pieces of writing and new ideas to flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of writers who probably would not otherwise have been studied—along with a lot of people who might not otherwise have done the studying—entered the curriculum; had the change of mind Fish’s remark signified not occurred, neither group might have become standard in American classrooms.

I put things in the somewhat cumbersome way I do in the last sentence because, of course, Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist James Baldwin similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that the American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, somewhere between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion that there are two distinct “realms” of truth is thus a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as during Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in his other studies through the rest of the curriculum—an experience that he was not, as the experience of Averroes before him had demonstrated, alone in having. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” And even more: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth.” Or, in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commenter, Vincent P. Benitez, has put it, either way Siger’s work highlighted just how the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not for F. Scott Fitzgerald later—posed a problem for the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it is the one Aristotle himself said was the most essential to any rational thought whatever. That rule is usually known as the Law of Non-Contradiction, conventionally placed second among the three classical laws of thought of the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—either A or not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commentators, Avicenna (Ibn Sina), put it in one of the rule’s most famous formulations—it goes like this: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.
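The rule is not merely venerable; it is a theorem. In a modern proof assistant it can be stated and proved in one line—here sketched in Lean 4 syntax (the theorem name is mine):

```lean
-- The Law of Non-Contradiction: no proposition is both true and false.
-- ¬(P ∧ ¬P) unfolds to (P ∧ ¬P) → False, so given a proof h of the
-- conjunction, applying its second half (¬P) to its first half (P)
-- yields the contradiction directly.
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1
```

Notably, the proof is constructive: unlike the Law of the Excluded Middle, which requires a classical axiom in such systems, non-contradiction needs nothing beyond the basic rules of implication.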

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not against torture, aside from members of the Bush Administration legal team (and, for that matter, nearly every regime known to history) and viewers of the television show 24? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially to anyone unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing so on the sly: each car’s software could detect when the engine was being tested by government monitors and, if so, reduce the pollutants it was emitting. Just over six months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me about this story, in relation to all this talk about academics and monks, is a curious article put out by The New Yorker in October of 2015. In the piece, entitled “An Engineering Theory of the Volkswagen Scandal,” Paul Kedrosky—perhaps significantly, “a venture investor and a former equity analyst”—suggests that these events may not have been the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s shades of gray.

Kedrosky takes his theory from a book by the sociologist Diane Vaughan about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched shuttles under colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the Challenger blew into thousands of pieces moments after liftoff. Kedrosky’s analogy is that the Volkswagen scandal may have developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s argument is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story; this one, a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely-acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara it is entirely plausible to think of the financial meltdown of 2007–8—which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) and two arms of the United States government (Fannie Mae and Freddie Mac), and is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments—as essentially analogous to Diane Vaughan’s view of the Challenger disaster, or to Kedrosky’s view of Volkswagen’s cavalier attitude toward environmental regulation. To put it another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions and in Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes them modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck; the “Schrödinger’s Cat” thought experiment, in which the cat is neither dead nor alive until the box is opened; the fact that light is both a wave and a particle—designed to show how out-of-date the Law of Noncontradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky’s or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence, in the same article of Arthur Little’s I previously cited. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remarks about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of this article of Little’s, written thirty years before his own remarks.

But then again, if I have no obligation to be right, what does it matter?

This Doubtful Strife

Let me be umpire in this doubtful strife.
—Henry VI, Part 1. Act IV, Scene 1.


“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part ways last week: the former NFL referee, with 24 years of on-field experience, was never able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another milestone on the path to computerized refereeing—a march that, she says, passed another marker just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” Of that release, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that is a pretty contestable proposition: it is more than questionable, in other words, whether the humanities’ opposition to the sciences has had beneficial rather than pernicious effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But a real look around campus should make it less confounding to think of the humanities as something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it so that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” but the point cuts across national boundaries; Pinsker also reports that the economist Gregory Clark of the University of California, Davis, found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been regarded as radical political movements, they could also be understood as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as the philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of engaging with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” and the like have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. Whatever the underlying substrate, it is simply the case that those at the top of a pyramid will spend proportionally less than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Even were Romney to become even more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, dividing the money more equally generates more economic activity—and hence the more equal society is also the more prosperous society.
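Stiglitz’s arithmetic really is remedial, and it can be checked in a few lines. The sketch below uses the essay’s figures ($21.7 million, 500 jobs at $43,400); the two spending rates are hypothetical assumptions chosen only to make the mechanism visible, not figures from Stiglitz.

```python
# A toy check of Stiglitz's arithmetic. The income and job figures
# come from the essay; the spending rates (10% at the top, 95% for
# median earners) are hypothetical assumptions for illustration only.
income = 21_700_000               # Romney's 2010 income, per the essay

# Case 1: the whole sum stays with one household at the top,
# which spends only a small share of it (assumed 10%).
spent_concentrated = income * 10 // 100

# Case 2: the same sum is divided into 500 jobs, each salary spent
# almost entirely (assumed 95%).
jobs = 500
salary = income // jobs           # $43,400 apiece, as Stiglitz says
spent_divided = salary * jobs * 95 // 100

print(salary)                     # 43400
print(spent_concentrated)         # 2170000
print(spent_divided)              # 20615000
```

Under any plausible pair of rates the ordering is the same: the divided sum generates several times the spending of the concentrated one, which is the whole of Stiglitz’s point.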

Still, to understand Stiglitz’s point requires following a sequence of connected ideas—among them a basic understanding of mathematics, a form of thinking that does not care who thinks it. In that sense, the humanities’ opposition to scientific, mathematical thought takes on rather a different cast than it is usually given. By training its students to ignore the evidence—and, more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st-century America. Even worse, they fail to give students a means of combatting that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

Art Will Not Save You—And Neither Will Stanley


But I was lucky, and that, I believe, made all the difference.
—Stanley Fish. “My Life Report” 31 October 2011, New York Times. 


Pfc. Bowe Bergdahl, United States Army, is the subject of the new season of Serial, the podcast from the producers of This American Life that tells “One story. Week by week.” as the advertising tagline has it. Serial is doing a season about Bergdahl because of what he chose to do on the night of 30 June 2009: as the show reports, that night he walked off his “small outpost in eastern Afghanistan and into hostile territory,” where he was captured by Taliban guerrillas and held prisoner for nearly five years. Bergdahl’s actions have led some to call him a deserter and a traitor; as a result of leaving his unit, Bergdahl faces a life sentence from a military court. But the line Bergdahl crossed when he stepped beyond the concertina wire and into the desert of Paktika Province was far greater than the line between a loyal soldier and a criminal. When Bowe Bergdahl wandered into the wilderness, he also crossed the line between the sciences and the humanities—and demonstrated why the political hopes some people place in the humanities are not only illogical, but arguably holding up actual political progress.

Bergdahl can be said to have crossed that line because what happens to him when he is tried by a military court will likely turn on what the intent behind his act was: in legal terms, this is known as mens rea, Latin for “guilty mind.” Intent is one of the necessary components prosecutors must prove to convict Bergdahl of desertion: according to Article 85 of the Uniform Code of Military Justice, to be convicted of desertion Bergdahl must be shown to have had the “intent to remain away” from his unit “permanently.” It’s this matter of intent that demonstrates the difference between the humanities and the sciences.

The old devil, Stanley Fish, once demonstrated that border in an essay in the New York Times designed to explain what it is that literary critics, and other people who engage in interpretation, do, and how it differs from other lines of work:

Suppose you’re looking at a rock formation and see in it what seems to be the word ‘help.’ You look more closely and decide that, no, what you’re seeing is an effect of erosion, random marks that just happen to resemble an English word. The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

To put it another way, matters of interpretation concern agents who possess intent: any other kind of discussion is of no concern to the humanities. Conversely, the sciences can be said to concern all those things not produced by an agent, or more specifically an agent who intended to convey something to some other agent.

It’s a line that seems clear enough, even in what might be marginal cases: when a beaver builds a dam, surely he intends to build that dam, but it also seems inarguable that the beaver intends nothing more to be conveyed to other beavers than, “here is my dam.” More questionable cases might be when, say, a bird or some other animal performs a “mating dance”: surely the bird intends his beloved to respond, but still it would seem ludicrous to put a scholar of, say, Jane Austen’s novels to the task of recovering the bird’s message. That would certainly be overkill.

Yes yes, you will impatiently say, but what has that to do with Bergdahl? The answer, I think, might be this: if Bergdahl’s lawyer had a scientific, instead of a humanistic, sort of mind, he might ask how many soldiers were stationed in Afghanistan during Bergdahl’s time there, and how many overall. The reason a scientist would ask that question about, say, a flock of birds he was studying is because, to a scientist, the overall numbers matter. The reason why they matter demonstrates just what the difference between science and the humanities is, but also why the faith some place in the political utility of the humanities is ridiculous.

The reason why the overall numbers of the flock would matter to a scientist is because sample size matters: a behavior exhibited by one bird in a flock of twelve is probably not as significant as the same behavior exhibited by one bird in a flock of millions. As Nassim Taleb put it in his book Fooled By Randomness, how impressive it is if a monkey has managed to type a verbatim copy of the Iliad “Depends On The Number of Monkeys.” “If there are five monkeys in the game,” Taleb elaborates, “I would be rather impressed with the Iliad writer”—but if, on the other hand, “there are a billion to the power one billion monkeys I would be less impressed.” Or to put it in another context, the “greater the number of businessmen, the greater the likelihood of one of them performing in a stellar manner just by luck.” What matters to a scientist, in other words, isn’t just what a given bird does—it’s how big the flock was in the first place.
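Taleb’s point can be put in a few lines of arithmetic. The sketch below is a hypothetical illustration rather than anything from Taleb’s book: it models a “stellar performance” as a streak of ten successes, each a fair coin flip (both numbers are assumptions chosen for clarity), and shows how the chance that at least one member of a population achieves the streak by luck alone grows with the population’s size.

```python
# Illustrating "it depends on the number of monkeys": the probability
# that at least one of n independent individuals gets a lucky streak.
# The streak length (10) and the 50/50 odds per trial are hypothetical
# assumptions, not Taleb's figures.
p_streak = 0.5 ** 10      # one individual's chance of a 10-for-10 streak

def chance_of_at_least_one(n):
    """Probability that at least one of n independent individuals
    achieves the streak by luck alone: 1 - P(nobody does)."""
    return 1 - (1 - p_streak) ** n

# The same feat goes from remarkable to near-certain as n grows.
for n in (5, 1_000, 1_000_000):
    print(n, round(chance_of_at_least_one(n), 4))
```

With five "monkeys" the streak is genuinely impressive (well under a 1% chance of appearing by luck); with a million, its appearance is a near-certainty and tells you nothing about the individual who produced it.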

To a lawyer, of course, none of that would be significant: the court that tries Bergdahl will not view the question as relevant to determining whether he is guilty of desertion. That is because, in a discipline concerned with interpretation, such a question has been ruled out of court, as we say, before the court has even met. To ask how many other birds were in the flock when one of them behaved strangely is to have ceased, a priori, to consider that bird as an agent: the question implies that what matters is simply the role of chance, rather than any intent on the bird’s part. Any lawyer who brought up the fact that Bergdahl was the only one of so many thousands of soldiers to have done what he did, without taking up the matter of Bergdahl’s intent, would not be acting as a lawyer.

By the way, in case you’re wondering: roughly 65,000 soldiers were in Afghanistan by early October of 2009, as part of the “surge” ordered by President Barack Obama shortly after taking office. That number, according to a contemporary story in The Washington Post, was “more than double the number there when Bush left office”—which is to say that when Bergdahl left his tiny outpost at the end of June that year, the military was in the midst of a massive buildup of troops. The sample size, in Taleb’s terms, was growing rapidly at that time—with what effect on Bergdahl’s situation, if any, I cannot say.

Whether that matters in terms of Bergdahl’s story—in Serial or anywhere else—remains to be seen; as a legal matter, it would be very surprising if any military lawyer brought it up. What that, in turn, suggests is that the caution with which Stanley Fish has greeted many in the profession of literary study regarding the application of such work to actual political change is thoroughly justified: “when you get to the end” of the road many within the humanities have been traveling at least since the 1960s or ’70s, Fish has remarked, “nothing will have changed except the answers you might give to some traditional questions in philosophy and literary theory.” It is a warning whose force may even now be reaching its peak, as the nation realizes that the great political story of our time has not been the minor-league struggles within academia, but rather the story of how a small number of monkeys have managed to seize a huge proportion of the planet’s total wealth: as Bernie Sanders, the political candidate, tweeted recently in a claim rated “True” by Politifact, “the Walton family of Walmart own more wealth than the bottom 40 percent of America.”

In that story, the intent of the monkeys hardly matters.