No Justice, No Peace


‘She’s never found peace since she left his arms, and never will again till she’s as he is now!’
—Thomas Hardy. Jude the Obscure. (1895).

“Done because we are too menny,” writes little “Father Time” in Thomas Hardy’s Jude the Obscure—a suicide note meant to explain why the little boy has killed his siblings and then hanged himself. The boy’s family, in other words, is poor, which is why Father Time’s father Jude (the titular obscurity) never becomes the scholar he once dreamed of becoming. Yet, although Jude is a great tragedy, it is also something of a mathematical textbook: the principle little “Father Time” teaches instructs not merely about why his father does not get into university, but perhaps also about why, as Natasha Warikoo remarked in last week’s London Review of Books blog, “[o]ne third of Oxford colleges admitted no black British students in 2015.” Unfortunately, Warikoo never considers the possibility suggested by Jude: although she canvasses a number of reasons why black British students do not go to Oxford, she does not consider what we might call, in honor of the novel, the “Judean Principle”: that minorities simply cannot be proportionately represented everywhere, always. Why? Well, because of the field goal percentages of the 1980-81 Philadelphia 76ers—and math.

“The Labour MP David Lammy,” wrote Warikoo, “believes that Oxford and Cambridge are engaging in social apartheid,” while “others have blamed the admissions system.” These explanations, Warikoo suggests, are incorrect: based on interviews with “15 undergraduates at Oxford who were born in the UK to immigrant parents, and 52 of their white peers born to British parents,” she believes that the reason for the “massive underrepresentation” of black British students is “related to a university culture that does not welcome them.” In other words, the problem is racism. But while it’s undoubtedly the case that many people, even today, are prejudiced, is prejudice really adequate to explain the case here?

Consider, after all, what it is that Warikoo is claiming—beginning with the idea of “massive underrepresentation.” As Walter Benn Michaels of the University of Illinois at Chicago has pointed out, the goal of many on the political “left” these days appears to be a “society in which white people were proportionately represented in the bottom quintile (and black people proportionately represented in the top quintile)”—in other words, a society in which every social stratum contained precisely the same proportion of minority groups. In line with that notion, Warikoo assumes that, because Oxford and Cambridge do not contain the same proportion of black British people as the larger society, the system must be racist. But such an argument betrays an ignorance of how mathematics works—or more specifically, as MacArthur grant-winning psychologist Amos Tversky and his co-authors explained more than three decades ago, how basketball works.

In “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Tversky and company investigated an entire season’s worth of shooting data from the NBA’s Philadelphia 76ers in order to discover whether there was evidence “that the performance of a player during a particular period is significantly better than expected on the basis of the player’s own record”—that is, whether players sometimes shot better (or “got hot”) than their overall shot record would predict. Prior to the research, it seems, everyone involved in basketball—fans, players, and coaches—appeared to believe that players sometimes did “get hot”—a belief that predicts that players sometimes have a better chance of making the second basket in a series than they did the first:

Consider a professional basketball player who makes 50% of his shots. This player will occasionally hit four or more shots in a row. Such runs can properly be called streak shooting, however, only if their length or frequency exceeds what is expected on the basis of chance alone.

In other words, if a player really did get “hot,” or was “clutch,” then that fact would be reflected in the statistical record by a showing that sometimes players made second and third (and so on) baskets at a rate higher than that player’s chance of making a first basket: “the probability of a hit should be greater following a hit than following a miss.” If the “hot hand” existed, in other words, there should be evidence for it.

Unfortunately—or not—there was no such evidence, the investigators found: after analyzing the data for the nine players who took the vast majority of the 76ers’ shots for the 1980-81 season, Tversky and company found that “for eight of the nine players the probability of a hit is actually lower following a hit … than following a miss,” which is clearly “contrary to the hot-hand hypothesis.” (The exception is Darryl Dawkins, who played center—and was best known, as older fans may recall, for his backboard-shattering dunks; i.e., a high-percentage shot.) There was no such thing as the “hot hand,” in short. (To use an odd turn of phrase with regards to the NBA.)

Yet, what has that to do with the fact that there were no black British students at one third of Oxford’s colleges in 2015? After all, not many British people play basketball, black or not. But as Tversky and his co-authors argue in “The Hot Hand,” the existence of the belief in a “hot hand” intimates that people’s “intuitive conceptions of randomness depart systematically from the laws of chance.” That is, when faced with a coin flip, for example, “people expect even short sequences of heads and tails to reflect the fairness of a coin and contain roughly 50% heads and 50% tails.” Yet, in reality, “the occurrence of, say, four heads in a row … is quite likely in a sequence of 20 tosses.” In just the same way, in other words, professional basketball players (who are obviously quite skilled at shooting baskets) are likely to make several baskets in a row—not because of any special quality of “heat” they possess, but simply because they are good shooters. It’s this inability to perceive randomness that may help explain the absence of black British students at many Oxford colleges.
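The paper’s claim about runs is easy to check numerically. The sketch below is my own illustration, not code from the paper: it computes the exact probability that a fair coin—or, equivalently, a 50% shooter—produces a run of four or more heads somewhere in twenty tosses.

```python
def prob_of_run(n_trials: int, run_len: int, p: float = 0.5) -> float:
    """Exact probability of at least one run of `run_len` successes
    in `n_trials` independent trials with success probability `p`."""
    # dp[r] = probability the sequence so far ends in a run of exactly
    # r successes, with no run of length run_len having occurred yet
    dp = [1.0] + [0.0] * (run_len - 1)
    for _ in range(n_trials):
        nxt = [0.0] * run_len
        nxt[0] = sum(dp) * (1 - p)      # a failure resets the run to zero
        for r in range(run_len - 1):
            nxt[r + 1] += dp[r] * p     # a success extends the run by one
        dp = nxt
    return 1 - sum(dp)

# A 50% shooter taking 20 shots has nearly even odds of a 4-shot streak:
print(round(prob_of_run(20, 4), 3))  # → 0.478
```

Nearly half of all twenty-toss sequences contain a four-head streak; a streak, in other words, is evidence of nothing but chance.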

As we saw above, when Warikoo asserts that black students are “massively underrepresented” at Oxford colleges, what she means is that the proportion of black students at Oxford is not the same as the percentage of black people in the United Kingdom as a whole. But as “The Hot Hand” shows, to “expect [the] essential characteristics of a chance process to be represented not only globally in the entire sequence, but also locally, in each of its parts” is irrational: in reality, a “locally representative sequence … deviates systematically from chance expectation.” Since Oxford colleges, after all, are much smaller samples than the population of the United Kingdom as a whole, it would be absurd to believe that each of them could exactly replicate the proportions of the larger population.

Maybe though you still don’t see why, which is why I’ll now call on some backup: professors of statistics Howard Wainer and Harris Zwerling. In 2006, the two observed that, during the 1990s, many became convinced that smaller schools were the solution to America’s “education crisis”—the Bill and Melinda Gates Foundation, they note, became so convinced of the fact that they spent $1.7 billion on it. That’s because “when one looks at high-performing schools, one is apt to see an unrepresentatively large proportion of smaller schools.” But while that may be so, the two say, in reality “seeing a greater than anticipated number of small schools” in the list of better schools “does not imply that being small means having a greater likelihood of being high performing.” The reason, they say, is precisely the same reason that you don’t have a higher risk of kidney cancer by living in the American South.

Why might you think that? Turns out, Wainer and Zwerling say, that the U.S. counties with the highest apparent risk of kidney cancer are all “rural and located in the Midwest, the South, and the West.” So, should you avoid those parts of the country if you are afraid of kidney cancer? Not at all—because the U.S. counties with the lowest apparent risk of kidney cancer are also all “rural and located in the Midwest, the South, and the West.” The counties at both extremes share precisely the same characteristics—and what those characteristics amount to is small population: in a county of a few thousand people, a single case more or less swings the measured rate from one end of the national rankings to the other.
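Wainer and Zwerling’s point can be made concrete with a toy model—the populations and incidence rate below are invented purely for illustration. Give every county the identical true cancer rate, and the small counties will still dominate both ends of the observed-rate rankings:

```python
from math import exp

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for a Poisson-distributed case count with mean lam."""
    term, total = exp(-lam), exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

RATE = 1e-4  # identical (hypothetical) true annual incidence everywhere

for people in (1_000, 1_000_000):
    lam = people * RATE                            # expected case count
    p_zero = poisson_cdf(0, lam)                   # looks like the "safest" county
    p_double = 1 - poisson_cdf(int(2 * lam), lam)  # looks at least twice as risky
    print(people, round(p_zero, 3), round(p_double, 3))
```

Under identical true risk, the thousand-person county lands at one extreme or the other essentially every year, while the million-person county essentially never does.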

What Wainer and Zwerling’s example shows is precisely the same as that shown by Tversky and company’s work on the field goal rates of the Philadelphia 76ers. It’s a “same” that can be expressed with the words of journalist Michael Lewis, who recently authored a book about Amos Tversky and his long-time research partner (and Nobel Prize-winner) Daniel Kahneman called The Undoing Project: A Friendship That Changed Our Minds: “the smaller the sample, the lower the likelihood that it would mirror the broader population.” If you flip a coin one hundred, or a thousand, times, its proportion of heads is far more likely to end up close to one half than if you flip it just ten or twenty times. Conversely, a small sample is much more likely to come up biased towards either heads or tails—and much, much more likely to be heavily biased towards one or the other—than a larger population of coin flips is. Extreme results are much more likely in small samples.
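Lewis’s line can be verified with exact binomial arithmetic—this is my sketch, not his: compute the chance that a fair coin’s heads-share strays at least ten percentage points from one half, for a small run of tosses and a large one.

```python
from math import comb

def prob_share_off(n: int, delta: float = 0.10) -> float:
    """P(|heads/n - 0.5| >= delta) over n tosses of a fair coin,
    computed exactly from the binomial distribution."""
    lo = n * (0.5 - delta)
    hi = n * (0.5 + delta)
    return sum(comb(n, k) for k in range(n + 1)
               if k <= lo or k >= hi) / 2 ** n

print(round(prob_share_off(20), 3))   # → 0.503
print(prob_share_off(1000) < 1e-8)    # → True
```

Twenty tosses miss fifty-fifty by ten points or more over half the time; a thousand tosses essentially never do.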

Oxford colleges are, of course, very small samples of the population of the United Kingdom, which is about 66 million people. Oxford University as a whole, on the other hand, contains about 23,000 students. There are 38 colleges (as well as some other institutions), and some of these—like All Souls, for example—do not even admit undergraduate students; those that do consist largely of a few hundred students each. The question, then, that Natasha Warikoo ought to ask first about the admission of black British students to Oxford colleges is: “how likely is it that a sample of 300 would mirror a population of 66 million?” The answer, as the work of Tversky et al. demonstrates, is “not very”—it is even less likely, in fact, than the likelihood of throwing exactly 2 heads and 2 tails when tossing a coin four times.
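That comparison can be made exact. Suppose, purely for illustration, that the group in question makes up 3 percent of the national population, and that a college of 300 students were filled by drawing names at random from the whole country—an idealization, of course, not how admissions works. Even then, the chance the college exactly mirrors the nation is smaller than the chance of two heads in four tosses:

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """P(exactly k successes in n independent draws with success prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exactly two heads in four tosses of a fair coin:
p_coin = binom_pmf(4, 2, 0.5)       # 6/16 = 0.375

# A 300-student college exactly mirroring a 3% national share
# means exactly 9 students from the group:
p_mirror = binom_pmf(300, 9, 0.03)

print(round(p_coin, 3), round(p_mirror, 3))
```

And that is the chance for a single college under ideal random sampling; demanding that all 38 colleges mirror the nation simultaneously drives the probability toward zero.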

Does that mean that racism does not exist? No, certainly not. But Warikoo says that “[o]nly when Oxford and Cambridge succeed in including young Britons from all walks of life will they be what they say they are: world-class universities.” In fact, however, the idea that institutional populations ought to mirror the broader population is not merely difficult to realize—it is flatly absurd. It isn’t that a racially proportionate society is a difficult goal, in other words—it is that it is an impossible one. To get 300 people, or even 23,000, to reflect the broader population would require rewiring the system to such an extent that it’s possible no other goal—like, say, educating qualified students—could also be achieved; it would require so much effort fighting the entropy of chance that the cause would, eventually, absorb all possible resources. In other words, Oxford can either include “young Britons from all walks of life”—or it can be a world-class university. It can’t, however, be both; which is to say that Natasha Warikoo—like one character says of little “Father Time’s” stepmother, Sue, at the end of Jude the Obscure—will never find peace.


Blind Shots

… then you are apt to have what one of the tournament’s press bus drivers
describes as a “bloody near-religious experience.”
—David Foster Wallace. “Roger Federer As Religious Experience.” The New York Times, 20 Aug. 2006.

Not much gets by the New York Times, unless it’s the non-existence of WMDs—or the rules of tennis. The Gray Lady is bamboozled by the racquet game: “The truth is,” admits The New York Times Guide to Essential Knowledge, Third Edition, that “no one knows for sure how … the curious scoring system came about.” And in what might be an example of the Times’ famously droll sense of fun, an article by Stuart Miller entitled “Quirks of the Game: How Tennis Got Its Scoring System” does not provide the answer its title promises; it addresses its ostensible subject only by noting that “No one can pinpoint exactly when and how” that subject came into existence. So much, one supposes, for reportorial tenacity. Yet despite the failure of the Times, there is in fact an explanation for tennis’ scoring system—an explanation so simple that, while the Times’ inability to see it is amusing, it also leads to disquieting thoughts about what else the Times can’t see. That’s because solving the mystery of why tennis is scored the way it is could also explain a great deal about political reality in the United States.

To be fair, the Times is not alone in its befuddlement: “It’s a difficult topic,” says one Steve Flink, “an historian and author of ‘The Greatest Tennis Matches of All Time,’” in the “How Tennis Got Its Scoring System” story. So far as I can tell, all tennis histories are unclear about the origins of the scoring system: about all anyone knows for sure—or at least, is willing to put on paper—is that (as Rolf Potts put it in an essay for The Smart Set a few years ago) when modern lawn tennis was codified in 1874, it “appropriated the scoring system of the ancient French game” of jeu de paume, or “real tennis” as it is known in English. The origins of the modern game, all the histories agree, lie in this older game—above all, its scoring system.

Yet, while that does push back the origins of the system a few centuries, no one seems to know why jeu de paume adopted the system it did, other than to observe that the scoring breakdowns of 15, 30, and 40 seem to be, according to most sources, allusions to the face of a clock. (Even the Times, it seems, is capable of discovering this much: the numbers of the points, Miller says, appear “to derive from the idea of a clock face.”) But of far more importance than the “15-30-40” numbering is why tennis’ scoring system is qualitatively different from that of virtually every other sport—a difference even casual fans are aware of, and yet one that even the most erudite historians, so far as I am aware, cannot explain.

Psychologist Allen Fox once explained the difference in scoring systems in Tennis magazine: whereas, the doctor said, the “score is cumulative throughout the contest in most other sports, and whoever has the most points at the end wins,” in tennis “some points are more important than others.” A tennis match, in other words, is divided up into games, sets, and matches: instead of adding up all the points each player scores at the end, tennis “keeps score” by counting the numbers of games, and sets, won. This difference, although it might appear trivial, actually isn’t—and it’s a difference that explains not only a lot about tennis, but much else besides.

Take the case of Roger Federer, who has won 17 major championships in men’s tennis: the all-time record in men’s singles. Despite this dominating record, many people argue that he is not the sport’s Greatest Of All Time—at least, according to New York Times writer Michael Steinberger. Not long ago, Steinberger wrote that the reason people can argue that way is that Federer “has a losing record against [Rafael] Nadal, and a lopsided one at that.” (Currently, the record stands at 23-10 in favor of Nadal—a nearly 70 percent edge.) Steinberger’s article—continuing the pleasing simplicity of New York Times tennis headlines, it’s named “Why Roger Federer Is The Greatest Of All Time”—goes on to argue that Federer should be called the “G.O.A.T.” anyway, record be damned.

Yet weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—to do what an anonymous blogger did in 2009: a feat that demonstrates just why tennis’ scoring system is so curious, and why it has implications, perhaps even sinister implications from a certain point of view, far beyond tennis. What that blogger did, on a blog entitled SW19—postal code for Wimbledon, site of the All-England Tennis Club—was very simple.

He counted up the points.

In any other sport, with a couple of exceptions, that act might seem utterly banal: to see who’s better, you’d count up how many points one player scored and how many the other scored when they played head-to-head. But in tennis that apparently simple act is not so simple—and the reason it isn’t is what makes tennis such a different game from virtually all other sports. “In tennis, the better player doesn’t always win,” as Carl Bialik pointed out last year: because of the scoring system, what matters is whether you win “more sets than your opponent”—not necessarily more points.

Why that matters is because the argument against Federer as the Greatest Of All Time rests on the grounds that he has a losing record against Nadal: at the time the anonymous SW19 blogger began his research in 2009, that record was 13-7 in Nadal’s favor. As the mathematically inclined already know, that record translates to a 65 percent edge for Nadal: a seemingly strong argument against Federer’s all-time greatness, because the percentage seems so overwhelmingly tilted toward the Spaniard. How can the greatest player of all time be so weak against one opponent?

In fact, however, as the SW19 blogger discovered, Nadal’s seemingly-insurmountable edge was an artifact of the scoring system, not a sign of Federer’s underlying weakness. Over the 20 matches the two men had played up until 2009, they contested 4,394 total points: a “point” being the unit of play in which one player serves and the two rally until one fails to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—like baseball or basketball or football—then it might be expected that Nadal had won about 65 percent of those 4,394 points, or about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have about a 65 percent edge in total points: the point total, as opposed to the match record, ought to be about 2,856 to 1,538.

Yet that, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal, 2,221, and Federer, 2,173. Further, those totals included Nadal’s victory in the 2008 French Open final—played on Nadal’s best surface, clay—in straight sets, 6-1, 6-3, 6-0. In other words, even including the epic beating at Roland Garros in 2008, Nadal had beaten Federer by a total of only 48 points over the course of their careers: a margin of about one percent of all the points scored.

And that is not all. If the single match at the 2008 French Open is excluded, the margin becomes eight points. In terms of points scored, in other words, Nadal’s edge is about half a percentage point—and most of that edge was generated by a single match. It may still be that Federer is not the G.O.A.T.—but an argument against Federer cannot coherently be based on Nadal’s “dominating” record over the Swiss, because going by the central, defining act of the sport—the act of scoring points—the two players were, mathematically speaking, virtually equal.
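The SW19 blogger’s arithmetic is worth redoing explicitly. The figures below are the ones reported above, from that blog post; the computation itself is mine:

```python
# Head-to-head totals through 2009, per the SW19 blog post:
nadal_pts, federer_pts = 2221, 2173
matches_nadal, matches_federer = 13, 7

total = nadal_pts + federer_pts                 # 4,394 points in all
match_edge = matches_nadal / (matches_nadal + matches_federer)
point_share = nadal_pts / total

print(f"match edge:  {match_edge:.1%}")         # → 65.0%
print(f"point share: {point_share:.1%}")        # → 50.5%
print(f"point margin: {nadal_pts - federer_pts} of {total}")
```

A 65 percent edge in matches, in other words, rests on a 50.5 percent share of points—and the gap between those two numbers is the scoring system itself.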

Now, many will say here that, to risk making a horrible pun, I’ve missed the point: in tennis, it will be noted, not all acts of scoring are equal, and neither are all matches. It’s important that the 2008 match was a final, not an opening round … And so on. All of which certainly could be allowed, and reasonable people can differ about it, and if you don’t understand that then you really haven’t understood tennis, have you? But there’s a consequence to the scoring system—one that makes the New York Times’ inability to understand the origins of a scoring system that produces such peculiar results something more than simply another charming foible of the matriarch of the American press.

That’s because of something else that is unusual about tennis by comparison with other sports: its propensity for gambling scandals. In recent years, this has become something of an open secret within the game: when in 2007 the fourth-ranked player in the world, Nikolay Davydenko of Russia, was investigated for match-fixing, Andy Murray—the Wimbledon champion currently ranked third in the world—“told BBC Radio that although it is difficult to prove who has ‘tanked’ a match, ‘everyone knows it goes on,’” according to another New York Times story, this one by reporter Joe Drape.

Around that same time Patrick McEnroe, brother of the famous champion John McEnroe, told the Times that tennis “is a very easy game to manipulate,” and that it is possible to “throw a match and you’d never know.” During that scandal year of 2007, the problem seemed about to break into public awareness: in the wake of the Davydenko case the Association of Tennis Professionals, one of the sport’s governing bodies, commissioned an investigation by former Scotland Yard detectives into match-fixing and other chicanery—the Environmental Review of Integrity in Professional Tennis, issued in May of 2008. That investigation resulted in four low-ranked players being banned from the professional ranks, but not much else.

Perhaps, however, that papering-over should not be surprising, given the history of the game. As mentioned, today’s game of tennis owes its origins to the game of real tennis, or jeu de paume—a once-hugely popular game well known for its connection to gambling. “Gambling was closely associated with tennis,” as Elizabeth Wilson puts it in her Love Game: A History of Tennis, from Victorian Pastime to Global Phenomenon, and jeu de paume had a “special association with court life and the aristocracy.” Henry VIII of England, for example, was an avid player—he had courts built in several of his palaces—and, as historian Alison Weir has put it in her Henry VIII: The King and His Court, “Gambling on the outcome of a game was common.” Robert E. Gensemer’s 1982 history of tennis likewise points out that “monetary wagers on tennis matches soon became commonplace” as jeu de paume grew in popularity. Yet eventually, by “the close of the eighteenth century … game fixing and gambling scandals had tarnished Jeu de Paume’s reputation,” as a history of real tennis produced by an English real tennis club has put it.

Oddly, however, despite all this evidence directly in front of the historians, no one, not even the New York Times, seems to have put together the connection between tennis’ scoring system and the sport’s origins in gambling. It is, apparently, something to be pitied, and then moved past: what a shame it is that these grifters keep interfering with this noble sport! But that is to put the cart before the horse. It isn’t that the sport attracts con artists—it’s rather because of gamblers that the sport exists at all. Tennis’ scoring system, in other words, was designed by, and for, gamblers.

Why, in other words, should tennis break up its scoring into smaller, discrete units—so that the total number of points scored is only indirectly related to the outcome of a match? The answer to that question might be confounding to sophisticates like the New York Times, but child’s play to anyone familiar with a back-alley dice game. Perhaps that’s why places like Wimbledon dress themselves up in the “pageantry”—the “strawberries and cream” and so on—that such events have: because if people understood tennis correctly, they’d realize that were this sport played in Harlem or Inglewood or 71st and King Drive in Chicago, everyone involved would be doing time.

That’s because—as Nassim Nicholas Taleb, author of The Black Swan: The Impact of the Highly Improbable, would point out—breaking a game into smaller, discrete chunks, as tennis’ scoring system does, is exactly, precisely, how casino operators make money. And if that hasn’t already made sense to you—if, say, it makes more sense to you to explain a simple, key feature of the world by reference to advanced physics rather than by merely stating the bare fact—Taleb is gracious enough to explain how casinos make money via a metaphor drawn from that ever-so-simple subject, quantum mechanics.

Consider, Taleb asks in that book, that because a coffee “cup is the sum of trillions of very small particles” there is little chance that any cup will “jump two feet” of its own spontaneous accord—despite the fact that, according to particle physicists, the event is not outside the realm of possibility. “Particles jump around all the time,” as Taleb says, so it is indeed possible that a cup could do so. But in order to make that jump, it would require all the particles in the cup to make the same leap at precisely the same time—an event so unlikely that the odds against it are longer than the lifetime of the universe. Were any single particle in the cup to make such a leap, it would be canceled out by the leap of some other particle—coordinating so many particles is effectively impossible.

Yet observe that by reducing the number of particles to far fewer than a coffee cup’s, it can be very easy to ensure that some particle jumps: if there is only one particle, the chance that it will jump at some point is effectively 100%. (It would be more surprising if it didn’t.) “Casino operators,” as Taleb drily adds, “understand this well, which is why they never (if they do things right) lose money.” All they have to do to make money is refuse to “let one gambler make a massive bet,” and instead ensure that they “have plenty of gamblers make a series of bets of limited size.” The secret of a casino is that it multiplies the number of gamblers—and hence the number of bets.

In this way, casino operators can guarantee that “the variations in the casino’s returns are going to be ridiculously small, no matter the total gambling activity.” By breaking up the betting into thousands, and even—over the course of time—millions or billions of bets, casino operators can ensure that their losses on any single bet are covered by some other bet elsewhere in the casino: there’s a reason that, as the now-folded website Grantland pointed out in 2014, during the previous 23 years “bettors have won twice, while the sportsbooks have won 21 times” in Super Bowl betting. The thing to do in order to make something “gamable”—or “bettable,” which is to say a commodity worth the house’s time—is to break its acts into as many discrete chunks as possible.

The point, I think, can be easily seen: by breaking up a tennis match into smaller sets and games, gamblers can commodify the sport—make it “more bettable,” at least from the point of view of a sharp operator. “Gamblers may bet a total of $20 million, but you needn’t worry about the casino’s health,” Taleb says—because the casino isn’t accepting ten $2 million bets. Instead, “the bets run, say, $20 on average; the casino caps the bets at a maximum.” Rather than making one bet on a match’s outcome, gamblers can make a series of bets on the “games within the game”—bets that, as in the case of the casino, inevitably favor the house even without any match-fixing involved.

In professional tennis there are every year, as Louisa Thomas pointed out in Grantland a few years ago, “tens of thousands of professional matches, hundreds of thousands of games, millions of points, and patterns in the chaos.” (If there is match-fixing—and as mentioned, there have been many allegations over the years—well then, you’re in business: an excellent player can “tank” many, many early opportunities, allowing confederates to cash in, and still come back to put away a weaker opponent.) Anyway, just as Taleb says, casino operators inevitably wish to make bets as numerous as possible because, in the long run, that protects their investment—and tennis, what a co-inky-dink, offers more opportunities for betting than virtually any sport you can name.

The august majesty of the New York Times, however, cannot imagine any of that. In its “How Tennis Got Its Scoring System” story, the Times mentions the speculations of amateur players who say things like: “The eccentricities are part of the fun,” and “I like the old-fashioned touches that tennis has.” It’s all so quaint, in the view of the Times. But since no one can account for tennis’ scoring system otherwise, and everyone admits not only that gambling flourished around lawn tennis’ predecessor game, jeu de paume (or real tennis), but also that the popularity of that sport was eventually brought down precisely because of gambling scandals—and tennis is to this day vulnerable to gamblers—the hypothesis that tennis is scored the way it is for the purposes of gambling makes much more sense than, say, tennis historian Elizabeth Wilson’s solemn pronouncement that tennis’ scoring system is “a powerful exception to the tendencies toward uniformity” so dreadfully, dreadfully common in our contemporary vale of tears.

The reality, of course, is that tennis’ scoring system was designed to fleece suckers, not to entertain the twee viewers of Wes Anderson movies. Yet while such dimwittedness can be expected from college students or proper ladies who have never left the Upper East Side of Manhattan or Philadelphia’s Main Line, why is the New York Times so flummoxed by the historical “mystery” of it all? The answer, I suspect, lies in some other, far more significant, sport that is played with a very similar set of rules as tennis: one that equally breaks up the action into many more discrete acts than seem strictly necessary. In this game, too, there is only an indirect connection between the central, defining act and wins and losses.

The name of that sport? Well, it’s really two versions of the same game.

One is called “the United States Senate”—and the other is called a “presidential election.”

Shut Out

But cloud instead, and ever-during dark
Surrounds me, from the cheerful ways of men
Cut off, and for the book of knowledge fair

And wisdom at one entrance quite shut out
—John Milton. Paradise Lost. Book III, 45-50.

“Hey everybody, let’s go out to the baseball game,” the legendary 1960s Chicago disc jockey Dick Biondi said in the joke that (according to the myth) got him fired. “The boys,” Biondi is alleged to have said, “kiss the girls on the strikes, and …” In the story, of course, Biondi never finished the sentence—but you see where he was going, which is what makes the story interesting to a specific type of philosopher: the epistemologist. Epistemology is the study of how people know things: the question the epistemologist might ask about Biondi’s joke is, how do you know the ending to that story? For many academics today, the answer can be found in another baseball story, this one told by the literary critic Stanley Fish—a story that, oddly enough, also illustrates the political problems with that wildly popular contemporary concept: “diversity.”

As virtually everyone literate knows, “diversity” is one of the great honorifics of the present: something that has it is, ipso facto, usually held to be better than something that doesn’t. As a virtue, “diversity” has tremendous range, because it applies both in natural contexts—“biodiversity” is all the rage among environmentalists—and in social ones: in the 2003 case of Grutter v. Bollinger, for example, the Supreme Court held that the “educational benefits of diversity” were a “compelling state interest.” Yet, what often goes unnoticed about arguments in favor of “diversity” is that they themselves are dependent upon a rather monoglot account of how people know things—which is how we get back to epistemology.

Take, for instance, Stanley Fish’s story about the late, great baseball umpire Bill Klem. “It ain’t nothin’ til I call it,” Klem supposedly once said in response to a batter’s question about whether the previous pitch was a ball or a strike. (It’s a story I’ve retailed before: cf. “Striking Out”). Fish has used that story, in turn, to illustrate what he views as the central lesson of what is sometimes called “postmodernism”: according to The New Yorker, Fish’s (and Klem’s) point is that “balls and strikes come into being only on the call of an umpire,” instead of being “facts in the world.” Klem’s remark, in other words—Fish thinks—illustrates just how knowledge is what is sometimes called “socially constructed.”

The notion of “social construction” is the idea—as City College of New York professor Massimo Pigliucci recently put the point—that “no human being, or organized group of human beings, has access to a god’s eye view of the world.” The idea, in other words, is that meaning is—as Canadian philosopher Ian Hacking described the concept in The Social Construction of What?—“the product of historical events, social forces, and ideology.” Or, to put it another way, that we know things because of our culture, or social group: not by means of our own senses and judgement, but through the people around us.

For Pigliucci, this view of how human beings access reality suggests that we ought therefore to rely on a particular epistemic model: rather than one in which each person judges evidence for herself, we would instead rely on one in which “many individually biased points of view enter into dialogue with each other, yielding a less (but still) biased outcome.” In other words, we should rely upon diverse points of view, which is one reason why Pigliucci says, for instance, that because of the cognitive limits displayed by individuals, we ought “to work toward increasing diversity in the sciences.” Pigliucci’s reasoning is, of course, also what forms the basis of Grutter: “When universities are granted the freedom to assemble student bodies featuring multiple types of diversity,” wrote defendant Lee Bollinger (then dean of the University of Michigan law school) in an editorial for the Washington Post about the case, “the result is a highly sought-after learning environment that attracts the best students.” “Diversity,” in sum, is a tool to combat our epistemic weaknesses.

“Diversity” is thereby justified by means of a particular vision of epistemology: a particular theory of how people know things. On this theory, we are dependent upon other people in order to know anything. Yet, the very basis of Dick Biondi’s “joke” is that you, yourself, can “fill in” the punchline: it doesn’t take a committee to realize what the missing word at the end of the story is. And what that reality—your ability to furnish the missing word—perhaps illustrates is an epistemic distinction John Maynard Keynes made in his magisterial 1921 work, A Treatise on Probability: a distinction that troubles the epistemology that underlies the concept of “diversity.”

“Now our knowledge,” Keynes writes in chapter two of that work, “seems to be obtained in two ways: directly, as the result of contemplating the objects of acquaintance; and indirectly, by argument” (italics in original). What Keynes is proposing, in other words, is an epistemic division between two ways of knowing—one of them being much like the epistemic model described by Fish or Pigliucci or Bollinger. As Keynes says, “it is usually agreed that we do not have direct knowledge” of such things as “the law of gravity … the cure for phthisis … [or] the contents of Bradshaw”—things like these, in other words, are only known through chains of reasoning, rather than direct experience. In order to know items like these, we have to have undergone a kind of socialization, otherwise known as education. We are dependent on other people to know those things.

Yet, as Keynes also recognizes, there is another means of knowing: “From an acquaintance with a sensation of yellow,” the English economist wrote, “I can pass directly to a knowledge of the proposition ‘I have a sensation of yellow.’” In this epistemic model, human beings can know things by immediate apprehension—the chief example of this form of knowing being, as Keynes describes, our own senses. What Keynes says, in short, is that people can know things in more than one way: one way through other people, yes, as Fish et al. say—but also through our own experience.

Or—to put the point differently—Keynes has a “diverse” epistemology. That would, at least superficially, seem to make Keynes’ argument a support for the theory of “diversity”: after all, he is showing how people can know things differently, which would appear to assist Lee Bollinger and Massimo Pigliucci’s argument for diversity in education. If people can know things in different ways, it would then appear necessary to gather more, and different, kinds of people in order to know anything. But just saying so exposes the weakness at the heart of Bollinger and Pigliucci’s ideal of “diversity.”

Whereas Keynes has a “diverse” epistemology, in short, Bollinger and Pigliucci do not: in their conception, human beings can only know things in one way. That is the way that Keynes called “indirect”: through argumentation and persuasion—or, as it’s sometimes put, “social construction.” In other words, the defenders of “diversity” have a rather monolithic epistemology, which is why Fish for instance once attacked the view that it is possible to “survey the world in a manner free of assumptions about what it is like and then, from that … disinterested position, pick out the set of reasons that will be adequate to its description.” If such a thing were possible, after all, it would be possible to experience a direct encounter with the world—which “diversity” enthusiasts like Fish deny is possible: Fish says, for instance, that “the rhetoric of disinterested inquiry … is in fact”—just how he knows this is unclear—“a very interested assertion of the superiority of one set of beliefs.” In other words, any other epistemological view than their own is merely a deception.

Perhaps though this is all just one of the purest cases of an “academic” dispute: eggheads arguing, as the phrase goes, about how many angels can dance on the head of a pin. At least, until one realizes that the nearly-undisputed triumph of the epistemology retailed by Fish and company also has certain quite-real consequences. For example, as the case of Bollinger demonstrates, although the “socially-constructed” epistemology is an excellent means, as has been demonstrated over the past several decades, of—in the words of Fish’s fellow literary critic Walter Benn Michaels—“battling over what skin color the rich kids should have,” it isn’t so great for, say, dividing up legislative districts: a question that, as Elizabeth Kolbert noted last year in The New Yorker, “may simply be mathematical.” But if so, that presents a problem for those who think of their epistemological views as serving a political cause.

Mathematics, after all, is famously not something that can be understood “culturally”; it is, as Keynes—and before him, a silly fellow named Plato—knew, perhaps the foremost example of the sort of knowing demonstrated by Dick Biondi’s joke. Mathematics, in other words, is the chief example of something known directly: when you understand something in mathematics, you understand it either immediately—or not at all. Which, after all, is the significance of Kolbert’s remarks: to say that re-districting—perhaps the most political act of all in a democracy—is primarily a mathematical operation is to say that to understand redistricting, you have to understand directly the mathematics of the operation. Yet if the “diversity” promoters are correct, then only their epistemology has any legitimacy: an epistemology that a priori prevents anyone from sensibly discussing redistricting. In other words, it’s precisely the epistemological blindspots promoted by the often-ostensibly “politically progressive” promoters of “diversity” that allow the current American establishment to ignore the actual interests of actual people.

Which, one supposes, may be the real joke.

Home of the Brave

audentes Fortuna iuvat.
—The Aeneid. Book X, line 284.

American prosecutors in the last few decades have—Patrick Keefe recently noted in The New Yorker—come to use more and more “a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture,” rather than prosecuting either the company officers or the company itself for criminal acts. According to prosecutors, it seems, this is because “the problem with convicting a company was that it could have ‘collateral consequences’ that would be borne by employees, shareholders, and other innocent parties.” In other words, taking action against a corporation could put it out of business. Yet, declining to prosecute because of the possible consequences is an odd position for a prosecutor to take: “Normally a grand jury will indict a ham sandwich if a prosecutor asks it to,” former New York Chief Judge Sol Wachtler famously remarked. Prosecutors, in other words, aren’t usually known for their sensitivity to circumstance—so why the change in recent decades? The answer may lie, perhaps, in a knowledge of child-raising practices of the ancient European nobility—and the life of Galileo Galilei.

“In those days,” begins one of the stories described by Nicola Clarke in The Muslim Conquest of Iberia: Medieval Arabic Narratives, “the custom existed amongst the Goths that the sons and daughters of the nobles were brought up in the king’s palace.” Clarke is describing the tradition of “fosterage”: the custom, among the medieval aristocracy, of sending one’s children to be raised by another noble family while raising another such family’s children in turn. “It is not clear what … was the motive” for fostering children, according to Laurence Ginnell’s The Brehon Laws (from 1894), “but its practice, whether designed for that end or not, helped materially to strengthen the natural ties of kinship and sympathy which bound the chief and clan or the flaith and sept together.” In Ginnell’s telling, “a stronger affection oftentimes sprang up between persons standing in those relations than that between immediate relatives by birth.” One of the purposes of fostering, in other words, was to decrease the risk of conflict by ensuring that members of the ruling classes grew up together: it’s a lot harder to go to war, the thinking apparently went, when you are thinking of your potential opponent as the kid who skinned his knee that one time, instead of the fearsome leader of a gang of killers.

Perhaps one explanation for why prosecutors appear to be willing to go easier on corporate criminals these days than in the past might be that they share “natural ties”: they attended the same schools as those they are authorized to prosecute. Although statistics on the matter appear lacking, there’s reason to think that future white collar criminals and their (potential) prosecutors share the same “old school” ties more and more these days: in other words, that just as American law schools have seized a monopoly on the production of lawyers—Robert H. Jackson, who served from 1941 to 1954, was the last American Supreme Court Justice without a law degree—so too have America’s “selective” colleges seized a monopoly on the production of CEOs. “Just over 10% of the highest paid CEOs in America came from the Ivy League plus MIT and Stanford,” a Forbes article noted in 2012—a percentage higher than at any previous moment in American history. In other words, just as lawyers all come from the same schools these days, so too does upper management—producing the sorts of “natural ties” that not only lead to rethinking that cattle raid on your neighbor’s castle, but perhaps also any thoughts of subjecting Jamie Dimon to a “perp walk.” Yet as plausible an explanation as that might seem, it’s even more satisfying when it is combined with an incident in the life of the great astronomer.

In 1621, a Catholic priest named Scipio Chiaramonti published a book about a supernova that had occurred in 1572; the exploded star (as we now know it to have been) had been visible during daylight for several weeks in that year. The question for astronomers in that pre-Copernican time was whether the star had been one of the “fixed stars,” and thus existed beyond the moon, or whether it was closer to the earth than the moon: since—as James Franklin, from whose The Science of Conjecture: Evidence and Probability Before Pascal I take this account, notes—it was “the doctrine of the Aristotelians that there could be no change beyond the sphere of the moon,” a nova that far away would refute their theory. Chiaramonti’s book claimed that the measurements of 12 astronomers showed that the object was not as far as the moon—but Galileo pointed out that Chiaramonti’s work had, in effect, “cherrypicked”: he did not use all the data actually available, but merely used that which supported his thesis. Galileo’s argument, oddly enough, can also be applied to why American prosecutors aren’t pursuing financial crimes.
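Galileo’s objection is, at bottom, a point about selection bias. A toy simulation can make the effect concrete (the numbers below are invented purely for illustration; nothing here derives from Chiaramonti’s actual data):

```python
import random

def noisy_measurements(n=100, true_value=1.0, noise=0.5, seed=2):
    """Simulated noisy measurements scattered around a known true value."""
    rng = random.Random(seed)
    return [true_value + rng.gauss(0, noise) for _ in range(n)]

data = noisy_measurements()
full_mean = sum(data) / len(data)

# "Cherrypicking": keep only the observations that favor a smaller value.
picked = [x for x in data if x < 1.0]
picked_mean = sum(picked) / len(picked)

print(round(full_mean, 2))    # stays close to the true value of 1.0
print(round(picked_mean, 2))  # falls systematically below it
```

Averaging all the data recovers something near the truth; averaging only the convenient subset guarantees the conclusion you started with, which is exactly what Galileo accused Chiaramonti of doing.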

The point is supplied, Keefe tells us, by James Comey: the recent head of the FBI fired by President Trump. Before moving to Washington, Comey was U.S. Attorney for the Southern District of New York, in which position he once called—Keefe informs us—some of the attorneys working for the Justice Department members of “the Chickenshit Club.” Comey’s point was that while a “perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney,” it might instead “mean that you’re taking only those cases you’re sure you’ll win.” To Comey’s mind, the marvelous winning records of those working under him were not a guarantee of the ability of those attorneys, but instead a sign that his office was not pursuing enough cases. In other words, just as Chiaramonti chose only those data points that confirmed his thesis, the attorneys in Comey’s office were choosing only those cases they were sure they would win.

Yet, assuming that the decrease in financial prosecution is due to prosecutorial choice, why are prosecutors more likely, when it comes to financial crimes, to “cherrypick” today than they were a few decades ago? Keefe says this may be because “people who go to law school are risk-averse types”—but that only raises the question of why today’s lawyers are more risk-averse than their predecessors. The answer, at least according to a former Yale professor, may be that they are more likely to cherrypick because they are the product of cherrypicking.

Such at least was the answer William Deresiewicz arrived at in 2014’s “Don’t Send Your Kid to the Ivy League”—the most downloaded article in the history of The New Republic. “Our system of elite education manufactures young people who are smart and talented and driven, yes,” Deresiewicz wrote there—but, he wrote, it also produces students who are “anxious, timid, and lost.” Such students, the Yale faculty member wrote, had “little intellectual curiosity and a stunted sense of purpose”; they are “great at what they’re doing but [have] no idea why they’re doing it.” The question Deresiewicz wanted answered was, of course, why the students he saw in New Haven were this way; the answer he hit upon was that the students he saw were themselves the product of a cherrypicking process.

“So extreme are the admissions standards now,” Deresiewicz wrote in “Don’t,” “that kids who manage to get into elite colleges have, by definition, never experienced anything but success.” The “result,” he concluded, “is a violent aversion to risk.” Deresiewicz, in other words, is thinking systematically: it isn’t so much that prosecutors and white collar criminals sharing the same background has made prosecutions so much less likely, but instead the fact that prosecutors have experienced a certain kind of winnowing process in the course of achieving their positions in life.

To most people, in other words, scarcity equals value: Harvard admits very few people, therefore Harvard must provide an excellent education. But what the Chiaramonti episode brings to light is the notion that what makes Harvard so great may not be that it provides an excellent education, but instead that it admits such “excellent” people in the first place: Harvard’s notably long list of excellent alumni may not be a result of what’s happening in the classroom, but instead in the admissions office. The usual understanding of education, in other words, takes the significant action of education to be what happens inside the school—but what Galileo’s statistical perspective says, instead, is that the important play may be what happens before the students even arrive.

What Deresiewicz’s work suggests, in turn, is that this very process may itself have unseen effects: efforts to make Harvard (along with other schools) more “exclusive”—and thus, ostensibly, provide a better education—may actually be making students worse off than they might otherwise be. Furthermore, Keefe’s work intimates that this insidious effect might not be limited to education; it may be causing invisible ripples throughout American society—ripples that may not be limited to the criminal justice system. If the same effects Keefe says are affecting lawyers are also affecting the future CEOs the prosecutors are not prosecuting, then perhaps CEOs are becoming less likely to pursue the legitimate risks that are the economic lifeblood of the nation—and perhaps more susceptible to pursuing illegitimate risks, of the sort that once landed CEOs in non-pinstriped suits. Accordingly, perhaps that old conservative bumper sticker really does have something to teach American academics—it’s just that what both sides ought perhaps to realize is that this relationship may be, at bottom, a mathematical one. That relation, you ask?

The “land of the free” because of “the brave.”

Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo of 1913. Yet, why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment has put the point—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it?” The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote in the inaugural issue of Vox, in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors, then, for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. From that view, then, the Nazis must have succeeded because the German people were too poorly-educated to be able to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries between Germany, France, England and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much-higher percentage of Germans (82%) profiled in such books had exposure to university classes than those of other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like the philosopher Martin Heidegger and the legal theorist Carl Schmitt, who not only joined the party, but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of academics who directly supported the Nazis, since by that standard the culpability of elevator repairmen could as easily be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work and such, after all, is about just how terrible intellectuals have it—how many first novels, after all, are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required of the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to leave the ground, they may also have some costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Galt Harpham, longtime director of the National Humanities Center—sometimes go even further and make the claim—as Harpham did in his 2011 book, The Humanities and the Dream of America—that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make a nonsense of history: traditionally, after all, it’s been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by the mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
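Lewis’s coin-flip illustration can be checked with a short simulation; the sketch below (the trial count and seed are arbitrary choices of mine, not anything from Lewis or Bernoulli) estimates how far the observed fraction of heads typically strays from one-half:

```python
import random

def head_fraction(n_flips, rng):
    """Fraction of heads in n_flips tosses of a fair coin."""
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

def mean_deviation(n_flips, trials=1000, seed=0):
    """Average distance of the observed head-fraction from one-half."""
    rng = random.Random(seed)
    return sum(abs(head_fraction(n_flips, rng) - 0.5)
               for _ in range(trials)) / trials

# Ten flips stray much further from one-half, on average, than a thousand do.
print(mean_deviation(10))    # roughly 0.12
print(mean_deviation(1000))  # roughly 0.01
```

The thousand-flip average hugs one-half an order of magnitude more tightly than the ten-flip average, which is the Law of Large Numbers in miniature.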

For example, the Law of Large Numbers is why, as statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when financial analyst John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it’s better to amass large quantities of results, rather than depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters, an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—who called the result “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes the way Condorcet’s process actually works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” marks the boundary of opinion between gamblers, Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet, thereby, proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.

Key to the functioning of Condorcet’s machine, in turn, is large numbers of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the point is that the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than an (extremely small) pool of voters consisting of the state legislature alone.
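Condorcet’s claim that the majority’s reliability approaches certainty as n grows can likewise be illustrated with a small simulation; the per-voter accuracy of 0.55 below is my own arbitrary assumption, chosen only to be slightly better than chance:

```python
import random

def majority_correct_prob(n_voters, p_correct=0.55, trials=2000, seed=1):
    """Estimate the chance that a majority of n_voters picks the better
    alternative, when each voter is independently right with probability
    p_correct (the setup of Condorcet's jury theorem)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# The majority grows more reliable as the electorate grows.
for n in (11, 101, 1001):
    print(n, majority_correct_prob(n))
```

Even with each voter only marginally better than a coin flip, the majority of a thousand-voter electorate is nearly always right—which is the mathematical case for the larger of the two voter pools.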

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science,” with the consequence that they had “an unhealthy faith in the validity of statistical evidence,” to adopting “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically-skeptical 1970s (and beyond) needs to reckon with the increasingly-large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and given the increasingly-documented decreasing social mobility of American life, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlay them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth and early twentieth centuries constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by such “academic superstar” professors of humanities as Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.


Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.


James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, the president explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but also—and as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more,” it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet, why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that World War II commander Dwight Eisenhower selected Wilson to be his Secretary of Defense when the leader of the Allied war in western Europe was elected president in 1952, which led to the confirmation hearings that made Wilson famous—and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse for General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how sometimes people within an organization can forget about the larger purposes of the organization—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of the larger society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” asserts that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, in order to understand how an organization’s purposes might differ from that of the larger society, a big clue might be in how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that figure would be nearly ten times as much today—and, given contemporary trends in corporate pay for executives, would surely be even greater than that: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other financial sorts of reward of the type possessed by Charlie Wilson, but also two other sorts: internal rewards within the organization itself—and rewards concerning the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” thereby raises is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may perhaps circle back around to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by Columbia University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-first Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, produced the Populist movement; that of the 1930s, the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, “that racial identity trumps class,” rather than the other way around. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more-or-less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law in this case, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in less productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
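Stiglitz’ arithmetic can be checked in a few lines. The income figure and the 500-way division come from the passage itself; the two spending fractions below are hypothetical illustrations of the idea that a worker spends nearly every dollar while a single wealthy household cannot, and are not numbers Stiglitz supplies.

```python
# The $21.7M figure and the 500-way split come from Stiglitz's example;
# the two spending fractions are hypothetical illustrations of marginal
# propensity to consume, not figures Stiglitz gives.
top_income = 21_700_000            # Romney's 2010 income
n_jobs = 500
per_job = top_income / n_jobs      # "jobs paying $43,400 apiece"

spend_rate_worker = 0.90           # hypothetical: a worker spends most of each dollar
spend_rate_rich = 0.35             # hypothetical: one household cannot spend it all

demand_if_concentrated = top_income * spend_rate_rich
demand_if_dispersed = n_jobs * per_job * spend_rate_worker

print(per_job)                     # 43400.0
print(demand_if_dispersed > demand_if_concentrated)  # True: more money circulates
```

On any such assumed fractions (so long as the worker’s exceeds the rich household’s), dispersing the income generates more spending, which is the whole of the “ironclad” law as Stiglitz states it.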

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked before he died, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Wolff and Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:


Water to the Sea

Yet lives our pilot still. Is’t meet that he
Should leave the helm and like a fearful lad
With tearful eyes add water to the sea
And give more strength to that which hath too much,
Whiles, in his moan, the ship splits on the rock,
Which industry and courage might have saved?
—Henry VI, Part III. Act V, scene iv.

“Those who make many species are the ‘splitters,’ and those who make few are the ‘lumpers,’” remarked Charles Darwin in an 1857 letter to botanist J.D. Hooker; the title of University of Chicago professor Kenneth Warren’s most recent book, What Was African-American Literature?, announces him as a “lumper.” The chief argument of Warren’s book is that the claim that something called “African-American literature” is “different from the rest of American lit[erature]”—a claim that many of Warren’s colleagues, perhaps no one more so than Harvard’s Henry Louis Gates, Jr., have based their careers upon—is, in reality, a claim that, historically, many writers with large amounts of melanin would have rejected. Take the fact, Warren says, that “literary societies … among free blacks in the antebellum north were not workshops for the production of a distinct black literature but salons for producing works of literary distinction”: these were not people looking to split off—or secede—from the state of literature. Warren’s work is, thereby, aimed against those who, like so many Lears, have divided and subdivided literature by attaching so many different adjectives to literature’s noun—an attack Warren says he makes because “a literature insisting that the problem of the 21st century remains the problem of the color line paradoxically obscures the economic and political problems facing many black Americans, unless those problems can be attributed to racial discrimination.” What Warren sees, I think, is that far too much attention is being paid to the adjective in “African-American literature”—though what he may not see is that the real issue concerns the noun.

The noun being, of course, the word “literature”: Warren’s account worries the “African-American” part of “African-American literature” instead of the “literature” part. Specifically, in Warren’s view what links the adjective to the noun—or “what made African American literature a literature”—was the regime of “constitutionally-sanctioned state-enforced segregation” known as Jim Crow, which made “black literary achievement … count, almost automatically, as an effort on behalf of the ‘race’ as a whole.” Without that institutional circumstance there are writers who are black—but no “black writers.” To Warren, it’s the distinct social structure of Jim Crow, hardening in the 1890s, that creates “black literature,” instead of merely examples of writing produced by people whose skin is darker-colored than that of other writers.

Warren’s argument thereby takes the familiar form of the typical “social construction” argument, as outlined by Ian Hacking in his book, The Social Construction of What? Such arguments begin, Hacking says, when “X is taken for granted,” and “appears to be inevitable”; in the present moment, African-American literature can certainly be said—for some people—to appear to be inevitable: Harvard’s Gates, for instance, has long claimed that “calls for the creation of a [specifically “black”] tradition occurred long before the Jim Crow era.” But it’s just at such moments, Hacking says, that someone will observe that in fact the said X is “the contingent product of the social world.” Which is just what Warren does.

Although those who argue for an ahistorical vision of an African-American literature claim that all black writers were attempting to produce a specifically black literature, Warren notes that the historical evidence points, merely, to an attempt to produce literature: i.e., a member of the noun class without a modifying adjective. At least, until the advent of the Jim Crow system at the end of the nineteenth century: it’s only after that time, Warren says, that “literary work by black writers came to be discussed in terms of how well it served (or failed to serve) as an instrument in the fight against Jim Crow.” In the familiar terms of the hallowed social constructionism argument, Warren is claiming that the adjective is added to the noun later, as a result of specific social forces.

Warren’s is an argument, of course, with a number of detractors, and not simply Gates. In The Postethnic Literary: Reading Paratexts and Transpositions Around 2000, Florian Sedlmeier charged Warren with reducing “African American identity to a legal policy category,” and furthermore with an account that “relegates the functions of authorship and literature to the economic subsystem.” It’s a familiar version of the “reductionist” charge often cited by “postmoderns” against Marxists—an accusation tiresome at best these days.

More creatively, in a symposium of responses to Warren in the Los Angeles Review of Books, Erica Edwards attempted to one-up Warren by saying that Warren fails to recognize that perhaps the true “invention” of African-American literature was not during the Jim Crow era of legalized segregation, but instead “with the post-Jim Crow creation of black literature classrooms.” Whereas Gates, in short, wishes to locate the origin of African-American literature in Africa prior to (or concurrently with) slavery itself, and Warren instead locates it in the 1890s during the invention of Jim Crow, Edwards wants to locate it in the 1970s, when African-American professors began to construct their own classes and syllabi. Edwards’ argument, at the least, has a certain empirical force: the term “African-American” itself is a product of the civil rights movement and afterwards; that is, the era of the end of Jim Crow, not its beginnings.

Edwards’ argument thereby leads nearly seamlessly into Aldon Lynn Nielsen’s objections, published as part of the same symposium. Nielsen begins by observing that Warren’s claims are not particularly new: Thomas Jefferson, he notes, “held that while Phillis Wheatley [the eighteenth-century black poet] wrote poems, she did not write literature,” while George Schuyler, the black novelist, wrote for The Nation in 1926 that “there was not and never had been an African American literature”—for the perhaps-surprising reason that there was no such thing as an African-American. Schuyler instead felt that the “Negro”—his term—“was no more than a ‘lampblacked Anglo-Saxon.’” In that sense, Schuyler’s argument was even more committed to the notion of “social construction” than Warren is: whereas Warren questions the timelessness of the category of a particular sort of literature, Schuyler questioned the existence of a particular category of person. Warren, that is, merely questions why “African-American literature” should be distinguished—or split from—“American literature”; Schuyler—an even more incorrigible lumper than Warren—questioned why “African-Americans” ought to be distinguished from “Americans.”

Yet, if even the term “African-American,” considered as a noun itself rather than as the adjective it is in the phrase “African-American literature,” can be destabilized, then surely that ought to raise the question, for these sharp-minded intellectuals, of the status of the noun “literature.” For it is precisely the catechism of many today that it is the “liberating” features of literature—that is, exactly, literature’s supposed capacity to produce the sort of argument delineated and catalogued by Hacking, the sort of argument in which it is argued that “X need not have existed”—that will produce, and has produced, whatever “social progress” we currently observe about the world.

That is the idea that “social progress” is the product of an increasing awareness of Nietzsche’s description of language as a “mobile army of metaphors, metonyms, and anthropomorphisms”—or, to use the late American philosopher Richard Rorty’s terminology, to recognize that “social progress” is a matter of redescription by what he called, following literary critic Harold Bloom, “strong poets.” Some version of such a theory is held by what Rorty, following University of Chicago professor Allan Bloom, called “‘the Nietzscheanized left’”: one that takes seriously the late Belgian literature professor Paul de Man’s odd suggestion that “‘one can approach … the problems of politics only on the basis of critical-linguistic analysis,’” or the late French historian Michel Foucault’s insistence that he would not propose a positive program, because “‘to imagine another system is to extend our participation in the present system.’” But such sentiments have hardly been limited to European scholars.

In America, for instance, former Duke University professor of literature Jane Tompkins echoed Foucault’s position in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History.” There, Tompkins approvingly cited novelist Harriet Beecher Stowe’s belief, as expressed in Uncle Tom, that the “political and economic measures that constitute effective action for us, she regards as superficial, mere extensions of the worldly policies that produced the slave system in the first place.” In the view of people like Tompkins, apparently, “political measures” will somehow sprout out of the ground of their own accord—or at least, by means of the transformative redescriptive powers of “literature.”

Yet if literature is simply a matter of redescription then it must be possible to redescribe “literature” itself: which in this paragraph will be in terms of a growing scientific “literature” (!) that, since the 1930s, has examined the differences between animals and human beings in terms of what are known as “probability guessing experiment[s].” In the classic example of this research—as cited in a 2000 paper called “The Left Hemisphere’s Role in Hypothesis Formation”—if a light is flashed with a ratio of 70% red light to 30% green, animals will tend always to guess red, while human beings will attempt to anticipate which light will be flashed next: in other words, animals will “tend to maximize or always choose the option that has occurred most frequently in the past”—whereas human beings will “tend to match the frequency of previous occurrences in their guesses.” Animals will simply always guess the same answer, while human beings will attempt to divine the pattern: that is, they will make their guesses based on the assumption that the previous series of flashes were meaningful. If the previous three flashes were “red, red, green,” a human being will tend to guess that the next flash will be red, whereas an animal will simply always guess red.

That in turn implies that, since in this specific example there is in fact no pattern and merely a probabilistic ratio of green to red, animals will always outperform human beings in this sort of test: as the authors of the paper write, “choosing the most frequent option all of the time, yields more correct guesses than matching as long as p ≠ 0.5.” Or, as they also note, “if the red light occurs with a frequency of 70% and a green light occurs with a frequency of 30%, overall accuracy will be highest if the subject predicts red all the time.” It’s true, in other words, that attempting to match a pattern will result in being correct 100% of the time—if the pattern is successfully matched. That result has, arguably, consequences for the liberationist claims of social constructionist arguments in general and literature in specific.
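The authors’ claim can be verified with a short simulation. This is a minimal sketch, assuming the paper’s 70/30 red-to-green ratio; the strategy labels (“maximizing,” “matching”) follow the description in the text, and the seed and trial count are arbitrary.

```python
import random

random.seed(0)
p_red = 0.7
trials = 100_000
flashes = ['red' if random.random() < p_red else 'green' for _ in range(trials)]

# "Maximizing" (the animal strategy): always guess the more frequent color.
maximize_hits = sum(f == 'red' for f in flashes)

# "Probability matching" (the human strategy): guess red 70% of the time,
# as if trying to reproduce the frequency of past flashes.
match_hits = sum(
    ('red' if random.random() < p_red else 'green') == f for f in flashes
)

# Expected accuracy: maximizing is about p = 0.70, while matching is about
# p**2 + (1 - p)**2 = 0.49 + 0.09 = 0.58.
print(maximize_hits / trials, match_hits / trials)
```

With any seed, maximizing settles near 70% correct and matching near 58%, which is exactly the inequality the authors state: maximizing beats matching whenever p ≠ 0.5.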

I trust that, without much in the way of detail—which I think could be elucidated at tiresome length—it can be stipulated that, more or less, the entire liberatory project of “literature” described above, as held by such luminaries as Foucault or Tompkins, can be said to be an attempt at elaborating rules for “pattern recognition.” Hence, it’s possible to understand how training in literature might be helpful towards fighting discrimination, which after all is obviously about constructing patterns: racists are not racist towards merely 65% of all black people, or are only racist 37% of the time. Racism—and other forms of discrimination—are not probabilistic, they are deterministic: they are rules used by discriminators that are directed at everyone within the class. (It’s true that the phenomenon of “passing” raises questions about classes, but the whole point of “passing” is that individual discriminators are unaware of the class’ “true” boundaries.) So it’s easy to see how pattern-recognition might be a useful skill with which to combat racial or other forms of discrimination.

Matching a pattern, however, suffers from one difficulty: it requires the existence of a pattern to be matched. Yet, in the example discussed in “The Left Hemisphere’s Role in Hypothesis Formation”—as in everything influenced by probability—there is no pattern: there is merely a larger chance of the light being red rather than green in each instance. Attempting then to match a pattern in a situation ruled instead by probability is not only unhelpful, but positively harmful: because there is no pattern, “guessing” simply cannot perform as well as simply maintaining the same choice every time. (Which in this case would at least result in being correct 70% of the time.) In probabilistic situations, in other words, where there is merely a certain probability of a given result rather than a certain pattern, both empirical evidence and mathematics itself demonstrate that the animal procedure of always guessing the same will be more successful than the human attempt at pattern recognition.

Hence, it follows that although training in recognizing patterns—the basis of schooling in literature, it might be said—might be valuable in combatting racism, such training will not be helpful in facing other sorts of problems: as the scientific literature demonstrates, pattern recognition as a strategy only works if there is a pattern. That in turn means that literary training can only be useful in a deterministic, and not probabilistic, world—and therefore, then, the project of “literature,” so-called, can only be “liberatory” in the sense meant by its partisans if the obstacles from which human beings need liberation are pattern-based. And that’s a conclusion, it seems to me, that is questionable at best.

Take, for example, the matter of American health care. Unlike all other industrialized nations, the United States does not have a single, government-run healthcare system, despite the fact that—as Malcolm Gladwell has noted, the American labor movement knew as early as the 1940s—“the safest and most efficient way to provide insurance against ill health or old age [is] to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, insurance works best by lumping, not splitting. The reason why may perhaps be the same as the reason that, as the authors of “The Left Hemisphere’s Role in Hypothesis Formation” point out, it can be said that “humans choose a less optimal strategy than rats” when it comes to probabilistic situations. Contrary to the theories of those in the humanities, in other words, the reality is that human beings in general—and Americans when it comes to health care—appear to have a basic unfamiliarity with the facts of probability.

One sign of that ignorance is, after all, the growth of casino gambling in the United States even as health care remains a hodgepodge of differing systems—despite the fact that both insurance and casinos run on precisely the same principle. As statistician and trader Nassim Taleb has pointed out, casinos “never (if they do things right) lose money”—so long as they are not run by Donald Trump—because they “simply do not let one gambler make a massive bet” and instead prefer “to have plenty of gamblers make a series of bets of limited size.” In other words, it is not possible for some high roller to bet, say, a Las Vegas casino the entire worth of the casino on a single hand of blackjack, or any other game; casinos simply limit the stakes to something small enough that the continued existence of the business is not at risk on any one particular event, and then make sure that there are enough bets being made to allow the laws of probability in every game (which are tilted toward the casino) to ensure the continued health of the business. Insurance, as Gladwell observed above, works precisely the same way: the more people paying premiums—and the more widely dispersed they are—the less likely it is that any one catastrophic event can wipe out the insurance fund. Both insurance and casinos are lumpers, not splitters: that, after all, is precisely why all other industrialized nations have put their health care systems on a national basis rather than maintaining the various subsystems that Americans—apparently inveterate splitters—still have.
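Taleb’s point about bet sizing can be illustrated with a quick simulation. This is a sketch under assumed numbers of my own choosing: a hypothetical 2% house edge on even-money bets, with one gambler staking $1,000,000 against a million gamblers staking $1 each.

```python
import random

random.seed(1)
p_house = 0.51   # hypothetical: the house wins each even-money bet 51% of the time

def house_profit(n_bets, stake):
    """Net house result from n_bets even-money wagers of `stake` each."""
    wins = sum(random.random() < p_house for _ in range(n_bets))
    return wins * stake - (n_bets - wins) * stake

# One gambler betting the whole amount: the house risks everything on one event.
single_bet = house_profit(1, 1_000_000)

# A million gamblers betting small: the law of large numbers takes over, and the
# house's result clusters near its edge (2% of $1,000,000 wagered = $20,000).
many_bets = house_profit(1_000_000, 1)

print(single_bet, many_bets)
```

The single wager comes out at plus or minus the full million; the million small wagers land reliably near $20,000 in the house’s favor. The same logic is why insurance pools prefer many small, dispersed premiums to a few enormous exposures.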

Health care, of course, is but one of the many issues of American life that, although influenced by, ultimately have little to do with, racial or other kinds of discrimination: what matters about health care, in other words, is that too few Americans are getting it, not merely that too few African-Americans are. The same is true, for instance, about incarceration: although such works as Michelle Alexander’s The New Jim Crow have argued that the fantastically-high rate of incarceration in the United States constitutes a new “racial caste system,” University of Pennsylvania professor of political science Marie Gottschalk has pointed out that “[e]ven if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” The problem with American prisons, in other words, is that there are too many Americans in them, not (just) too many African-Americans—or any other sort of American.

Viewing politics through a literary lens, in sum—as a matter of flashes of insight and redescription, instantiated by Wittgenstein’s duck-rabbit figure and so on—ultimately has costs: costs that have been witnessed again and again in recent American history, from the War on Drugs to the War on Terror. As Warren recognizes, viewing such issues as health care or prisons through a literary, or more specifically racial, lens is ultimately an attempt to fit a square peg into a round hole—or, perhaps even more appositely, to bring a knife to a gun fight. Warren, in short, may as well have cited UCLA philosophy professor Abraham Kaplan’s observation, sometimes called Kaplan’s Law of the Instrument: “Give a boy a hammer and everything he meets has to be pounded.” (Or, as Kaplan put the point more delicately, it ought not to be surprising “to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”) Much of the American “left,” in other words, views all problems as matters of redescription and so on—a belief not far from common American exhortations to “think positively” and the like. Certainly, America is far from the post-racial utopia some would like it to be. But curing the disease is not—contrary to the beliefs of many Americans today—the same as diagnosing it.

Like it—or lump it.