Home of the Brave


audentes Fortuna iuvat. (“Fortune favors the bold.”)
—Virgil, The Aeneid, Book X, line 284.

American prosecutors have in the last few decades—as Patrick Radden Keefe recently noted in The New Yorker—come to rely more and more on “a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture,” rather than prosecuting either the company’s officers or the company itself for criminal acts. According to prosecutors, it seems, this is because “the problem with convicting a company was that it could have ‘collateral consequences’ that would be borne by employees, shareholders, and other innocent parties.” In other words, taking action against a corporation could put it out of business. Yet declining to prosecute because of the possible consequences is an odd position for a prosecutor to take: “Normally a grand jury will indict a ham sandwich if a prosecutor asks it to,” New York judge Sol Wachtler famously remarked. Prosecutors, in other words, aren’t usually known for their sensitivity to circumstance—so why the change in recent decades? The answer may lie in the child-raising practices of the medieval European nobility—and the life of Galileo Galilei.

“In those days,” begins one of the stories described by Nicola Clarke in The Muslim Conquest of Iberia: Medieval Arabic Narratives, “the custom existed amongst the Goths that the sons and daughters of the nobles were brought up in the king’s palace.” Clarke is describing the tradition of “fosterage”: the custom, among the medieval aristocracy, of sending one’s children to be raised by another noble family while raising another such family’s children in turn. “It is not clear what … was the motive” for fostering children, according to Laurence Ginnell’s The Brehon Laws (from 1894), “but its practice, whether designed for that end or not, helped materially to strengthen the natural ties of kinship and sympathy which bound the chief and clan or the flaith and sept together.” In Ginnell’s telling, “a stronger affection oftentimes sprang up between persons standing in those relations than that between immediate relatives by birth.” One of the effects of fostering, in other words, whatever its motive, was to decrease the risk of conflict by ensuring that members of the ruling classes grew up together: it’s a lot harder to go to war, the reasoning apparently went, when you remember your potential opponent as the kid who skinned his knee that one time, rather than as the fearsome leader of a gang of killers.

One explanation for why prosecutors appear willing to go easier on corporate criminals these days than in the past might be that they share “natural ties” with them: they attended the same schools as those they are authorized to prosecute. Although statistics on the matter appear lacking, there’s reason to think that future white-collar criminals and their (potential) prosecutors share the same “old school” ties more and more these days: in other words, that just as American law schools have seized a monopoly on the production of lawyers—Robert H. Jackson, who served from 1941 to 1954, was the last American Supreme Court Justice without a law degree—so too have America’s “selective” colleges seized a monopoly on the production of CEOs. “Just over 10% of the highest paid CEOs in America came from the Ivy League plus MIT and Stanford,” a Forbes article noted in 2012—a percentage higher than at any previous moment in American history. In other words, just as lawyers all come from the same schools these days, so too does upper management—producing the sorts of “natural ties” that not only lead to rethinking that cattle raid on your neighbor’s castle, but perhaps also any thoughts of subjecting Jamie Dimon to a “perp walk.” Yet as plausible an explanation as that might seem, it’s even more satisfying when combined with an incident in the life of the great astronomer.

In 1621, a Catholic priest named Scipio Chiaramonti published a book about a supernova that had occurred in 1572; the exploded star (as we now know it to have been) had been visible during daylight for several weeks in that year. The question for astronomers in that pre-Copernican time was whether the star had been one of the “fixed stars,” and thus existed beyond the moon, or whether it was closer to the earth than the moon: since—as James Franklin, from whose The Science of Conjecture: Evidence and Probability Before Pascal I take this account, notes—it was “the doctrine of the Aristotelians that there could be no change beyond the sphere of the moon,” a nova that far away would refute their theory. Chiaramonti’s book claimed that the measurements of 12 astronomers showed that the object was no farther away than the moon—but Galileo pointed out that Chiaramonti had, in effect, “cherrypicked”: he did not use all the data actually available, but merely that which supported his thesis. Galileo’s argument, oddly enough, can also be applied to why American prosecutors aren’t pursuing financial crimes.

The point is supplied, Keefe tells us, by James Comey: the former FBI director recently fired by President Trump. Before moving to Washington, Comey was U.S. Attorney for the Southern District of New York, in which position he once called—Keefe informs us—some of the attorneys working for the Justice Department members of “the Chickenshit Club.” Comey’s point was that while a “perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney,” it might instead “mean that you’re taking only those cases you’re sure you’ll win.” To Comey’s mind, the marvelous winning records of those working under him were not a guarantee of those attorneys’ ability, but instead a sign that his office was not pursuing enough cases. In other words, just as Chiaramonti chose only those data points that confirmed his thesis, the attorneys in Comey’s office were choosing only those cases they were sure they would win.

Yet, assuming that the decrease in financial prosecution is due to prosecutorial choice, why are prosecutors more likely, when it comes to financial crimes, to “cherrypick” today than they were a few decades ago? Keefe says this may be because “people who go to law school are risk-averse types”—but that only raises the question of why today’s lawyers are more risk-averse than their predecessors. The answer, at least according to a former Yale professor, may be that they are more likely to cherrypick because they are themselves the product of cherrypicking.

Such at least was the answer William Deresiewicz arrived at in 2014’s “Don’t Send Your Kid to the Ivy League”—the most downloaded article in the history of The New Republic. “Our system of elite education manufactures young people who are smart and talented and driven, yes,” Deresiewicz wrote there—but, he added, it also produces students who are “anxious, timid, and lost.” Such students, the Yale faculty member wrote, have “little intellectual curiosity and a stunted sense of purpose”; they are “great at what they’re doing but [have] no idea why they’re doing it.” The question Deresiewicz wanted answered was, of course, why the students he saw in New Haven were this way; the answer he hit upon was that they were themselves the product of a cherrypicking process.

“So extreme are the admissions standards now,” Deresiewicz wrote in “Don’t,” “that kids who manage to get into elite colleges have, by definition, never experienced anything but success.” The “result,” he concluded, “is a violent aversion to risk.” Deresiewicz, in other words, is thinking systematically: it isn’t so much the shared background of prosecutors and white-collar criminals that has made prosecutions less likely as the fact that prosecutors have passed through a certain kind of winnowing process on the way to their positions in life.

To most people, scarcity equals value: Harvard admits very few people, therefore Harvard must provide an excellent education. But what the Chiaramonti episode brings to light is the notion that what makes Harvard so great may not be that it provides an excellent education, but instead that it admits such “excellent” people in the first place: Harvard’s notably long list of excellent alumni may be a result not of what happens in the classroom, but of what happens in the admissions office. The usual understanding of education takes the significant action to be what happens inside the school—but what Galileo’s statistical perspective says, instead, is that the important play may be what happens before the students even arrive.

What Deresiewicz’ work suggests, in turn, is that this very process may itself have unseen effects: efforts to make Harvard (along with other schools) more “exclusive”—and thus, ostensibly, provide a better education—may actually be making students worse off than they might otherwise be. Furthermore, Keefe’s work intimates that this insidious effect might not be limited to education; it may be causing invisible ripples throughout American society—ripples not confined to the criminal justice system. If the same forces Keefe says are at work on lawyers are also at work on the future CEOs those prosecutors are declining to prosecute, then perhaps CEOs are becoming less likely to pursue the legitimate risks that are the economic lifeblood of the nation—and more susceptible to pursuing illegitimate risks, of the sort that once landed CEOs in non-pinstriped suits. Accordingly, perhaps that old conservative bumper sticker really does have something to teach American academics—it’s just that what both sides ought to realize is that the relationship may be, at bottom, a mathematical one. That relationship, you ask?

The “land of the free” because of “the brave.”

Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
(“Now, Lord, you dismiss your servant in peace, according to your word: for my eyes have seen your salvation, which you have prepared before the face of all peoples: a light for revelation to the Gentiles, and the glory of your people Israel.”)
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo of 1913. Yet why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment, puts it—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it”? The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote at the launch of Vox in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. On that view, the Nazis must have succeeded because the German people were too poorly educated to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries between Germany, France, England, and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another such comparison found that a much higher percentage of the Germans profiled (82%) had been exposed to university classes than had their counterparts in other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like philosopher Martin Heidegger and legal theorist Carl Schmitt, who not only joined the party, but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of academics who directly supported the Nazis—on that score, the culpability of elevator repairmen could just as well be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work, after all, is about just how terrible intellectuals have it—how many first novels are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required of the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to leave the ground, they may also have some costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Harpham, former director of the National Humanities Center—sometimes go even further and make the claim—as Harpham did in his 2011 book, The Humanities and the Dream of America—that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make a nonsense of history: traditionally, after all, it’s been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
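Lewis’ coin-flip illustration is easy to check directly. The following sketch is my own (the trial counts are arbitrary, and the random seed is fixed only so the run is reproducible); it simulates batches of fair-coin flips and shows the observed share of heads settling toward one half as the batches grow:

```python
import random

def fraction_heads(n_flips, seed=0):
    """Flip a fair coin n_flips times and return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The gap between the observed fraction and 0.5 tends to shrink as flips accumulate.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} flips: fraction of heads = {fraction_heads(n):.3f}")
```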

For example, the Law of Large Numbers is why, as statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when Vanguard founder John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it’s better to amass large quantities of results rather than depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters—an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet, whose insight is sometimes called “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes how Condorcet’s process actually works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” marks the boundary of opinion between gamblers, Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet thereby proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.

Key to the functioning of Condorcet’s machine, in turn, is large numbers of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the point is that the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than an (extremely small) pool of voters consisting of the state legislature alone.
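The limit Austen-Smith and Banks describe can be computed directly. The sketch below is my own illustration (the 55% competence figure and the electorate sizes are assumptions made for the example, not numbers from Condorcet or from their paper); it sums the binomial probability that a strict majority of independent voters picks the better of two alternatives:

```python
from math import comb

def majority_correct(n_voters, p_correct):
    """Probability that a strict majority of n_voters independent voters,
    each correct with probability p_correct, picks the better alternative."""
    needed = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p_correct**k * (1 - p_correct)**(n_voters - k)
               for k in range(needed, n_voters + 1))

# Even if each voter is only modestly better than a coin flip,
# the majority's accuracy climbs toward 1 as the electorate grows.
for n in (11, 101, 1_001):
    print(f"{n:>5,} voters: P(majority correct) = {majority_correct(n, 0.55):.4f}")
```

Odd electorate sizes are used here simply to avoid ties; the point is only that the aggregate is far more reliable than any single voter.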

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science,” with the consequence that they had “an unhealthy faith in the validity of statistical evidence,” to adopting “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically skeptical 1970s (and beyond) needs to reckon with the increasingly large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and, given the increasingly well-documented decline in American social mobility, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” In building up humanities departments, the professoriate has perhaps performed the useful labor of clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlie them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth and early twentieth centuries constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by “academic superstar” professors of the humanities like Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.

Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic, 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, the president explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but also—and, as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more”; it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that Dwight Eisenhower—who had commanded the Allied war in western Europe—selected Wilson to be his Secretary of Defense after being elected president in 1952, a choice that led to the confirmation hearings that made Wilson famous—and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse to the interests of General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how easily people within an organization can conflate its purposes with those of the larger society—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” asserts that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, to understand how an organization’s purposes might differ from those of the larger society, a good clue is how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that stake would be worth nearly ten times as much today—and, given contemporary trends in executive pay, a modern equivalent would surely be greater still: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other financial sorts of reward of the type possessed by Charlie Wilson, but also two other sorts: internal rewards within the organization itself—and rewards concerning the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” thereby raises is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may circle back to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by New York University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-First Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect, anyway—had nowhere near the impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, “that racial identity trumps class,” rather than the other way around. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz has explained, is “ironclad.” In the Columbia University professor’s view (and it is more or less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law, Stiglitz says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in less productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
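Stiglitz’s arithmetic can be made concrete with a toy calculation. In the sketch below the spending rates are assumed purely for illustration (they are not Stiglitz’s figures); the point is only that the same $21.7 million supports more consumer spending when split into 500 salaries than when held by a single household:

```python
TOTAL_INCOME = 21_700_000  # Romney's 2010 income, per Stiglitz's example
N_JOBS = 500               # the alternative: 500 jobs at $43,400 apiece

# Assumed (illustrative) fractions of income spent within the year.
SPEND_RATE_TOP = 0.10      # a single wealthy household spends only a sliver
SPEND_RATE_TYPICAL = 0.95  # a $43,400 salary is almost entirely spent

concentrated_spending = TOTAL_INCOME * SPEND_RATE_TOP
dispersed_spending = N_JOBS * (TOTAL_INCOME / N_JOBS) * SPEND_RATE_TYPICAL

print(f"Salary per job if divided: ${TOTAL_INCOME / N_JOBS:,.0f}")
print(f"Spending if concentrated:  ${concentrated_spending:,.0f}")
print(f"Spending if dispersed:     ${dispersed_spending:,.0f}")
```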

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

Water to the Sea

Yet lives our pilot still. Is’t meet that he
Should leave the helm and like a fearful lad
With tearful eyes add water to the sea
And give more strength to that which hath too much,
Whiles, in his moan, the ship splits on the rock,
Which industry and courage might have saved?
—Henry VI, Part III. Act V, scene iv.

“Those who make many species are the ‘splitters,’ and those who make few are the ‘lumpers,’” remarked Charles Darwin in an 1857 letter to botanist J.D. Hooker; the title of University of Chicago professor Kenneth Warren’s most recent book, What Was African-American Literature?, announces him as a “lumper.” The chief argument of Warren’s book is that the claim that something called “African-American literature” is “different from the rest of American lit[erature]”—a claim that many of Warren’s colleagues, perhaps no one more so than Harvard’s Henry Louis Gates, Jr., have based their careers upon—is, in reality, a claim that, historically, many writers with large amounts of melanin would have rejected. Take the fact, Warren says, that “literary societies … among free blacks in the antebellum north were not workshops for the production of a distinct black literature but salons for producing works of literary distinction”: these were not people looking to split off—or secede—from the state of literature. Warren’s work is, thereby, aimed against those who, like so many Lears, have divided and subdivided literature by attaching so many different adjectives to literature’s noun—an attack Warren says he makes because “a literature insisting that the problem of the 21st century remains the problem of the color line paradoxically obscures the economic and political problems facing many black Americans, unless those problems can be attributed to racial discrimination.” What Warren sees, I think, is that far too much attention is being paid to the adjective in “African-American literature”—though what he may not see is that the real issue concerns the noun.

The noun being, of course, the word “literature”: Warren’s account worries the “African-American” part of “African-American literature” instead of the “literature” part. Specifically, in Warren’s view what links the adjective to the noun—or “what made African American literature a literature”—was the regime of “constitutionally-sanctioned state-enforced segregation” known as Jim Crow, which made “black literary achievement … count, almost automatically, as an effort on behalf of the ‘race’ as a whole.” Without that institutional circumstance there are writers who are black—but no “black writers.” To Warren, it’s the distinct social structure of Jim Crow, hardening in the 1890s, that creates “black literature,” instead of merely examples of writing produced by people whose skin is darker-colored than that of other writers.

Warren’s argument thereby takes the familiar form of the typical “social construction” argument, as outlined by Ian Hacking in his book, The Social Construction of What? Such arguments begin, Hacking says, when “X is taken for granted,” and “appears to be inevitable”; in the present moment, African-American literature can certainly be said—for some people—to appear to be inevitable: Harvard’s Gates, for instance, has long claimed that “calls for the creation of a [specifically “black”] tradition occurred long before the Jim Crow era.” But it’s just at such moments, Hacking says, that someone will observe that in fact the said X is “the contingent product of the social world.” Which is just what Warren does.

Although those who argue for an ahistorical vision of an African-American literature would claim that all black writers were attempting to produce a specifically black literature, Warren notes that the historical evidence points, merely, to an attempt to produce literature: i.e., a member of the noun class without a modifying adjective. At least, until the advent of the Jim Crow system at the end of the nineteenth century: it’s only after that time, Warren says, that “literary work by black writers came to be discussed in terms of how well it served (or failed to serve) as an instrument in the fight against Jim Crow.” In the familiar terms of the hallowed social constructionism argument, Warren is claiming that the adjective is added to the noun later, as a result of specific social forces.

Warren’s is an argument, of course, with a number of detractors, and not simply Gates. In The Postethnic Literary: Reading Paratexts and Transpositions Around 2000, Florian Sedlmeier charged Warren with reducing “African American identity to a legal policy category,” and with an account that “relegates the functions of authorship and literature to the economic subsystem.” It’s a familiar version of the “reductionist” charge often leveled by “postmoderns” against Marxists—an accusation that is tiresome at best these days.

More creatively, in a symposium of responses to Warren in the Los Angeles Review of Books, Erica Edwards attempted to one-up Warren by saying that he fails to recognize that perhaps the true “invention” of African-American literature was not during the Jim Crow era of legalized segregation, but instead “with the post-Jim Crow creation of black literature classrooms.” Whereas Gates, in short, wishes to locate the origin of African-American literature in Africa prior to (or concurrently with) slavery itself, and Warren instead locates it in the 1890s during the invention of Jim Crow, Edwards wants to locate it in the 1970s, when African-American professors began to construct their own classes and syllabi. Edwards’ argument, at the least, has a certain empirical force: the term “African-American” itself is a product of the civil rights movement and its aftermath—that is, of the era of Jim Crow’s end, not its beginning.

Edwards’ argument thereby leads nearly seamlessly into Aldon Lynn Nielsen’s objections, published as part of the same symposium. Nielsen begins by observing that Warren’s claims are not particularly new: Thomas Jefferson, he notes, “held that while Phillis Wheatley [the eighteenth-century black poet] wrote poems, she did not write literature,” while George Schuyler, the black novelist, wrote for The Nation in 1926 that “there was not and never had been an African American literature”—for the perhaps-surprising reason that there was no such thing as an African-American. Schuyler instead felt that the “Negro”—his term—“was no more than a ‘lampblacked Anglo-Saxon.’” In that sense, Schuyler’s argument was even more committed to the notion of “social construction” than Warren is: whereas Warren questions the timelessness of the category of a particular sort of literature, Schuyler questioned the existence of a particular category of person. Warren, that is, merely questions why “African-American literature” should be distinguished—or split from—“American literature”; Schuyler—an even more incorrigible lumper than Warren—questioned why “African-Americans” ought to be distinguished from “Americans.”

Yet, if even the term “African-American,” considered as a noun itself rather than as the adjective it is in the phrase “African-American literature,” can be destabilized, then surely that ought to raise the question, for these sharp-minded intellectuals, of the status of the noun “literature.” For it is precisely the catechism of many today that it is the “liberating” features of literature—that is, exactly, literature’s supposed capacity to produce the sort of argument delineated and catalogued by Hacking, the sort of argument in which it is argued that “X need not have existed”—that will produce, and has produced, whatever “social progress” we currently observe about the world.

That is the idea that “social progress” is the product of an increasing awareness of Nietzsche’s description of language as a “mobile army of metaphors, metonyms, and anthropomorphisms”—or, to use the late American philosopher Richard Rorty’s terminology, to recognize that “social progress” is a matter of redescription by what he called, following literary critic Harold Bloom, “strong poets.” Some version of such a theory is held by what Rorty, following University of Chicago professor Allan Bloom, called “‘the Nietzscheanized left’”: one that takes seriously the late Belgian-born literature professor Paul de Man’s odd suggestion that “‘one can approach … the problems of politics only on the basis of critical-linguistic analysis,’” or the late French historian Michel Foucault’s insistence that he would not propose a positive program, because “‘to imagine another system is to extend our participation in the present system.’” But such sentiments have hardly been limited to European scholars.

In America, for instance, former Duke University professor of literature Jane Tompkins echoed Foucault’s position in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History.” There, Tompkins approvingly described the belief of novelist Harriet Beecher Stowe, as expressed in Uncle Tom’s Cabin: the “political and economic measures that constitute effective action for us, she regards as superficial, mere extensions of the worldly policies that produced the slave system in the first place.” In the view of people like Tompkins, apparently, “political measures” will somehow sprout out of the ground of their own accord—or at least, by means of the transformative redescriptive powers of “literature.”

Yet if literature is simply a matter of redescription, then it must be possible to redescribe “literature” itself—which this paragraph will do in terms of a growing scientific “literature” (!) that, since the 1930s, has examined the differences between animals and human beings by means of what are known as “probability guessing experiment[s].” In the classic example of this research—as cited in a 2000 paper called “The Left Hemisphere’s Role in Hypothesis Formation”—if a light is flashed with a ratio of 70% red light to 30% green, animals will tend always to guess red, while human beings will attempt to anticipate which light will be flashed next: in other words, animals will “tend to maximize or always choose the option that has occurred most frequently in the past”—whereas human beings will “tend to match the frequency of previous occurrences in their guesses.” Animals will simply always guess the same answer, while human beings will attempt to divine the pattern: that is, they will make their guesses based on the assumption that the previous series of flashes was meaningful. If the previous three flashes were “red, red, green,” a human being will try to read something into that sequence before guessing—whereas an animal will simply always guess red.

That in turn implies that, since in this specific example there is in fact no pattern but merely a probabilistic ratio of green to red, animals will, over the long run, outperform human beings in this sort of test: as the authors of the paper write, “choosing the most frequent option all of the time, yields more correct guesses than matching as long as p ≠ 0.5.” Or, as they also note, “if the red light occurs with a frequency of 70% and a green light occurs with a frequency of 30%, overall accuracy will be highest if the subject predicts red all the time.” (With a 70/30 split, a frequency-matcher who guesses independently of the light is right only about 0.7 × 0.7 + 0.3 × 0.3 = 58% of the time, against 70% for always guessing red.) It’s true, in other words, that attempting to match a pattern will result in being correct 100% of the time—if the pattern is successfully matched. That result has, arguably, consequences for the liberationist claims of social constructionist arguments in general and literature in specific.
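A small simulation makes the gap concrete. The sketch below is my own illustration of the 70/30 example, not the procedure used in the paper: it pits a “maximizer” (always guess red) against a “frequency matcher” (guess red 70% of the time, independently of the light):

```python
import random

def compare_strategies(n_trials=100_000, p_red=0.7, seed=1):
    """Return the accuracy of 'always guess red' vs. frequency matching."""
    rng = random.Random(seed)
    maximize_hits = match_hits = 0
    for _ in range(n_trials):
        light_is_red = rng.random() < p_red
        # The maximizer always guesses the more frequent color.
        maximize_hits += light_is_red
        # The matcher guesses red with the same 70% frequency, independently.
        matcher_guesses_red = rng.random() < p_red
        match_hits += (matcher_guesses_red == light_is_red)
    return maximize_hits / n_trials, match_hits / n_trials

maximizing, matching = compare_strategies()
print(f"Always guess red:    {maximizing:.3f}")  # hovers near 0.70
print(f"Frequency matching:  {matching:.3f}")    # hovers near 0.58
```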

I trust that, without much in the way of detail—which I think could be elucidated at tiresome length—it can be stipulated that, more or less, the entire liberatory project of “literature” described above, as held by such luminaries as Foucault or Tompkins, can be said to be an attempt at elaborating rules for “pattern recognition.” Hence, it’s possible to understand how training in literature might be helpful towards fighting discrimination, which after all is obviously about constructing patterns: racists are not racist towards merely 65% of all black people, nor are they racist only 37% of the time. Racism—and other forms of discrimination—is not probabilistic but deterministic: discrimination consists of rules, used by discriminators, that are directed at everyone within the class. (It’s true that the phenomenon of “passing” raises questions about classes, but the whole point of “passing” is that individual discriminators are unaware of the class’ “true” boundaries.) So it’s easy to see how pattern-recognition might be a useful skill with which to combat racial or other forms of discrimination.

Matching a pattern, however, suffers from one difficulty: it requires the existence of a pattern to be matched. Yet, in the example discussed in “The Left Hemisphere’s Role in Hypothesis Formation”—as in everything influenced by probability—there is no pattern: there is merely a larger chance of the light being red rather than green in each instance. Attempting to match a pattern in a situation ruled instead by probability is not only unhelpful, but positively harmful: because there is no pattern, “guessing” cannot perform as well as simply maintaining the same choice every time. (Which in this case would at least result in being correct 70% of the time.) In probabilistic situations, in other words, where there is merely a certain probability of a given result rather than a certain pattern, both empirical evidence and mathematics itself demonstrate that the animal procedure of always guessing the same will be more successful than the human attempt at pattern recognition.

Hence, although training in recognizing patterns—the basis of schooling in literature, it might be said—might be valuable in combatting racism, such training will not be helpful in facing other sorts of problems: as the scientific literature demonstrates, pattern recognition as a strategy only works if there is a pattern. That in turn means that literary training can only be useful in a deterministic, and not probabilistic, world—and therefore the project of “literature,” so-called, can only be “liberatory” in the sense meant by its partisans if the obstacles from which human beings need liberation are pattern-based. And that assumption, it seems to me, is questionable at best.

Take, for example, the matter of American health care. Unlike all other industrialized nations, the United States does not have a single, government-run healthcare system, despite the fact that—as Malcolm Gladwell has noted—the American labor movement knew as early as the 1940s that “the safest and most efficient way to provide insurance against ill health or old age [is] to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, insurance works best by lumping, not splitting. The reason may be the same as the reason that, as the authors of “The Left Hemisphere’s Role in Hypothesis Formation” point out, “humans choose a less optimal strategy than rats” when it comes to probabilistic situations. Contrary to the theories of those in the humanities, in other words, the reality is that human beings in general—and Americans when it comes to health care—appear to have a basic unfamiliarity with the facts of probability.

One sign of that ignorance is, after all, the growth of casino gambling in the United States even as health care remains a hodgepodge of differing systems—despite the fact that both insurance and casinos run on precisely the same principle. As the trader and risk analyst Nassim Taleb has pointed out, casinos “never (if they do things right) lose money”—so long as they are not run by Donald Trump—because they “simply do not let one gambler make a massive bet” and instead prefer “to have plenty of gamblers make a series of bets of limited size.” In other words, it is not possible for some high roller to bet a Las Vegas casino its entire worth on a single hand of blackjack, or of any other game; casinos simply limit the stakes to something small enough that the continued existence of the business is not at risk on any one particular event, and then make sure that enough bets are being made to allow the odds of every game (which are tilted toward the casino), together with the laws of probability, to ensure the continued health of the business. Insurance, as Gladwell observed above, works precisely the same way: the more people paying premiums—and the more widely dispersed they are—the less likely it is that any one catastrophic event can wipe out the insurance fund. Both insurance and casinos are lumpers, not splitters: that, after all, is precisely why all other industrialized nations have put their health care systems on a national basis rather than maintaining the various subsystems that Americans—apparently inveterate splitters—still have.
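Taleb’s point about bet sizes can be put in numbers. The sketch below is my own illustration, with an assumed 4% house edge and equal even-money wagers (neither figure comes from Taleb or Gladwell); it computes the chance that the house ends a night behind, a chance that shrinks as one giant bet is replaced by many small ones:

```python
from math import comb

def p_house_behind(n_bets, p_house_wins=0.52):
    """Probability the house ends the night behind, given n_bets independent
    even-money wagers of equal size, each won by the house with p_house_wins."""
    # The house ends behind only if it wins fewer than half of the bets.
    max_losing_wins = (n_bets - 1) // 2  # largest win count strictly below n_bets/2
    return sum(comb(n_bets, k) * p_house_wins**k * (1 - p_house_wins)**(n_bets - k)
               for k in range(max_losing_wins + 1))

# One gambler staking everything on a single bet vs. many gamblers betting small.
for n in (1, 10, 100, 1_000):
    print(f"{n:>5,} equal bets: P(house ends the night behind) = {p_house_behind(n):.4f}")
```

The same arithmetic is why spreading premiums over “the biggest and most diverse group possible” protects an insurance fund.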

Health care, of course, is but one of the many issues of American life that, although influenced by racial or other kinds of discrimination, ultimately have little to do with it: what matters about health care, in other words, is that too few Americans are getting it, not merely that too few African-Americans are. The same is true, for instance, of incarceration: although such works as Michelle Alexander’s The New Jim Crow have argued that the fantastically high rate of incarceration in the United States constitutes a new “racial caste system,” University of Pennsylvania professor of political science Marie Gottschalk has pointed out that “[e]ven if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” The problem with American prisons, in other words, is that there are too many Americans in them, not (just) too many African-Americans—or any other sort of American.

Viewing politics through a literary lens, in sum—as a matter of flashes of insight and redescription, instantiated by Wittgenstein’s duck-rabbit figure and so on—ultimately has costs: costs that have been witnessed again and again in recent American history, from the War on Drugs to the War on Terror. As Warren recognizes, viewing such issues as health care or prisons through a literary, or more specifically racial, lens is ultimately an attempt to fit a square peg into a round hole—or, perhaps more appositely, to bring a knife to a gunfight. Warren, in short, may as well have cited UCLA philosophy professor Abraham Kaplan’s observation, sometimes called Kaplan’s Law of the Instrument: “Give a boy a hammer and everything he meets has to be pounded.” (Or, as Kaplan put the point more delicately, it ought not to be surprising “to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled.”) Much of the American “left,” in other words, views all problems as matters of redescription and so on—a belief not far from common American exhortations to “think positively” and the like. Certainly, America is far from the post-racial utopia some would like it to be. But diagnosing the disease is not—contrary to the beliefs of many Americans today—the same as curing it.

Like it—or lump it.

Best Intentions

L’enfer est plein de bonnes volontés ou désirs (“Hell is full of good intentions or desires”)
—St. Bernard of Clairvaux. c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for doing that asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff then is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. Despite the fact that, to many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations, the reason it could do so was that it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as (nearly) official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia argued his point throughout his career: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thus argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a stance that would place him, perhaps paradoxically from Michael Bérubé’s point of view, at the most rarefied heights of literary theory: it was after all the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew, Bérubé has a simple method by which he could avoid hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, just because anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—that is no reason to dismiss anti-intentionalism itself. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent from the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was in fact created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Since the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—therefore Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as Wills writes, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thus his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the constitution—and thus, the reasons for fighting the war—ultimately derived from Wilson.

 

[image]
Not this James Wilson.

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

James Wilson (1742-1798)
This James Wilson.

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not inanimate objects like the “soil,” that requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law in 1790 (among the very first such works of legal scholarship in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author ask. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The wonder occasioned by that discovery is then reinforced by the fact that, in this example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people who constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless asked us, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

Caterpillars

All scholars, lawyers, courtiers, gentlemen,
They call false caterpillars and intend their death.
—William Shakespeare. 2 Henry VI.

 

When Company A, 27th Armored Infantry Battalion, U.S. 1st Infantry Division (“the Big Red One”), reached the forested hills overlooking the Rhine in the early afternoon of 7 March 1945, and found the Ludendorff Bridge still, improbably, standing, they may have been surprised to discover that they had not only found the last passage beyond Hitler’s Westwall into the heart of Germany—but also stumbled into a controversy that is still, seventy years on, continuing. That controversy could be represented by an essay written some years ago by the Belgian political theorist Chantal Mouffe on the American philosopher Richard Rorty: the problem with Rorty’s work, Mouffe claimed, was that he believed that the “enemies of human happiness are greed, sloth, and hypocrisy, and no deep analysis is required to understand how they could be eliminated.” Such beliefs are capital charges in intellectual-land, where the stock-in-trade is precisely the kind of “deep analysis” that Rorty thought (at least according to Mouffe) unnecessary, so it’s little wonder that, for the most part, it’s Mouffe who’s had the better part of this argument—especially considering Rorty has been dead since 2007. Yet as the men of Company A might have told Mouffe—whose work is known, according to her Wikipedia article, for her “use of the work of Carl Schmitt” (a legal philosopher who joined the Nazi Party on 1 May 1933)—it’s actually Rorty’s work that explains just why they came to the German frontier; an account whose only significance lies in the fact that it may be the ascendance of Mouffe’s view over Rorty’s that explains such things as, for instance, why no one was arrested after the financial crisis of 2007-08.

That may, of course, sound like something of a stretch: what could the squalid affairs that nearly led to the crash of the world financial system have in common with such recondite matters as the dark duels conducted at academic conferences—or a lucky accident in the fog of war? But the link in fact is precisely at the Ludendorff, sometimes called “the Bridge at Remagen”—a bridge that might not have been standing for Company A to find had the Nazi state really been the complicated ideological product described by people like Mouffe, instead of the product of “ruthless gangsters, distinguishable only by their facial hair” (as Rorty, following Vladimir Nabokov, once described Lenin, Trotsky, and Stalin). That’s because, according to (relatively) recent historical work that unfortunately has not yet deeply penetrated the English-speaking world, in March 1945 the German generals who had led the Blitzkrieg in 1940 and ’41—and then headed the defense of Hitler’s criminal empire—were far more concerned with the routing numbers of their bank accounts than the routes into Germany.

As “the ring closed around Germany in February, March, and April 1945”—wrote Ohio State historian Norman Goda in 2003—“and as thousands of troops were being shot for desertion,” certain high-ranking officers who, in some cases, had been receiving extra “monthly payments” directly from the German treasury on orders of Hitler himself, and whose money had been “deposited into banks that were located in the immediate path of the enemy,” then “quickly arranged to have their deposits shifted to accounts in what they hoped would be in safer locales.” In other words, in the face of the Allied advance, Hitler’s generals—men like Heinz Guderian, who in 1943 was awarded “Deipenhof, an estate of 937 hectares (2,313 acres) worth RM [Reichsmark] 1.24 million” deep inside occupied Poland—were preoccupied with defending their money, not Germany.

Guderian—who led the tanks that broke the French lines at Sedan, the direct cause of the Fall of France in May 1940—was only one of many top-level military leaders who received secretive pay-offs even before the beginning of World War II: Walther von Brauchitsch, Guderian’s superior, had for example been getting—tax-free—double his salary since 1938, while Field Marshal Erhard Milch, who quit his prewar job of running Lufthansa to join the Luftwaffe, received a birthday “gift” from Hitler each year worth more than $100,000 U.S. They were just two of the many high military officers to receive such six-figure “birthday gifts,” or other payments, which Goda writes were not only “secret and dependent on behavior”—that is, on not telling anyone about the payments and on submission to Hitler’s will—but also “simply too substantial to have been viewed seriously as legitimate.” All of these characteristics, as any federal prosecutor will tell you, are hallmarks of corruption.

Such corruption, of course, was not limited to the military: the Nazis were, according to historian Jonathan Petropoulos, “not only the most notorious murderers in history but also the greatest thieves.” Or as historian Richard J. Evans has noted, “Hitler’s rule [was] based not just on dictatorship, but also on plunder, theft and looting,” beginning with the “systematic confiscation of Jewish assets, beginning almost immediately on the Nazi seizure of power in 1933.” That looting expanded once the war began; at the end of September 1939, for instance, Evans reports, the German government “decreed a blanket confiscation of Polish property.” Dutch historian Gerard Aalders has estimated that Nazi rule stole “the equivalent of 14 billion guilders in today’s money in Jewish-owned assets alone” from the Netherlands. In addition, Hitler and other Nazi leaders, like Hermann Göring, were also known for stealing priceless artworks in conquered nations (the subject of the recent film The Monuments Men). In the context of such thievery on such a grand scale it hardly appears a stretch to think they might pay off the military men who made it all possible. After all, the Nazis had been doing the same for civilian leaders virtually since the moment they took over the state apparatus in 1933.

Yet, there is one difference between the military leaders of the Third Reich and American leaders today—a difference perhaps revealed by their response when confronted after the war with the evidence of their plunder. At the “High Command Trial” at Nuremberg in the winter of 1947-’48, Walther von Brauchitsch and his colleague Franz Halder—who together led the Heer into France in 1940—denied that they ever took payments, even when confronted with clear evidence of just that. Milch, for instance, claimed that his “birthday present” was compensation for the loss of his Lufthansa job. All the other generals did the same: Goda notes that even Guderian, who was well-known for his Polish estate, “changed the dates and circumstances of the transfer in order to pretend that the estate was a legitimate retirement gift.” In short, they all denied it—which is interesting in light of the fact that, during the first Nuremberg trial, on 3 January 1946, a witness could casually admit to the murder of 90,000 people.

To admit receiving payments, in other words, was worse—to the generals—than admitting to setting Europe alight for essentially no reason. That it was so is revealed by the fact that the legal silence was matched by similar silences in postwar memoirs and so on, none of which (except Guderian’s, which as mentioned fudged some details) ever admitted taking money directly from the national till. That silence implies, in the first place, a conscious knowledge that these payments were simply too large to be legitimate. And that, in turn, implies a consciousness not merely of guilt, but also of shame—a concept that is simply incoherent without an understanding of what the act underlying the payments actually is. The silence of the generals, that is, implies that they had internalized a definition of corruption—unfortunately, however, a recent U.S. Supreme Court case, McDonnell v. United States, suggests that Americans (or at least the Supreme Court) have no such definition.

The facts of the case were that Robert McDonnell, then governor of Virginia, received $175,000 in benefits from the chief executive officer of a company called Star Scientific, presumably because Star Scientific not only wanted Virginia’s public universities to conduct research on its product, a “nutritional supplement” based on tobacco—but also felt McDonnell could conjure up the studies. The burden of the prosecution—according to Chief Justice John Roberts’ unanimous opinion—was then to show “that Governor McDonnell committed (or agreed to commit) an ‘official act’ in exchange for the loans and gifts.” At that point, then, the case turned on the definition of “official act.”

According to the federal bribery statute, an “official act” is

any decision or action on any question, matter, cause, suit, proceeding or controversy, which may at any time be pending, or which may by law be brought before any public official, in such official’s official capacity, or in such official’s place of trust or profit.

McDonnell, of course, held that the actions he admitted taking on Star Scientific’s behalf—including setting up meetings with other state officials, making phone calls, and hosting events—did not constitute “official acts” under the law. The federal prosecutors, just as obviously, held that they did.

To McDonnell, counting the acts he took on behalf of Star Scientific as “official acts” relied on a too-broad definition of the term: to him (or rather to his attorneys), the government’s definition made “‘virtually all of a public servant’s activities ‘official,’ no matter how minor or innocuous.’” The prosecutors argued that a broad definition of crooked acts is necessary to combat corruption; McDonnell argued that so broad a definition threatens the ability of public officials to act at all. Ultimately, his attorneys said, the broad sweep of the anti-corruption statute threatens constitutional government itself.

In the end the Court accepted that argument. In John Roberts’ words, the acts McDonnell committed could not be defined as anything “more specific and focused than a broad policy objective.” In other words, sure McDonnell got a bunch of stuff from a constituent, and then he did a bunch of things for that constituent, but the things that he did did not constitute anything more than simply doing his job—a familiar defense, to be sure, at Nuremberg.

The effective upshot of McDonnell, then, appears to be that the U.S. Supreme Court, at least, no longer has an adequate definition of corruption—which might appear to be a grandiose conclusion to hang on one court case, of course. But consider the response of Preet Bharara, former United States Attorney for the Southern District of New York, when he was asked by The New Yorker just why it was that his office did not prosecute anyone—anyone—in response to the financial meltdown of 2007-08. Sometimes, Bharara said in response, when “you see a building go up in flames, you have to wonder if there’s arson.” Sometimes, he continued, “it’s not arson, it’s an accident”—but sometimes “it is arson, and you can’t prove it.” Bharara’s comments suggested that the problem was an investigatory one: his investigators could not gather the right evidence. But McDonnell suggests that the problem may have been something else: a legal one, in which the difficulty isn’t with the evidence but rather with the conceptual category required to use the evidence to prosecute a crime.

That something is going on is revealed by a report from Syracuse University’s Transactional Records Access Clearinghouse, or TRAC, which found in 2011 that Department of Justice prosecutions for financial crimes had been falling since the early 1990s—despite the fact that the economic crisis of 2007 and 2008 was driven by extremely questionable financial transactions. Other studies observe that the administration of Ronald Reagan, not generally thought of as a crusading one, prosecuted more financial crime than Barack Obama’s did: in 2010, the Obama administration deported 393,000 immigrants—and prosecuted zero bankers.

The question, of course, is why that is so—to which any number of answers have been proposed. One, however, is especially resisted by those at the upper reaches of academia who are in the position of educating future federal prosecutors: people who, like Mouffe, think that

Democratic action … does not require a theory of truth and notions like unconditionality and universal validity but rather a variety of practices and pragmatic moves aimed at persuading people to broaden their commitments to others, to build a more inclusive community.

“Liberal democratic principles,” Mouffe goes on to claim, “can only be defended in a contextualist manner, as being constitutive of our form of life, and we should not try to ground our commitment to them on something supposedly safer”—that “something safer” being, I suppose, anything like the account ledgers of the German treasury from 1933 to 1945, which revealed the extent of Nazi corruption after the war.

To suggest, however, that there is a connection between the linguistic practices of professors and the failures of prosecutors is, of course, to engage in just the same style of argumentation as those who insist, with Mouffe, that it is “the mobilization of passions and sentiments, the multiplication of practices, institutions and language games that provide the conditions of possibility for democratic subjects and democratic forms of willing” that will lead to “the creation of a democratic ethos.” Among those who so insist is, for example, literary scholar Jane Tompkins, who once made a similar point by recommending, not “specific alterations in the current political and economic arrangements,” but instead “a change of heart.” But perhaps the rise of such a species of supposed “leftism” ought to be expected in an age characterized by vast economic inequality, which according to Nobel Prize-winning economist Joseph Stiglitz (a proud son of Gary, Indiana), “is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever.” The only question left, one supposes, is what else has been bought; the state of academia these days, it appears, suggests that academics can’t even see the Rhine, much less point the way to a bridge across.

No Hurry

The man who is not in a hurry will always see his way clearly; haste blunders on blindly.
—Titus Livius (Livy). Ab Urbe Condita. (From the Foundation of the City.) Book 22.

Just inland from the Adriatic coast, northwest of Bari, lies the little village of Canne. In Italian, the name means “reeds”: a nondescript name for a nondescript town. But the name has outlived at least one language, and will likely outlive another, all due to one August day more than 2000 years ago, when two ways of thinking collided; the conversation marked by that day has continued until now, and likely will outlive us all. One line of that conversation was taken up recently by a magazine likely as obscure as the village to most readers: Parameters, the quarterly publication of the U.S. Army War College. The article that continues the conversation whose earliest landmark may be found near the little river Ofanto is entitled “Intellectual Capital: A Case for Cultural Change,” and the argument of the piece’s three co-authors—all professors at West Point—is that “recent US Army promotion and command boards may actually penalize officers for their conceptual ability.” It’s a charge that, if true, ought first to scare the hell out of Americans (and everyone else on the planet), because it means that the single most fearsome power on earth is more or less deliberately being handed over to morons. But it ought, second, to scare the hell out of people because it suggests that the lesson first taught at the sleepy Italian town has still not been learned—a lesson suggested by two words I withheld from the professors’ charge sheet.

Those words? “Statistical evidence”: as in, “statistical evidence shows that recent US Army promotion and command boards …” What the statistical evidence marshaled by the West Pointers shows, it seems, is that

officers with one-standard-deviation higher cognitive abilities had 29 percent, 18 percent, and 32 percent lower odds, respectively, of being selected early … to major, early to lieutenant colonel, and for battalion command than their one-standard-deviation lower cognitive ability peers.

(A “standard deviation,” for those who don’t know—and the fact that you don’t is part of the story being told here—is a measure of how far from the mean, or average, a given set of data tends to spread: a low standard deviation means that the data tends to cluster pretty tightly, like a river in mountainous terrain, whereas a high one means that the data spreads widely, like a river’s delta.) The study controlled for gender, ethnicity, year group, athleticism, months deployed, military branch, geographic region, and cumulative scores as cadets—and found that “if two candidates for early promotion or command have the same motivation, ethnicity, gender, length of Army experience, time deployed, physical ability, and branch, and both cannot be selected, the board is more likely to select the officer with the lower conceptual ability.” In other words, in the Army, the smarter you are, the less likely you are to advance quickly—which, obviously, may affect just how far you are likely to go at all.
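
For concreteness, here is a minimal sketch of the “spread” being measured; every number below is invented, and only the contrast matters: two sets of data with the same average, one hugging its banks like the mountain river, the other fanning out like the delta.

```python
# A minimal, made-up illustration of standard deviation as "spread":
# both data sets have the same mean, but very different dispersion.
from statistics import mean, stdev

mountain_river = [99, 100, 100, 101, 100, 99, 101]   # tightly clustered values
river_delta    = [70, 130, 85, 115, 100, 60, 140]    # widely spread values

for name, data in [("mountain river", mountain_river), ("river delta", river_delta)]:
    print(f"{name}: mean = {mean(data):.1f}, standard deviation = {stdev(data):.1f}")
```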

That may be so, you might say, but maybe it’s just that smarter people aren’t very “devoted,” or “loyal” (or whatever sort of adjective one prefers), at least according to the military. This dichotomy even has a name in such circles: “Athens” vs. “Sparta.” According to the article, “Athens represents an institutional preference for intellectual ability, critical thinking, education, etc.,” while conversely “Sparta represents an institutional preference for motivation, tactical-ability, action-bias, diligence, intensity, physicality, etc.” So maybe the military is not promoting as many “Athenians” as “Spartans”—but then, maybe the military is simply a more “Spartan” organization than others. Maybe this study is just a bunch of Athenians whining about not being able to control every aspect of life.

Yet, if you think about it, that’s a pretty weird way to conceptualize things: why should “Athens” be opposed to “Sparta” at all? In other words, why should it happen that the traits these names attempt to describe are distributed in zero-sum packages? Why should it be that people with “Spartan” traits should not also possess “Athenian” traits, and vice versa? The whole world supposedly divides along just these lines—but I think any of us knows someone who is both, or neither, and if so then it seems absurd to think that possessing a “Spartan” trait implies a lack of a corresponding “Athenian” one. As the three career Army officers say, “motivation levels and cognitive ability levels are independent of each other.” Just because someone is intelligent does not mean they are likely to be unmotivated; indeed, it makes more sense to think just the opposite.

Yet, apparently, the upper levels of the U.S. military think differently: they seem to believe that devotion to duty precludes intelligence, and vice versa. We know this not because of stereotypes about military officials, but instead because of real data about how the military allocates its promotions. In their study, the three career Army officers report that they

found significant evidence that regardless of what motivation/diligence category officers were in (low, medium, or high) there was a lower likelihood the Army would select the officers for early promotion or battalion command the higher their cognitive ability, despite the fact that the promotion and selection boards had no direct information indicating each officer’s cognitive ability. (Emp. added).

This latter point is so significant that I highlight it: it demonstrates that the Army is—somehow—selecting against intelligence even when it, supposedly, doesn’t know whether a particular candidate has it or not. Nonetheless, the boards are apparently able to suss it out (which itself is a pretty interesting use of intelligence) in order to squash it, and not only that, squash it no matter how devoted a given officer might be. In sum, these boards are not selecting against intelligence because they are selecting for devotion, or whatever, but instead are just actively attempting to promote less-intelligent officers.

Now, it may then be replied, that may be so—but perhaps fighting wars is not similar to doing other types of jobs. Or as the study puts it: perhaps “officers with higher intellectual abilities may actually make worse junior officers than their average peers.” If so, as the three career Army officers point out, such a situation “would be diametrically opposed to the … academic literature” on leadership, which finds a direct relationship between cognitive ability and success. Even so, however, perhaps war is different: the “commander of a top-tier special operations selection team,” the three officers say, reported that his team rejected candidates who scored too high on a cognitive ability test, on the grounds that such candidates “‘take too long to make a decision’”—despite the fact that, as the three officers point out, “research has shown that brighter people come up with alternatives faster than their average-conceptual-level peers.” Thinking that intelligence inhibits action, in other words, would make war essentially different from virtually every other human activity.

Of course, had that commander been in charge of recruitment during the U.S. Civil War, that would have meant not employing a hard-drinking former captain who had resigned his commission under a cloud, a man later denounced as “an unimaginative butcher in war and a corrupt, blundering drunkard in peace,” who failed in all the civilian jobs he undertook, as a farmer and even a simple store clerk, and came close to bankruptcy several times over the course of his life. That man was Ulysses S. Grant—the man about whom Abraham Lincoln would say, when his critics pointed to his poor record, “I cannot spare this man; he fights!” (In other words, he did not hesitate to act.) Grant would, as is known, eventually accept the surrender of his adversary, Robert E. Lee, at Appomattox Court House; hence, a policy that runs the risk of not finding Grant in time appears, at best, pretty cavalier.

Or, as the three career Army officers write, “if an organization assumes an officer cannot be both an Athenian and a Spartan, and prefers Spartans, any sign of Athenians will be discouraged,” and so therefore “when the Army needs senior officers who are Athenians, there will be only Spartans remaining.” The opposite view somehow assumes that smart people will still be around when they are needed—but when they are needed, they are really needed. Essentially, this view more or less says that the Army should not worry about its ammunition supply, because if something ever happened to require a lot of ammunition the Army could just go get more. Never mind the fact that, at such a moment, everyone else is probably going to want some ammunition too. It’s a pretty odd method of thinking that treats physical objects as more important than the people who use them—after all, as we know, guns don’t kill people, people do.

Still, the really significant thing about Grant is not the man himself, but rather that he represented a particular method of thinking: “I propose to fight it out on this line, if it takes all summer,” Grant wrote to Abraham Lincoln in May 1864; “Hold on with a bulldog grip, and chew and choke as much as possible,” Lincoln replied to Grant a few months later. Although Grant is, as above, sometimes called a “butcher” who won the Civil War simply by firing more bodies at the Confederacy than the Southerners could shoot, he clearly wasn’t the idiot certain historians have made him out to be: the “‘one striking feature about Grant’s [written] orders,’” as another general would observe later, was that no “‘matter how hurriedly he may write them in the field, no one ever had the slightest doubt as to their meaning, or even has to read them over a second time to understand them.’” Rather than being unintelligent, Grant had a particular way of thinking: as one historian has observed, “Grant regard[ed] his plans as tests,” so that Grant would “have already considered other options if something doesn’t work out.” Grant had a certain philosophy, a method of both thinking and doing things—which he more or less thought of as the same thing. But Grant did not invent that method of thinking. It was already old when a certain Roman senator conceived of a single sentence that, more or less, captured Grant’s philosophy—a sentence that, in turn, referred to a certain village near the Adriatic coast.

The road to that village is, however, a long one; even now we are just more than halfway there. The next step taken upon it was by a man named Quintus Fabius Maximus Verrucosus—another late bloomer, much like Grant. According to Plutarch, whose Parallel Lives sought to compare the biographies of famous Greeks and Romans, as a child Fabius was known for his “slowness in speaking, his long labour and pains in learning, his deliberation in entering into the sports of other children, [and] his easy submission to everybody, as if he had no will of his own,” traits that led many to “esteem him insensible and stupid.” Yet, as he was educated he learned to make his public speeches—required of young aristocratic Romans—without “much of popular ornament, nor empty artifice,” and instead with a “great weight of sense.” And also like Grant, who in the last year of the war faced a brilliant opposing general in Robert E. Lee, Fabius would eventually face an ingenious military leader who desired nothing more than to meet his adversary in battle—where that astute mind could destroy the Roman army in a single day and so, possibly, win the freedom of his nation.

That adversary was Hannibal Barca, the man who had marched his army, including his African war elephants, across the Alps into Italy. Hannibal was a Carthaginian: a native of Carthage, the Phoenician city on the North African coast that had already fought one massive war with Rome (the First Punic War) and had now, through Hannibal’s invasion, embarked on a second. Carthage was about as rich and powerful as Rome itself, so by invading Hannibal posed a mortal threat to the Italians—not least because Hannibal already had quite a reputation as a general. Hence Fabius, who by this time had himself been selected to oppose the invader, “deemed it best not to meet in the field a general whose army had been tried in many encounters, and whose object was a battle,” and instead attempted to “let the force and vigour of Hannibal waste away and expire, like a flame, for want of fuel,” as Plutarch put the point. Instead of attempting to meet Hannibal in a single battle, where the African might out-general him, Fabius attempted to wear down an invader far from his home base.

For some time things continued like this: Hannibal ranged about Italy, attempting to provoke Fabius into battle, while the Roman followed meekly at a distance; according to his enemies, as if he were Hannibal’s servant. Meanwhile, according to Plutarch, Hannibal himself sought to encourage that idea: burning the countryside around Rome, the Carthaginian made sure to post armed guards around Fabius’ estates in order to suggest that the Roman was in his pay. Eventually, these stratagems had their effect, and after a further series of misadventures, Fabius retired from command—just the event Hannibal awaited.

The man who took command after Fabius was Varro, and it was he who led the Romans to the small village near the Adriatic coast. What happened near that village more than 2000 years ago might be summed up by an image that may be familiar to viewers of the television show Game of Thrones:

[image: the battle]

On the television show the chaotic mass in the middle is the tiny army of the character Jon Snow, whereas the orderly lines about the perimeter are the much vaster army of Ramsay Bolton. But in historical reality, the force in the center being surrounded was actually the larger of the two—the Roman army. It was the smaller of the two armies, the Carthaginian one, that stood at the periphery. Yet, somehow, the outcome was more or less the same: the soldiers on the outside of that circle destroyed the soldiers on the inside, despite being outnumbered by them; a fact so surprising that the battle is not only still remembered, but was also the subject of not one but two remarks that remain famous today.

The first of these is a remark made just before the battle itself—a remark that came in reply to the comment of one of Hannibal’s lieutenants, an officer named Gisgo, on the disparity in size between the two armies. The intent of Gisgo’s remark was, it would seem, something to the effect of, “you’re sure this is going to work, right?” To which Hannibal replied: “another thing that has escaped your notice, Gisgo, is even more amazing—that although there are so many of them, there is not one among them called Gisgo.” That is to say, Gisgo is a unique individual, and so the numbers do not matter … etc., etc. We can all fill in the arguments from there: the power of the individual, the singular force of human creativity, and so on. In the case of the incident outside Cannae, those platitudes happened to be true—Hannibal really was a kind of tactical genius. But he also happened not to be facing Fabius that day.

Fabius himself was not the sort of person who could sum up his thought in a pithy (and trite) remark, but I think that the germ of his idea was distilled more than a century after the battle by another Roman senator. “Did all the Romans who fell at Cannae”—the ancient name for the village now known as Canne—“have the same horoscope?” asked Marcus Cicero, in a book entitled De Divinatione. The comment is meant as a deflationary pinprick, designed to explode the pretensions of the followers of Hannibal—a point revealed by a subsequent sentence: “Was there ever a day when countless numbers were not born?” The comment’s point, in other words, is much the same one Cicero made in another of his works, when he tells a story about the atheistic philosopher Diagoras. Reproaching him for his atheism, a worshipper directed Diagoras to the many painted tablets in praise of the gods at the local temple—tablets produced by storm survivors who had taken a vow to have such a tablet painted while enveloped by the sea’s power. Diagoras replied, according to Cicero, that this is so merely “because there are no pictures anywhere of those who have been shipwrecked.” In other words: check your premises, sportsfans: what you think may be the result of “creativity,” or some other malarkey, may simply be due to the actions of chance—in the case of Hannibal, the fact that he happened not to be fighting Fabius.
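
Cicero’s point about the tablets is what statisticians now call survivorship bias, and it too can be put into a toy simulation; the survival probability below is invented, but the moral is Diagoras’ own: the drowned paint no tablets, so the tablets by themselves will always say the vow worked.

```python
# A minimal, made-up sketch of Diagoras' point: if only survivors leave a
# record, the record will "show" that vows always work, whatever the real odds.
import random

random.seed(7)
P_SURVIVE = 0.3    # assumed chance of surviving the storm, vow or no vow
sailors = 10_000   # every one of them takes the vow

tablets = sum(random.random() < P_SURVIVE for _ in range(sailors))
print(f"sailors who vowed, survived, and painted tablets: {tablets}")
print("tablets painted by the drowned:                   0")
print("apparent success rate of the vow, by the tablets: 100%")
print(f"actual survival rate:                             {tablets / sailors:.0%}")
```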

Or, more specifically, to a statistical concept called the Law of Large Numbers. First explicitly described by the mathematician Jacob Bernoulli in 1713, this is the law that holds—in Bernoulli’s words—that “it is not enough to take one or another observation for […] reasoning about an event, but that a large number of them are needed.” In a crude way, this law is what critics of Grant refer to when they accuse him of being a “butcher”: that he simply applied the larger numbers of men and matériel available to the Union side to the war effort. It’s also what the enemies of the man who ought to have been on the field at Cannae—but wasn’t—said about him: that Fabius fought what military strategists call a “war of attrition” rather than a “war of maneuver.” Then, as since, many have turned their noses up at such methods: in ancient times, they were thought ignoble, unworthy—which was why Varro insisted on rejecting what he might have called an “old man strategy” and went on the attack that August day. Yet, those methods were precisely the means by which, two millennia apart, two very similar men saved their countries from very similar threats.
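
Bernoulli’s law is easy to see in another toy simulation; the 55 percent “edge” below is invented, but the pattern holds for any tilt: a single trial reveals almost nothing, while the average of many trials settles on the true odds, which is the sense in which both Fabius and Grant were betting on the long run.

```python
# A minimal sketch of the Law of Large Numbers: one flip (or one battle)
# tells you little, but the frequency over many flips converges on the
# true probability. The "true odds" here are invented for illustration.
import random

random.seed(42)
TRUE_ODDS = 0.55  # assumed probability that a single trial goes the stronger side's way

for n in (1, 10, 100, 10_000, 1_000_000):
    wins = sum(random.random() < TRUE_ODDS for _ in range(n))
    print(f"{n:>9} trials: observed frequency = {wins / n:.3f}")
```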

Today, of course, very many people on the American “Left” say that what they call “scientific” and “mathematical” thought is the enemy. On the steps of the University of California’s Sproul Hall, more than fifty years ago, the Free Speech Movement’s Mario Savio denounced “the operation of the machine”; some years prior to that, the German Marxist Theodor Adorno and his collaborator Max Horkheimer had condemned the spread of such thought as, more or less, the precondition necessary for the Holocaust: “To the Enlightenment,” the two sociologists wrote, “that which does not reduce to numbers, and ultimately to the one, becomes illusion.” According to Bruce Robbins of Columbia University, “the critique of Enlightenment rationality is what English departments were founded on,” while it’s also been observed that, since the 1960s, “language, symbolism, text, and meaning came to be seen as the theoretical foundation for the humanities.” But as I have attempted to show, the notions conceived of by these writers as belonging to a particular part of the Eurasian landmass at a particular moment of history may not be so particular after all.

Leaving those large-scale considerations aside, however, returns us to the discussion concerning promotions in the U.S. military—where the assertions of the three career officers apparently cannot be allowed to go unchallenged. A reply to their article from a Parameters editorial board member, predictably enough, takes them to task for not recognizing that “there are multiple kinds of intelligence,” and for instead suggesting that there is “only one particular type of intelligence”—you know, just the same smear used by Adorno and Horkheimer. The author of that article, Anna Simons (a professor at the U.S. Naval Postgraduate School), further intimates that the three officers do not possess “a healthy respect for variation”—i.e., “diversity.” Which, finally, brings us to the point of all this: what is really happening within the military is that, in order to promote what is called “diversity,” standards have to be amended in such a fashion as not only to include women and minorities, but also dumb people.

In other words, the social cost of what is known as “inclusiveness” is simultaneously a general “dumbing-down” of the military: promoting women and minorities also means rewarding not-intelligent people—and, because statistically speaking there simply are more dumb people than not, that also means suppressing smart people who are like Grant, or Fabius. It never appears to occur to anyone that, more or less, talking about “variation” and the like is what the enemies of Grant—or, further back, the enemies of Fabius—said also. But, one supposes, that’s just how it goes in the United States today: neither Grant nor Fabius were called to service until their countrymen had been scared pretty badly. It may be, in other words, that the American military will continue to suppress people with high cognitive abilities within its ranks—apparently, 9/11 and its consequences were not enough like the battle fought near the tiny Italian village to change American views on these matters. Statistically speaking, after all, 9/11 only killed 0.001% of the U.S. population, whereas Cannae killed perhaps a third of the members of the Roman Senate. That, in turn, raises the central question: If 9/11 was not enough to convince Americans that something isn’t right, well—

What will?