Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, the president explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but, also—and as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more,” it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet, why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that Dwight Eisenhower, who had commanded the Allied war in western Europe, selected Wilson to be his Secretary of Defense after being elected president in 1952—which led to the confirmation hearings that made Wilson famous, and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse for General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how sometimes people within an organization can forget about the larger purposes of the organization—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” asserts that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, in order to understand how an organization’s purposes might differ from those of the larger society, a big clue might be in how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that figure would be worth nearly ten times as much today—and, given contemporary trends in corporate pay for executives, would surely be even greater than that: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other financial sorts of reward of the type possessed by Charlie Wilson, but also two other sorts: internal rewards within the organization itself—and rewards concerning the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” thereby raises is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may perhaps circle back around to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by Columbia University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-first Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, “that racial identity trumps class,” rather than the other way. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more-or-less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law in this case, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in less productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
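(For the counters, here is a minimal sketch of Stiglitz’ arithmetic in Python. The propensities to consume—the fraction of income that actually gets spent—are made-up assumptions for illustration, not Stiglitz’ figures; only the quoted $21.7 million and the 500 salaries of $43,400 come from the passage above.)

```python
# A minimal sketch of the aggregate-demand point above, with assumed
# (hypothetical) propensities to consume: one rich household spends only a
# fraction of a very large income, while 500 households each spend nearly
# all of a $43,400 salary.
income = 21_700_000          # Romney's 2010 income, as quoted above
rich_spend_rate = 0.10       # assumed: only a fraction of that sum gets spent
modest_spend_rate = 0.95     # assumed: almost all of a modest salary gets spent

concentrated_demand = income * rich_spend_rate
distributed_demand = 500 * (income / 500) * modest_spend_rate

print(f"${concentrated_demand:,.0f} spent if the income stays at the top")
print(f"${distributed_demand:,.0f} spent if it is divided into 500 salaries of ${income / 500:,.0f}")
```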

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Wolff and Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

Best Intentions

L’enfer est plein de bonnes volontés ou désirs [Hell is full of good intentions or desires]
—St. Bernard of Clairvaux. c. 1150 A.D.

“And if anyone knows Chang-Rae Lee,” wrote Penn State English professor Michael Bérubé back in 2006, “let’s find out what he thinks about Native Speaker!” The reason Bérubé gives for asking is, first, that Lee wrote the novel under discussion, Native Speaker—and second, that Bérubé “once read somewhere that meaning is identical with intention.” But this isn’t the beginning of an essay about Native Speaker. It’s actually the end of an attack on a fellow English professor: the University of Illinois at Chicago’s Walter Benn Michaels, who (along with Steven Knapp, now president of George Washington University) wrote the 1982 essay “Against Theory”—an essay that argued that “the meaning of a text is simply identical to the author’s intended meaning.” Bérubé’s closing scoff, then, is meant to demonstrate just how politically conservative Michaels’ work is—earlier in the same piece, Bérubé attempted to tie Michaels’ work to Arthur Schlesinger, Jr.’s The Disuniting of America, a book that, because it argued that “multiculturalism” weakened a shared understanding of the United States, has much the same status among some of the intelligentsia that Mein Kampf has among Jews. Yet—weirdly for a critic who often insists on the necessity of understanding historical context—it’s Bérubé’s essay that demonstrates a lack of contextual knowledge, while it’s Michaels’ view—weirdly for a critic who has echoed Henry Ford’s claim that “History is bunk”—that demonstrates a possession of it. In historical reality, that is, it’s Michaels’ pro-intention view that has been the politically progressive one, while it’s Bérubé’s scornful view that shares essentially everything with traditionally conservative thought.

Perhaps that ought to have been apparent right from the start. Despite the fact that, to many English professors, the anti-intentionalist view has helped to unleash enormous political and intellectual energies on behalf of forgotten populations, the reason it could do so was that it originated from a forgotten population that, to many of those same professors, deserves to be forgotten: white Southerners. Anti-intentionalism, after all, was a key tenet of the critical movement called the New Criticism—a movement that, as Paul Lauter described in a presidential address to the American Studies Association in 1994, arose “largely in the South” through the work of Southerners like John Crowe Ransom, Allen Tate, and Robert Penn Warren. Hence, although Bérubé, in his essay on Michaels, insinuates that intentionalism is politically retrograde (and perhaps even racist), it’s actually the contrary belief that can be more concretely tied to a conservative politics.

Ransom and the others, after all, initially became known through a 1930 book entitled I’ll Take My Stand: The South and the Agrarian Tradition, a book whose theme was a “central attack on the impact of industrial capitalism” in favor of a vision of a specifically Southern tradition of a society based around the farm, not the factory. In their vision, as Lauter says, “the city, the artificial, the mechanical, the contingent, cosmopolitan, Jewish, liberal, and new” were counterposed to the “natural, traditional, harmonious, balanced, [and the] patriarchal”: a juxtaposition of sets of values that wouldn’t be out of place in a contemporary Republican political ad. But as Lauter observes, although these men were “failures in … ‘practical agitation’”—i.e., although I’ll Take My Stand was meant to provoke a political revolution, it didn’t—“they were amazingly successful in establishing the hegemony of their ideas in the practice of the literature classroom.” Among the ideas that they instituted in the study of literature was the doctrine of anti-intentionalism.

The idea of anti-intentionalism itself, of course, predates the New Criticism: writers like T.S. Eliot (who grew up in St. Louis) and the University of Cambridge don F.R. Leavis are often cited as antecedents. Yet it did not become institutionalized as (nearly) the official doctrine of English departments (which themselves hardly existed) until the 1946 publication of W.K. Wimsatt and Monroe Beardsley’s “The Intentional Fallacy” in The Sewanee Review. (The Review, incidentally, is a publication of Sewanee: The University of the South, which was, according to its Wikipedia page, originally founded in Tennessee in 1857 “to create a Southern university free of Northern influences”—i.e., abolitionism.) In “The Intentional Fallacy,” Wimsatt and Beardsley explicitly “argued that the design or intention of the author is neither available nor desirable as a standard for judging the success of a work of literary art”—a doctrine that, in the decades that followed, did not simply become a key tenet of the New Criticism, but also largely became accepted as the basis for work in English departments. In other words, when Bérubé attacks Michaels in the guise of acting on behalf of minorities, he also attacks him on behalf of the institution of English departments—and so just who the bully is here isn’t quite so easily made out as Bérubé makes it appear.

That’s especially true because anti-intentionalism wasn’t just born and raised among conservatives—it has also continued to be a doctrine in conservative service. Take, for instance, the teachings of conservative Supreme Court justice Antonin Scalia, who throughout his career championed a method of interpretation he called “textualism”—by which he meant (!) that, as he said in 1995, it “is the law that governs, not the intent of the lawgiver.” Scalia pressed the point again and again: in 1989’s Green v. Bock Laundry Mach. Co., for instance, he wrote that the

meaning of terms on the statute books ought to be determined, not on the basis of which meaning can be shown to have been understood by the Members of Congress, but rather on the basis of which meaning is … most in accord with context and ordinary usage … [and is] most compatible with the surrounding body of law.

Scalia thusly argued that interpretation ought to proceed from a consideration of language itself, apart from those who speak it—a position that would place him, perhaps paradoxically from Michael Bérubé’s perspective, among the most rarefied heights of literary theory: it was after all the formidable German philosopher Martin Heidegger—a twelve-year member of the Nazi Party and sometime favorite of Bérubé’s—who wrote the phrase “Die Sprache spricht”: “Language [and, by implication, not speakers] speaks.” But, of course, that may not be news Michael Bérubé wishes to hear.

Like Odysseus’ crew, however, Bérubé has a simple method by which he could avoid hearing the point: all of the above could be dismissed as an example of the “genetic fallacy.” First defined by Morris Cohen and Ernest Nagel in 1934’s An Introduction to Logic and Scientific Method, the “genetic fallacy” is “the supposition that an actual history of any science, art, or social institution can take the place of a logical analysis of its structure.” That is, the arguments above could be said to be like the argument that would dismiss anti-smoking advocates on the grounds that the Nazis were also anti-smoking: just because the Nazis were against smoking is no reason not to be against smoking also. In the same way, just because anti-intentionalism originated among conservative Southerners—and also, as we saw, committed Nazis—is no reason to dismiss the thought of anti-intentionalism. Or so Michael Bérubé might argue.

That would be so, however, only insofar as the doctrine of anti-intentionalism were independent from the conditions from which it arose: the reasons to be against smoking, after all, have nothing to do with anti-Semitism or the situation of interwar Germany. But in fact the doctrine of anti-intentionalism—or rather, to put things in the correct order, the doctrine of intentionalism—has everything to do with the politics of its creators. In historical reality, the doctrine enunciated by Michaels—that intention is central to interpretation—was in fact created precisely in order to resist the conservative political visions of Southerners. From that point of view, in fact, it’s possible to see the Civil War itself as essentially fought over this principle: from this height, “slavery” and “states’ rights” and the rest of the ideas sometimes advanced as reasons for the war become mere details.

It was, in fact, the very basis upon which Abraham Lincoln would fight the Civil War—though to see how requires a series of steps. They are not, however, especially difficult ones: in the first place, Lincoln plainly said what the war was about in his First Inaugural Address. “Unanimity is impossible,” as he said there, while “the rule of a minority, as a permanent arrangement, is wholly inadmissible.” Not everyone will agree all the time, in other words, yet the idea of a “wise minority” (Plato’s philosopher-king or the like) has been tried for centuries—and been found wanting; therefore, Lincoln continued, by “rejecting the majority principle, anarchy or despotism in some form is all that is left.” Lincoln thereby concluded that “a majority, held in restraint by constitutional checks and limitations”—that is, bounds to protect the minority—“is the only true sovereign of a free people.” Since the Southerners, by seceding, threatened this idea of government—the only guarantee of free government—therefore Lincoln was willing to fight them. But where did Lincoln obtain this idea?

The intellectual line of descent, as it happens, is crystal clear: as Garry Wills writes, “Lincoln drew much of his defense of the Union from the speeches of [Daniel] Webster”: after all, the Gettysburg Address’ famous phrase, “government of the people, by the people, for the people” was an echo of Webster’s Second Reply to Hayne, which contained the phrase “made for the people, made by the people, and answerable to the people.” But if Lincoln got his notions of the Union (and thusly, his reasons for fighting the war) from Webster, then it should also be noted that Webster got his ideas from Supreme Court Justice Joseph Story: as Theodore Parker, the Boston abolitionist minister, once remarked, “Mr. Justice Story was the Jupiter Pluvius [Raingod] from whom Mr. Webster often sought to elicit peculiar thunder for his speeches and private rain for his own public tanks of law.” And Story, for his part, got his notions from another Supreme Court justice: James Wilson, who—as Linda Przybyszewski notes in passing in her book, The Republic According to John Marshall Harlan (a later Supreme Court justice)—was “a source for Joseph Story’s constitutional nationalism.” So in this fashion Lincoln’s arguments concerning the constitution—and thus, the reasons for fighting the war—ultimately derived from Wilson.

 

Not this James Wilson.

Yet, what was that theory—the one that passed by a virtual apostolic succession from Wilson to Story to Webster to Lincoln? It was derived, most specifically, from a question Wilson had publicly asked in 1768, in his Considerations on the Nature and Extent of the Legislative Authority of the British Parliament. “Is British freedom,” Wilson had there asked, “denominated from the soil, or from the people, of Britain?” Nineteen years later, at the Constitutional Convention of 1787, Wilson would echo the same theme: “Shall three-fourths be ruled by one-fourth? … For whom do we make a constitution? Is it for men, or is it for imaginary beings called states?” To Wilson, the answer was clear: constitutions are for people, not for tracts of land, and as Wills correctly points out, it was on that doctrine that Lincoln prosecuted the war.

James Wilson (1742-1798)
This James Wilson.

Still, although all of the above might appear unobjectionable, there is one key difficulty to be overcome. If, that is, Wilson’s theory—and Lincoln’s basis for war—depends on a theory of political power derived from people, and not inanimate objects like the “soil,” that requires a means of distinguishing between the two—which perhaps is why Wilson insisted, in his Lectures on Law in 1790 (among the first such works of legal scholarship in the United States), that “[t]he first and governing maxim in the interpretation of a statute is to discover the meaning of those who made it.” Or—to put it another way—the intention of those who made it. It’s intention, in other words, that enables Wilson’s theory to work—as Knapp and Michaels well understand in “Against Theory.”

The central example of “Against Theory,” after all, is precisely about how to distinguish people from objects. “Suppose that you’re walking along a beach and you come upon a curious sequence of squiggles in the sand,” Michaels and his co-author ask. These “squiggles,” it seems, appear to be the opening lines of Wordsworth’s “A Slumber”: “A slumber did my spirit seal.” The sense of wonder such a discovery would provoke is then reinforced by the fact that, in this example, the next wave leaves, “in its wake,” the next stanza of the poem. How, Knapp and Michaels ask, is this event to be explained?

There are, they say, only two alternatives: either to ascribe “these marks to some agent capable of intentions,” or to “count them as nonintentional effects of mechanical processes,” like some (highly unlikely) process of erosion or wave action or the like. Which, in turn, leads up to the $64,000 question: if these “words” are the result of “mechanical processes” and not the actions of an actor, then “will they still seem to be words?”

The answer, of course, is that they will not: “They will merely seem to resemble words.” Thus, to deprive (what appear to be) the words “of an author is to convert them into accidental likenesses of language.” Intention and meaning are, in this way, identical to each other: no intention, no meaning—and vice versa. Similarly, I suggest, to Lincoln (and his intellectual antecedents), the state is identical to its people—and vice versa. Which, clearly, then suggests that those who deny intention are, in their own fashion—and no matter what they say—secessionists.

If so, then it would follow, conversely, that those who think—along with Knapp and Michaels—that it is intention that determines meaning, and—along with Lincoln and Wilson—that it is people that constitute states, really could—unlike the sorts of “radicals” Bérubé is attempting to cover for—construct the United States differently, in a fashion closer to the vision of James Wilson as interpreted by Abraham Lincoln. There are, after all, a number of things about the government of the United States that still lend themselves to the contrary theory, that power derives from the inanimate object of the soil: the Senate, for one. The Electoral College, for another. But the “radical” theory espoused by Michael Bérubé and others of his ilk does not allow for any such practical changes in the American constitutional architecture. In fact, given its collaboration—a word carefully chosen—with conservatives like Antonin Scalia, it does rather the reverse.

Then again, perhaps that is the intention of Michael Bérubé. He is, after all, an apparently personable man who nevertheless asked us, in a 2012 essay in the Chronicle of Higher Education explaining why he resigned the Paterno Family Professorship in Literature at Pennsylvania State University, to consider just how horrible the whole Jerry Sandusky scandal was—for Joe Paterno’s family. (Just “imagine their shock and grief” at finding out that the great college coach may have abetted a child rapist, he asked—never mind the shock and grief of those who discovered that their child had been raped.) He is, in other words, merely a part-time apologist for child rape—and so, I suppose, on his logic we ought to give a pass to his slavery-defending, Nazi-sympathizing, “intellectual” friends.

They have, they’re happy to tell us after all, only the best intentions.

Just Say No

Siger wished to remain a professing Catholic, and to safeguard his faith he had recourse to the celebrated theory of the two truths: what is true in philosophy may be false in religion, and vice versa.
—“Siger of Brabant” New Catholic Encyclopedia. 1914. 
If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.
—Thomas Aquinas. Summa Contra Gentiles
“The Triumph of Thomas Aquinas Over Averroës,” Benozzo Gozzoli (1420-1497)

“Let no one,” read the sign over Plato’s Academy, the famed school of ancient Athens, “ignorant of mathematics enter here.” To Plato, understanding mathematics was prerequisite to the discussion of other topics, including politics. During the 1880s, however, some professors in the German university system (like Wilhelm Windelband and Wilhelm Dilthey) divided knowledge into what they called “Geisteswissenschaften” (“human sciences”) and “Naturwissenschaften” (“natural sciences”), so that where Plato viewed mathematics as a necessary substrate in a vertical, hierarchical relation with other fields, the Germans thought of that relation horizontally, as if they were co-equals. Today, that German conception is best exemplified by what’s known as “science studies”: the “institutional goal” of which, as Mark Bauerlein of Emory University observed some years ago, is “to delimit the sciences to one knowledge domain, to show that they speak not for reality, but for certain constructions of reality.” (Or, as one of the founders of “science studies”—Andrew Ross—began a book on the matter back in the early 1990s: “This book is dedicated to all of the science teachers I never had. It could only have been written without them.”) Yet, while it may be that the German horizontal conception (to use Plato’s famous metaphor) “carves nature at the joint” better than Plato’s vertical one, the trouble with thinking of the mathematical, scientific world as one thing and the world of the human, including the political, as something else is that, among other costs, it makes it very difficult to tell—as exemplified by two different accounts of the same historical event—the story of George Washington’s first veto. Although many people appear to think of the “humanities” as just the ticket to escape America’s troubled political history … well, maybe not.

The first account I’ll mention is a chapter entitled “The First Veto,” contained in a book published in 2002 called Political Numeracy: Mathematical Perspectives on Our Chaotic Constitution. Written by law professor Michael Meyerson of the University of Baltimore, the book is deeply influenced by the German, horizontal view: Meyerson begins by observing that, when he began law school, his torts teacher sneered to his class that if any of them “were any good in math, you’d all be in medical school,” and goes on to observe that the “concept [that] mathematics can be relevant to the study of law seems foreign to many modern legal minds”—presumably, due to the German influence. Meyerson writes his book, then, as an argument against the German horizontal concept—and hence, implicitly, in favor of the Platonic, Greek one. Yet Meyerson’s work is subtly damaged by contact with the German view: it is not as good a treatment of the first presidential veto as another depiction of that same event—one written long before the German distinction came to be dominant in the United States.

That account was written by political scientist Edward James of the University of Chicago, and is entitled The First Apportionment of Federal Representatives in the United States: A Study in American Politics. Published in 1896, or more than a century before Meyerson’s account, it is nevertheless wholly superior: in the first place because of its level of detail, but in the second because—despite being composed in what might appear to contemporary readers as a wholly benighted time—it’s actually far more sophisticated than Meyerson on precisely the subject that the unwary might suppose him to be weakest on. But before taking up that matter, it might be best to explain just what the first presidential veto was about.

George Washington only issued two vetoes during his two terms as president of the United States, which isn’t a record—several presidents have issued zero vetoes, including George W. Bush in his first term. But two is a pretty low number of vetoes: the all-time record holder, Franklin Roosevelt, issued 635 vetoes over his twelve years in office, and two others have issued more than a hundred. Yet while Washington’s second veto, concerning the War Department, appears fairly inconsequential today, his first veto has had repercussions that still echo in the United States. That’s because it concerned what’s of tremendous political importance to all Americans even now: the architecture of the national legislature, Congress. But it also (in a fashion that may explain just why Washington’s veto does not receive the attention it might) concerned that basic mathematical operation: division.

The structure of the Congress is detailed in Article One of the U.S. Constitution, whose first section vests the legislative power of the national government in Congress and then divides that Congress into two houses, the Senate and the House of Representatives. Section Two of Article One describes the House of Representatives, and Clause Three of Section Two describes, among other things, just how members of the House should be distributed around the nation: the members should, the clause says, “not exceed one for every thirty Thousand” inhabitants. But it also says that “each state shall have at Least one Representative”—and that causes all the trouble.

“At the heart of the dispute,” as Meyerson remarks, is a necessarily small matter: “fractions.” Or, as James puts it in what I think of as his admirably direct fashion:

There will always be remainders after dividing the population of the state by the number of people entitled to a representative, and so long as this is true, an exact division on numerical basis is impossible, if state lines must be observed in the process of apportionment.

It isn’t possible, in other words, to have one-sixth of a congressman (no matter what we might think of her cognitive abilities), nor is it likely that state populations will be an easily divisible number. If it were possible to ignore state lines it would also be possible to divide up the country by population readily: as James remarks, without having to take into account state boundaries the job would be “a mere matter of arithmetic.” But because state boundaries have to be taken into account, it isn’t.

The original bill—the one that Washington vetoed—tackled this problem in two steps: in the first place, it took the country’s population, which the 1790 Census revealed to be (on Census Day, 2 August 1790) 3,929,214, and divided it by 33,000 (a ratio that does not exceed one representative per 30,000 inhabitants), which of course gives a quotient just shy of 120 (119.067090909, to be precise). So that was to be the total number of seats in the House of Representatives.

The second step then was to distribute them, which Congress solved by giving—according to “The Washington Papers” at the University of Virginia—“an additional member to the eight states with the largest fraction left over after dividing.” But doing so meant that, effectively, some states’ population was being divided by 30,000 while others were being divided by some other number: as James describes, while Congress determined the total number of congressmen by dividing the nation’s total population by 33,000, when it came time to determine which states got those congressmen the legislature used a different divisor. The bill applied a 30,000 ratio to “Rhode Island, New York, Pennsylvania, Maryland, Virginia, Kentucky and Georgia,” while applying “one of 27,770 to the other eight states.” Hence, as Washington would complain in his note to Congress explaining his veto, there was “no one proportion or divisor”—a fact that Edmund Randolph, Washington’s Attorney General (and, significantly as we’ll see, a Virginian), would say was “repugnant to the spirit of the constitution.” That opinion Washington’s Secretary of State, Thomas Jefferson (also a Virginian) shared.

Because the original bill used different divisors, Jefferson argued, it did not contain “any principle at all”—and hence would allow future Congresses to manipulate census results for political purposes “according to any … crochet which ingenuity may invent.” Jefferson thought, instead, that every state’s population ought to be divided by the same number: a “common divisor.” On the one hand, of course, that appears perfectly fair: using a single divisor gave the appearance of transparency and prevented the kinds of manipulations Jefferson envisioned. But it did not prevent what is arguably another manipulation: under Jefferson’s plan, which had largely the same results as the original plan, two seats were taken away from Delaware and New Hampshire and given to Pennsylvania—and Virginia.
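(For those who count things, here is a minimal sketch, in Python, of the two schemes at issue: the vetoed bill’s approach of handing the leftover seats to the states with the largest fractions left over after dividing, and Jefferson’s single “common divisor” with the fractions simply dropped. The state populations below are invented for illustration—they are not the 1790 census figures.)

```python
# A sketch (with invented populations) contrasting the two apportionment
# schemes behind Washington's first veto.

def largest_remainders(populations, total_seats):
    """The vetoed bill's approach: give each state the whole-number part of
    its quota, then hand the leftover seats to the states with the largest
    fractions left over after dividing."""
    total_pop = sum(populations.values())
    quotas = {s: p * total_seats / total_pop for s, p in populations.items()}
    seats = {s: int(q) for s, q in quotas.items()}
    leftover = total_seats - sum(seats.values())
    for s in sorted(quotas, key=lambda s: quotas[s] - seats[s], reverse=True)[:leftover]:
        seats[s] += 1
    return seats

def common_divisor(populations, divisor):
    """Jefferson's plan: divide every state's population by one common
    divisor, keep only the whole number, and guarantee at least one seat."""
    return {s: max(1, p // divisor) for s, p in populations.items()}

hypothetical = {"A": 630_000, "B": 353_000, "C": 141_000, "D": 68_000}
print(largest_remainders(hypothetical, 36))   # leftover seats go to the biggest fractions
print(common_divisor(hypothetical, 33_000))   # one divisor, fractions dropped
```

Run on the real 1790 numbers, dropping fractions under a single divisor tends to favor the largest states—which is just the effect described above for Pennsylvania and Virginia.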

Did I mention that Jefferson (and Randolph and Washington) was a Virginian? All three were, and at the time Virginia was, as Meyerson to his credit points out, “the largest state in the Union” by population. Yet while Meyerson does correctly note “that the Jefferson plan is an extraordinarily effective machine for allocating extra seats to large states,” he fails to notice something else about Virginia—something that James does notice (as we shall see). Virginia in the 1790s was not just the most populous state, but also a state with a very large, very wealthy, and very particular local industry.

That industry was, of course, slavery, and as James wrote (need I remind you) in 1896, it did not escape sharp people at the time of Washington’s veto that, in the first place, “the vote for and against the bill was perfectly geographical, a Northern against a Southern vote,” and, in the second, that Jefferson’s plan had the effect of “diminish[ing] the fractions in the Northern and Eastern states and increas[ing] them in the Southern”—a pattern that implied to some that “the real reason for the adoption” of Jefferson’s plan “was not that it secured a greater degree of fairness in the distribution, but that it secured for the controlling element in the Senate”—i.e., the slaveowners—“an additional power.” “It is noticeable,” James drily remarks, “that Virginia had been picked out especially as a state which profited” by Jefferson’s plan, and that “it was […] Virginians who persuaded the President” to veto the original bill. In other words, it’s James, in 1896, who is capable of discussing the political effects of the mathematics involved in terms of race—not Meyerson, despite the fact that the law professor (because he graduated from high school in 1976) had the benefit of, among other advantages, having witnessed at least the tail end of the American civil rights movement.

All that said, I don’t know, of course, just why Meyerson feels it possible to ignore the relation between George Washington’s first, and nearly only, veto and slavery: he might for instance argue that his focus is on the relation between mathematics and politics, and that bringing race into the discussion would muddy his argument. But that’s precisely the point, isn’t it? Meyerson’s reason for excluding slavery from his discussion of Washington’s first veto is, I suspect at any rate, driven precisely by his sense that race is a matter of Geisteswissenschaften.

After all, what else could it be? As Walter Benn Michaels of the University of Illinois at Chicago has put the point, despite the fact that “we don’t any longer believe in race as a biological entity, we still treat people as if they belonged to races”—which means that we must (still) think that race exists somehow. And since the biologists assure us that there is no way—biologically speaking—to link people from various parts of, say, Africa to one another any more closely than to people from Asia or Europe (or as Michaels says, “there is no biological fact of the matter about what race you belong to”), we must thusly be treating race as a “social” or “cultural” fact rather than a natural one—which of course implies that we must think there is (still) a distinction to be made between the “natural sciences” and the “human sciences.” Hence, Meyerson excludes race from his analysis of Washington’s first veto because he (still) thinks of race as part of the “human sciences”: even Meyerson, it seems, cannot escape the gravity well of the German concept. Yet, since there isn’t any such thing as race, that necessarily raises the question of just why we think that there are two kinds of “science.” Perhaps there is little to puzzle over about just why some Americans might like the idea of race, but one might think that it is something of a mystery just why soi-disant “intellectuals” like that idea.

Or maybe not.

Forked

He had already heard that the Roman armies were hemmed in between the two passes at the Caudine Forks, and when his son’s courier asked for his advice he gave it as his opinion that the whole force ought to be at once allowed to depart uninjured. This advice was rejected and the courier was sent back to consult him again. He now advised that they should every one be put to death. On receiving these replies … his son’s first impression was that his father’s mental powers had become impaired through his physical weakness. … [But] he believed that by taking the course he first proposed, which he considered the best, he was establishing a durable peace and friendship with a most powerful people in treating them with such exceptional kindness; by adopting the second he was postponing war for many generations, for it would take that time for Rome to recover her strength painfully and slowly after the loss of two armies.
There was no third course.
—Titus Livius. Ab Urbe Condita. Book IX

 

“Of course, we want both,” wrote Lee C. Bollinger, the president of Columbia University, in 2012, about whether “diversity in post-secondary schools should be focused on family income rather than racial diversity.” But while many might wish to do both, is that possible? Can the American higher educational system serve two masters? According to Walter Benn Michaels of the University of Illinois at Chicago, Bollinger’s thought that American universities can serve both economic goals and racial justice has been the thought of “every academic” with whom he’s ever discussed the subject—but Michaels, for his part, wonders just how sincere that wish really is. American academia, he says, has spent “twenty years of fighting like a cornered raccoon on behalf of the one and completely ignoring the other”; how much longer, he wonders, before “‘we want both’ sounds hollow not only to the people who hear it but to the people who say it?” Yet what Michaels doesn’t say is just why, as pious as that wish is, it’s a wish that is necessarily doomed to go unfulfilled—something that is possible to see after meeting a fictional bank teller named Linda.

“Linda”—the late 1970s creation of two Israeli psychologists, Amos Tversky and Daniel Kahneman—may be the most famous fictional woman in the history of the social sciences, but she began life as a single humble paragraph:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Following that paragraph, there was a series of eight statements describing Linda—but as the biologist Stephen Jay Gould would point out later, “five are a blind, and only three make up the true experiment.” The “true experiment” wouldn’t reveal anything about Linda—but it would reveal a lot about those who met her. “Linda,” in other words, is like Nietzsche’s abyss: she stares back into you.

The three pointed statements of Kahneman and Tversky’s experiment are these: “Linda is active in the feminist movement; Linda is a bank teller; Linda is a bank teller and is active in the feminist movement.” The two psychologists would then ask their test subjects to guess which of the three statements was most likely. Initially, these test subjects were lowly undergraduates, but as Kahneman and Tversky performed and then re-performed the experiment, they gradually upgraded: using graduate students with a strong background in statistics next—and then eventually faculty. Yet, no matter how sophisticated the audience to which they showed this description, what Kahneman and Tversky found was that virtually everyone always thought that the statement “Linda is a bank teller and active in the feminist movement” was more likely than the statement “Linda is a bank teller.” But as only a little thought reveals, that is impossible.

I’ll let the journalist Michael Lewis, who recently published a book about the work of the pair of psychologists entitled The Undoing Project: A Friendship That Changed Our Minds, explain the impossibility:

“Linda is a bank teller and is active in the feminist movement” could never be more probable than “Linda is a bank teller.” “Linda is a bank teller and is active in the feminist movement” was just a special case of “Linda is a bank teller.” “Linda is a bank teller” included “Linda is a bank teller and is active in the feminist movement” along with “Linda is a bank teller and likes to walk naked through Serbian forests” and all other bank-telling Lindas. One description was entirely contained by the other.

“Linda is a bank teller and is active in the feminist movement” simply cannot be more likely than “Linda is a bank teller.” As Louis Menand of Harvard observed about the “Linda problem” in The New Yorker in 2005, thinking that “bank teller and feminist” is more likely than the “bank teller” description “requires two things to be true … rather than one.” Whenever the first is true, the second must be true as well; that’s why, as Lewis observed in an earlier article on the subject, it’s “logically impossible” to think otherwise. Kahneman and Tversky’s finding is curious enough on its own terms for what it tells us about human cognition, of course, because it exposes a mistake that virtually every human being who encounters the problem makes. But what makes it significant in the present context is that it is also the cognitive error Lee C. Bollinger makes in his opinion piece.
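(For the counters again, a minimal sketch of the containment point in Python. The simulated population and its base rates are invented for illustration; only the inequality at the end matters, and it can never fail.)

```python
# The conjunction rule behind the "Linda problem": every feminist bank
# teller is also a bank teller, so P(teller and feminist) can never exceed
# P(teller). The population below is invented purely for illustration.
import random

random.seed(0)
population = [
    {"bank_teller": random.random() < 0.1,   # arbitrary, assumed base rates
     "feminist": random.random() < 0.4}
    for _ in range(100_000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

print(tellers / len(population))           # estimate of P(bank teller)
print(feminist_tellers / len(population))  # estimate of P(bank teller and feminist)
assert feminist_tellers <= tellers         # the containment always holds
```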

“The Linda problem,” as Michael Lewis observed in The Undoing Project, “resembled a Venn diagram of two circles, but with one of the circles wholly contained by the other.” One way to see the point, perhaps, is in relation to prison incarceration. As political scientist Marie Gottschalk of the University of Pennsylvania has observed, although the

African-American incarceration rate of about 2,300 per 100,000 people is clearly off the charts and a shocking figure … [f]ocusing so intently on these racial disparities often obscures the fact that the incarceration rates for other groups in the United States, including whites and Latinos, is also comparatively very high.

While the African-American rate of imprisonment is absurdly high, in other words, the “white incarceration rate in the United States is about 400 per 100,000,” which is at least twice the rate of “the most punitive countries in Western Europe.” What that means is that, while it is possible to do something about, say, African-American incarceration rates by lowering the overall incarceration rate, it can’t be done the other way around. “Even,” as Gottschalk says, “if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” Releasing more prisoners overall means fewer minority prisoners, but releasing only minority prisoners still leaves a lot of prisoners.

Which, after all, is precisely the point of the “Linda problem”: just as “bank teller” contains “feminist bank teller” and every other description that could be added to “bank teller,” so too does “prisoner” include every description that could be added to it. Hence, reducing the prison population will necessarily reduce the numbers of minorities in prison—but reducing the numbers of minority prisoners will not do (much) to reduce the number of prisoners. “Minority prisoners” is a circle contained within the circle of “prisoners”—saying you’d like to reduce the numbers of minority prisoners is essentially to say that you don’t want to do anything about prisons.
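(The same arithmetic, sketched in Python with the per-100,000 rates quoted above; the population shares and the “other” rate are stand-in assumptions of mine, not Gottschalk’s figures.)

```python
# A rough sketch of Gottschalk's point: even zeroing out the African-American
# incarceration rate leaves the overall US rate far above Western Europe's.
rates = {"black": 2300, "white": 400, "other": 700}      # per 100,000; "other" is assumed
shares = {"black": 0.13, "white": 0.62, "other": 0.25}   # assumed population shares

overall = sum(rates[g] * shares[g] for g in rates)
without_black = sum((0 if g == "black" else rates[g]) * shares[g] for g in rates)

print(round(overall))        # overall rate per 100,000, on these assumptions
print(round(without_black))  # still several times the Western European rate
```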

Hence, when Hillary Clinton asked her audience during the recent presidential campaign “If we broke up the big banks tomorrow … would that end racism?” and “Would that end sexism?”—and then answered her own question by saying, “No,” what she was effectively saying was that she would do nothing about any of those things, racism and sexism included. (Which, given that this was the candidate who asserted that politicians ought to have “both a public and a private position,” is not out of the question.) Wanting “both,” or an alleviation of economic inequality and discrimination—as Lee Bollinger and “every academic” Walter Benn Michaels has ever talked to say they want—is simply the most efficient way of not getting either. As Michaels says, “diversity and antidiscrimination have done and can do [emp. added] nothing whatsoever to mitigate economic inequality.” The sooner that Americans realize that Michaels isn’t kidding—that anti-discrimination, identity politics is not an alternative solution, but in fact no solution—and why he’s right, the sooner that something could be done about America’s actual problems.

Assuming, of course, that’s something anyone really wants.

Lawyers, Guns, and Caddies

Why should that name be sounded more than yours?
—Julius Caesar. Act I, Scene 2.

 

One of Ryan’s steady golfers—supposedly the youngest man ever to own an American car dealership—likes to call Ryan, one of the better caddies I know at Medinah, his “lawyer-caddie.” Ostensibly, it’s meant as a kind of joke, although it’s not particularly hard to hear it as a complicated slight mixed up with Schadenfreude: the golfer, involved in the tiring process of piling up cash by snookering old ladies with terrible trade-in deals, never bothered to get a college degree—and Ryan has both earned a law degree and passed the Illinois bar, one of the hardest tests in the country. Yet despite his educational accomplishments Ryan still earns the bulk of his income on the golf course, not in the law office. Which, sorry to say, is not surprising these days: as Alexander Eichler wrote for The Huffington Post in 2012, not only are “jobs … hard to come by in recent years” for would-be lawyers, but the jobs that there are come in two flavors—either “something that pays in the modest five figures” (which implies that Ryan might never get out of debt), “or something that pays much better” (the kinds of jobs that are about as likely as playing in the NBA). The legal profession has in other words bifurcated: something that, according to a 2010 article called “Talent Grab” by New Yorker writer Malcolm Gladwell, is not isolated to the law. From baseball players to investment bankers, it seems, the cream of nearly every profession has experienced a great rise in recent decades, even as much of the rest of the nation has been largely stuck in place economically: sometime in the 1970s, Gladwell writes, “salaries paid to high-level professionals—‘talent’—started to rise.” There are at least two possible explanations for that rise: Gladwell’s is that “members of the professional class” have learned “from members of the working class”—that, in other words, “Talent” has learned the atemporal lessons of negotiation. The other, however, is both pretty simple to understand and (perhaps for that reason) might be favored by campus “leftists”: to them, widening inequality might be explained by the same reason that, surprisingly enough, prevented Lord Cornwallis from burning Mount Vernon and raping Martha Washington.

That, of course, will sound shocking to many readers—but in reality, Lord Cornwallis’ forbearance really is unexpected if the American Revolution is compared to some other British colonial military adventures. Like, for instance, the so-called “Mau Mau Uprising”—also known as the “Kenya Emergency”—during the 1950s: although much of the documentation only came out recently, after a long legal battle—which is how we know about this in the detail we do now at all—what happened in Kenya in those years was not an atypical example of British colonial management. In a nutshell: after World War II, many Kenyans, like the inhabitants of a lot of other European colonies, demanded independence, and like a lot of other European powers, Britain would not give it to them. (A response with which Americans ought to be familiar through our own history.) Therefore, the two sides fought to demonstrate their sincerity.

Yet unlike the American experience, which largely consisted—nearly anomalously in the history of wars of independence—of set-piece battles that pitted conventionally-organized troops against each other, what makes the Kenyan episode relevant is that it was fought using the doctrines of counterinsurgency: that is, the “best practices” for the purposes of ending an armed independence movement. In Kenya, this meant “slicing off ears, boring holes in eardrums, flogging until death, pouring paraffin over suspects who were then set alight, and burning eardrums with lit cigarettes,” as Mark Curtis reported in 2003’s Web of Deceit: Britain’s Real Role in the World. It also meant gathering, according to Wikipedia, somewhere around half a million Kenyans into concentration camps, while more than a million were held in what were called “enclosed villages.” Those gathered were then “questioned” (i.e., tortured) in order to find those directly involved in the independence movement, and so forth. It’s a catalogue of horror, but what’s more horrifying is that the methods being used in Kenya were also being used, at precisely the same moment, half a world away, by more or less the same people: at the same time as the “Kenya Emergency,” the British Empire was also fighting in what’s called the “Malayan Emergency.”

In Malaya, from 1948 to 1960, the Malayan Communist Party fought a guerrilla war for independence against the British Army—a war that became such a model for counterinsurgency war that one British leader, Sir Robert Thompson, later became a senior advisor to the American effort in Vietnam. (Which itself draws attention to the fact that France was also involved in counterinsurgency wars at the time: not only in Vietnam, but also in Algeria.) And in case you happen to think that all of this is merely an historical coincidence regarding the aftershocks of the Second World War, it’s important to remember that the very word “concentration camp” was first widely used in English during the Second Boer War of 1899-1902. “Best practice” in fighting colonial wars, that is, was pretty standardized: go in, grab the wives and kids, threaten them, and then just follow the trail back to the ringleaders. In other words, Abu Ghraib—but also, the Romans.

It’s perhaps no coincidence, in other words, that the basis of elite education in the Western world for millennia began with Julius Caesar’s Gallic Wars, usually the first book assigned to beginning students of Latin. Often justified educationally on the basis of its unusually clear rhetoric (the famously deadpan opening line: “Gaul is divided into three parts …”), the Gallic Wars could also be described as a kind of “how to” manual regarding “pacification” campaigns: in this case, the failed rebellion of Vercingetorix in 52 BCE, who, according to Caesar, “urged them to take up arms in order to win liberty for all.” In Gallic Wars, Caesar details such common counterinsurgency techniques as, say, hostage-taking: in negotiations with the Helvetii in Book One, for instance, Caesar makes the offer that “if hostages were to be given by them [the Helvetii] in order that he may be assured these will do what they promise … he [Caesar] will make peace with them.” The book also describes torture in several places (though, to be sure, it is usually attributed to the Gauls, not the Romans). Hostage-taking and torture were, in other words, standard fare in elite European education—the British Army did not suddenly invent these techniques during the 1950s. And that, in turn, raises the question: if British officers were aware of the standard methods of “counterinsurgency,” why didn’t the British Army use them during the “American Emergency” of the 1770s?

According to Pando Daily columnist “Gary Brecher” (a pseudonym for John Dolan), perhaps the “British took it very, very easy on us” during the Revolution because Americans “were white, English-speaking Protestants like them.” In fact, that leniency may have been the reason the British lost the war—at least, according to a paper for the U.S. Army War College by Lieutenant Colonel Paul Montanus (U.S.M.C.), “A Failed Counterinsurgency Strategy: The British Southern Campaign, 1780-1781.” To Montanus, the British Army “needed to execute a textbook pacification program”—instead, the actions that army took “actually inflamed the [populace] and pushed them toward the rebel cause.” Montanus, in other words, essentially asks the question: why didn’t the Royal Navy sail up the Potomac and grab Martha Washington? Brecher’s point is pretty valid: there simply aren’t a lot of reasons to explain just why Lord Cornwallis or the other British commanders didn’t do that other than the notion that, when British Army officers looked at Americans, they saw themselves. (Yet, it might be pointed out that just what the British officers saw is still an open question: did they see “cultural Englishmen”—or simply rich men like themselves?)

If Gladwell were telling the story of the American Revolution, however, he might explain American independence as a result simply of the Americans learning to say no—at least, that is what he advances as a possible explanation for the bifurcation he describes in the professions of American life these days. Take, for instance, the profession with which Gladwell begins: baseball. In the early 1970s, Gladwell tells us, Marvin Miller told the players of the San Francisco Giants that “‘If we can get rid of the system as we now know it, then Bobby Bonds’ son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’” (Even then, when Barry Bonds was around ten years old, people knew that Barry was a special kind of athlete—though they might not have known he would go on to shatter, as he did in 2001, the single-season home run record.) As it happens, Miller wildly understated Barry Bonds’ earning power: Barry Bonds “ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined” (emp. added). Barry Bonds’ success has been mirrored in many other sports: the average player salary in the National Basketball Association, for instance, increased more than 800 percent from the 1984-85 season to the 1998-99 season, according to a 2000 article by the Chicago Tribune’s Paul Sullivan. And so on: it doesn’t take much acuity to know that professional athletes have taken a huge pay jump in recent decades. But as Gladwell says, that increase is not limited just to sportsmen.

Take book publishing, for instance. Gladwell tells an anecdote about the sale of William Safire’s “memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company”—a book that might seem like the kind of “insider” account that often finds its way to publication. In this case, however, between Safire’s sale to Morrow and final publication, Watergate happened—which caused Morrow to rethink publishing a book from a White House insider that didn’t mention Watergate. In those circumstances, Morrow decided not to publish—and could they please have the advance they gave to Safire back?

In book contracts in those days, the publisher had all the cards: Morrow could ask for their money back after the contract was signed because, according to the terms of a standard publishing deal, they could reject a book at any time, for more or less any reason—and thus not only void the contract, but demand the return of the book’s advance. Safire’s attorney, however—Mort Janklow, a corporate attorney unfamiliar with the ways of book publishing—thought that was nonsense, and threatened to sue. Janklow told Morrow’s attorney (Maurice Greenbaum, of Greenbaum, Wolff & Ernst) that the “acceptability clause” of the then-standard literary contract—which held that a publisher could refuse to publish a book, and thereby reclaim any advance, for essentially any reason—“‘was being fraudulently exercised’” because Morrow’s real reason for rejecting Safire’s book wasn’t the one the publisher gave (the intrinsic value of the content) but simply that an external event—Watergate—had changed Morrow’s calculations. (Janklow discovered documentary evidence of the point.) Hence, if Morrow insisted on taking back the advance, Janklow was going to take them to court—and when faced with the abyss, Morrow crumbled, and standard contracts with authors have become (supposedly) far less weighted towards publishing houses. Today, bestselling authors (like, for instance, Gladwell) have a great deal of power: they more or less negotiate with publishing houses as equals, rather than (as before) as, effectively, servants. And not just in publishing: Gladwell goes on to tell similar anecdotes about modeling (Lauren Hutton), moviemaking (George Lucas), and investing (Teddy Forstmann). In all of these cases, the “Talent” (Gladwell’s word) eventually triumphs over “Capital.”

As I mentioned, for a variety of reasons—in the first place, the justification for the study of “culture,” which these days means, as political scientist Adolph Reed of the University of Pennsylvania has remarked, “the idea that the mass culture industry and its representational practices constitute a meaningful terrain for struggle to advance egalitarian interests”—to a lot of academic leftists that triumph would best be explained by the fact that, say, George Lucas and the head of Twentieth-Century Fox at the time, George Stulberg, shared a common rapport. (Perhaps they gossiped over their common name.) Or to put it another way, that “Talent” has been rewarded by “Capital” because of a shared “culture” between the two (apparent) antagonists—just as Britain treated its American subjects differently from its Kenyan ones because the British shared something with the Americans that they did not with the Kenyans (and the Malayans and the Boers …). (Which was either “culture”—or money.) But there’s a problem with this analysis: it doesn’t particularly explain Ryan’s situation. After all, if this hypothesis were correct, it would appear to imply that—since Ryan shares a great deal “culturally” with the power elite that employs him on the golf course—Ryan ought to have a smooth path towards becoming a golfer who employs caddies, not a caddie who works for golfers. But that is not, obviously, the case.

Gladwell, on the other hand, does not advance a “cultural” explanation for why some people in a variety of professions have come to be compensated far beyond even their fellows within the same profession. Instead, he prefers to explain what happened beginning in the 1970s as instances of people learning how to use a tool long associated with organized labor: the strike.

It’s an explanation with an initial plausibility about it, in the first place because of Marvin Miller’s personal history: he began his career working for the United Steelworkers before becoming an employee of the baseball players’ union. (Hence, there is a means of transmission.) But even aside from that, it seems clear that each of the “talents” Gladwell writes about made use of either a kind of one-person strike, or the threat of it, to get their way: Lauren Hutton, for example, “decided she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract”; in 1975 “Hollywood agent Tom Pollock” demanded “that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to Star Wars”; and Mort Janklow … Well, here is what Janklow said to Gladwell regarding how he would negotiate with publishers after dealing with Safire’s book:

“The publisher would say, ‘Send back that contract or there’s no deal,’ […] And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Each of these instances, I would say, is more or less what happens when a group of industrial workers walks out: Mort Janklow (whose personal political opinions, by the way, are apparently the farthest thing from labor’s) was, for instance, telling the publishers that he would withhold the product of his labor until his demands were met, just as the United Auto Workers shut down General Motors’ Flint, Michigan assembly plant in the Sit-Down Strike of 1936-37. And Marvin Miller did take baseball players out on strike: the first baseball strike was in 1972, and lasted all of thirteen days before management crumbled. What all of these people learned, in other words, was to use a common technique or tool—but one that is by no means limited to unions.

In fact, it’s arguable that one of the best examples of it in action is a James Dean movie—while another is the fact that the world has not experienced a nuclear explosion delivered in anger since 1945. In the James Dean movie, Rebel Without a Cause, there’s a scene in which James Dean’s character gets involved in what the kids in his town call a “chickie run”—what some Americans know as the game of “Chicken.” In the variant played in the movie, two players each drive a car towards the edge of a cliff—the “winner” of the game is the one who exits his car closest to the edge, thus demonstrating his “courage.” (The other player is, hence, the “chicken,” or coward.) Seems childish enough—until you realize, as the philosopher Bertrand Russell did in a book called Common Sense and Nuclear Warfare, that it was more or less this game that the United States and the Soviet Union were playing throughout the Cold War:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls “brinksmanship.” This is a policy adapted from a sport which, I am told, is practised [sic] by some youthful degenerates. This sport is called “Chicken!” …

As many people of less intellectual firepower than Bertrand Russell have noticed, Rebel Without a Cause thus describes what happened when Moscow and Washington, D.C. faced each other in October 1962, the confrontation later called the Cuban Missile Crisis. (“We’re eyeball to eyeball,” then-U.S. Secretary of State Dean Rusk said later about those events, “and I think the other fellow just blinked.”) The blink was, metaphorically, the act of jumping out of the car before the cliff of nuclear annihilation: the same blink that Twentieth Century Fox gave when it signed over the rights to sequels to Star Wars to Lucas, or Revlon did when it signed Lauren Hutton to a contract. Each of the people Gladwell describes played “Chicken”—and won.

To those committed to a “cultural” explanation, of course, the notion that all these incidents might instead have to do with a common negotiating technique rather than a shared “culture” is simply question begging: after all, there have been plenty of people, and unions, that have played games of “Chicken”—and lost. So by itself the game of “Chicken,” it might be argued, explains nothing about what led employers to give way. Yet the “cultural” explanation is also lacking, on at least two points: in the first place, it doesn’t explain how “rebel” figures like Marvin Miller or Janklow were able to apply essentially the same technique across many industries. If it were a matter of “culture,” in other words, it’s hard to see how the same technique could work no matter what the underlying business was—or, if “culture” is the explanation, it’s difficult to see how that could be distinguished from saying that an all-benevolent sky fairy did it. As an explanation, in other words, “culture” is vacuous: it explains both too much and not enough.

What needs to be explained, in other words, isn’t why a number of people across industries revolted against their masters—just as it likely doesn’t especially need to be explained why Kenyans stopped thinking Britain ought to run their land any more. What needs to be explained instead is why these people were successful. In each of these industries, eventually “Capital” gave in to “Talent”: “when Miller pushed back, the owners capitulated,” Gladwell says—so quickly, in fact, that even Miller was surprised. Indeed, “Capital” gave in so easily that it’s hard to understand why there was any dispute in the first place.

That’s precisely why the ease of that victory is grounds for being suspicious: surely, if “Capital” really felt threatened by this so-called “talent revolution” it would have fought back. After all, American capital was (and is), historically, tremendously resistant to the labor movement: blacklisting, arrest, and even mass murder were all common techniques capital used against unions prior to World War II. When Wyndham Mortimer arrived in Flint to begin organizing for what would become the Sit-Down Strike, for instance, an anonymous caller phoned him at his hotel within moments of his arrival to tell him to leave town if he didn’t “want to be carried out in a wooden box.” Surely, although industries like sports or publishing are probably governed by less hard-eyed people than automakers, neither are they so full of softies that they would surrender on the basis of a shared liking for Shakespeare or the films of Kurosawa, nor even the fact that they shared a common language. On the other hand, however, neither does it seem likely that anyone might concede after a minor threat or two. Still, I’d say that thinking about these events using Gladwell’s terms makes a great deal more sense than the “cultural” explanation—not because of the final answer they provide, but because of the method of thought they suggest.

There is, in short, another possible explanation—one that, however, will mean trudging through yet another industry. This time, that industry is the same one where the “cultural” explanation is so popular: academia, which has in recent decades also experienced an apparent triumph of “Talent” at the expense of “Capital”; in this case, the university system itself. As Christopher Shea wrote in 2014 for The Chronicle of Higher Education, “the academic star system is still going strong: Universities that hope to move up in the graduate-program rankings target top professors and offer them high salaries and other perks.” The “Talent Revolution,” in short, has come to the academy too. Yet, if so, it has had some curious consequences: if “Talent” were something mysterious, one might expect it to come from anywhere—yet academia appears to believe that it comes from the same few sources.

As Joel Warner and Aaron Clauset, an assistant professor of computer science at the University of Colorado, wrote in Slate recently, “18 elite universities produce half of all computer science professors, 16 schools produce half of all business professors, and eight schools account for half of all history professors.” (In fact, when it comes to history, “the top 10 schools produce three times as many future professors as those ranked 11 through 20.”) This, one might say, is curious indeed: why should “Talent” be continually discovered in the same couple of places? It’s as if, because William Wilkerson discovered Lana Turner at the Top Hat Cafe on Sunset Boulevard in 1937, every casting director and talent agent in Hollywood had decided to spend the rest of their working lives sitting on a stool at the Top Hat waiting for the next big thing to walk through that door.

“Institutional affiliation,” as Shea puts the point, “has come to function like inherited wealth” within the walls of the academy—a fact that just might explain another curious similarity between the academy and other industries these days. Consider, for example, that while Marvin Miller did have an enormous impact on baseball player salaries, that impact has been limited to major league players, and not their comrades at lower levels of organized baseball. “Since 1976,” Patrick Redford noted in Deadspin recently, major leaguers’ “salaries have risen 2,500 percent while minor league salaries have only gone up 70 percent.” Minor league baseball players can, Redford says, “barely earn a living while playing baseball”—it’s not unheard of, in fact, for ballplayers to go to bed hungry. (Glen Hines, a writer for The Cauldron, has a piece, for instance, describing his playing days in the Jayhawk League in Kansas: “our per diem,” Hines reports, “was a measly 15 dollars per day.”) And while it might be difficult to have much sympathy for minor league baseball players—they get to play baseball!—that’s exactly what makes them so similar to their opposite numbers within academia.

That, in fact, is the argument Major League Baseball uses to deny that minor leaguers are subject to the Fair Labor Standards Act: as the author called “the Legal Blitz” wrote for Above the Law: Redline, “Major League Baseball claims that its system [of not paying minimum wage] is legal as it is not bound by the FLSA [Fair Labor Standards Act] due to an exemption for seasonal and recreational employers.” In other words, because baseball is a “game” and not a business, it doesn’t have to pay the workers at the low end of the hierarchy—which is precisely what makes minor leaguers like a certain sort of academic.

Like baseball, universities often argue (as Yale’s Peter Brooks told the New York Times when Yale’s Graduate Employees and Student Organization (GESO) went out on strike in the late 1990s) that their graduate teachers are “among the blessed of the earth,” not its downtrodden. As Emily Eakin reported for the now-defunct magazine Lingua Franca during that same strike, in those days Yale’s administration argued “that graduate students can’t possibly be workers, since they are admitted (not hired) and receive stipends (not wages).” But if the pastoral rhetoric—a rhetoric that excludes considerations common to other pursuits, like gambling—surrounding both baseball and the academy is cut away, the position of universities is much the same as Major League Baseball’s, because both academia and baseball (and the law, and a lot of other professions) are similar types of industries at least in one respect: as presently constituted, they’re dependent on small numbers of highly productive people—which is just why “Capital” should have tumbled so easily in the way Gladwell described in the 1970s.

Just as young scholars only rarely turn out to be highly productive, in other words, so too with young baseball players: as Jim Callis noted for Baseball America (as cited by the paper “Initial Public Offerings of Baseball Players” by John D. Burger, Richard D. Grayson, and Stephen Walters), “just one of every four first-round picks ultimately makes a non-trivial contribution to a major league team, and a mere one in twenty becomes a star.” Similarly, just as a few baseball players hit most of the home runs or pitch most of the complete games, so most academic production is done by just a few producers, as a number of researchers discovered in the middle of the twentieth century: a verity variously formulated as “Price’s Law,” “Lotka’s Law,” or “Bradford’s Law.” (Or, there’s the notion described as “Sturgeon’s Law”: “90% of everything is crap.”) Hence, rationally enough, universities (and baseball teams) only want to pay for those high producers, while leaving aside the great mass of others: why pay for a load of .200 hitters, when with the same money you can buy just one superstar?
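To get a feel for how lopsided such productivity distributions are, here is a minimal sketch in Python: it builds a hypothetical population whose output follows Lotka’s inverse-square rule and then measures how much of the total the most productive few account for. All the numbers are made up for illustration, not drawn from any real bibliometric or baseball data.

    # A toy illustration of how skewed scholarly (or athletic) productivity
    # can be. It builds a hypothetical population whose output follows
    # Lotka's Law (the number of producers with n works is proportional
    # to 1 / n**2), then measures how concentrated the output is.
    # All numbers here are made up for illustration.

    def lotka_population(max_output=50, scale=1000):
        """Per-producer output counts following Lotka's inverse-square law."""
        population = []
        for n in range(1, max_output + 1):
            how_many = round(scale / n ** 2)   # producers with exactly n works
            population.extend([n] * how_many)
        return population

    outputs = sorted(lotka_population(), reverse=True)
    total_works = sum(outputs)

    top_ten_percent = max(1, len(outputs) // 10)
    share = sum(outputs[:top_ten_percent]) / total_works

    print(f"{len(outputs)} producers, {total_works} works in total")
    print(f"top {top_ten_percent} producers ({top_ten_percent / len(outputs):.0%}) "
          f"account for {share:.0%} of the output")

With these made-up parameters, a small fraction of the producers ends up with a large share of the works—the shape of distribution the paragraph above describes.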

That might explain just why it is that William Morrow folded when confronted by Mort Janklow, or why Major League Baseball collapsed when confronted by Marvin Miller. They weren’t persuaded by the justice of the case Janklow or Miller brought—rather, they decided that it was in their long-term interests to reward the “superstars” wildly, because that bought them the most production at the cheapest rate. Why pay a whole roster of guys to hit the home runs, you might say, when, for much less, you can buy Barry Bonds? (In 2001, all major leaguers collectively hit over 5000 home runs, for instance—but Barry Bonds hit 73 of them, in a context in which a good season for most players might be 20.) In such a situation, it makes sense (seemingly) to overpay Barry Bonds wildly (so that he made more money in a single season than all of his father’s teammates did for their entire careers): given that Barry Bonds was so much more productive than his peers, it’s arguable that, despite his vast salary, he was actually underpaid.

If you assign a price to each home run, that is, Bonds received a lower price per home run than his peers did: despite his high salary he was—in a sense—a bargain. (The way to calculate the point is to take all the home runs hit by all the major leaguers in a given season, and then work out the average price per home run. Although I haven’t actually done the calculation, I would bet that the average price is more than the price per home run received by Barry Bonds—which isn’t even to get into how standard major league rookie contracts deflate the market: as Newsday reported in March, Bryce Harper of the Washington Nationals, who was third on the 2015 home run list, was paid only $59,524 per home run—when virtually every other top ten home run hitter in the major leagues made at least a quarter of a million dollars per home run.) Similarly, an academic superstar is also, arguably, underpaid: even though, according to citation studies, a small number of scholars might be responsible for 80 percent of the citations in a given field, there’s no way they can get 80 percent of the total salaries being paid in that field. Hence, by (seemingly) wildly overpaying a few superstars, major league owners (like universities) can pocket the difference between those salaries and the (wildly low) wages they pay to the (vastly more numerous) non-superstars.
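The calculation described above is just division: dollars paid per home run, league-wide, compared with the same ratio for one superstar. A minimal sketch with placeholder figures (the only numbers taken from the text are the roughly 5,000 league home runs and Bonds’ 73; the salary figures are assumptions of mine, not real payroll data):

    # Price-per-home-run, as described above: divide salary by home runs.
    # The salary figures below are placeholders for illustration, not real data.

    def price_per_home_run(salary_dollars, home_runs):
        """Dollars paid per home run hit."""
        return salary_dollars / home_runs

    # Hypothetical league-wide totals for one season.
    league_total_salaries = 3_000_000_000   # assumed, not an actual figure
    league_total_home_runs = 5_000          # roughly the 2001 total cited above

    # Hypothetical superstar: high salary, but a huge share of the homers.
    star_salary = 15_000_000                # assumed
    star_home_runs = 73                     # Bonds' 2001 total, cited above

    league_rate = price_per_home_run(league_total_salaries, league_total_home_runs)
    star_rate = price_per_home_run(star_salary, star_home_runs)

    print(f"league-wide: ${league_rate:,.0f} per home run")
    print(f"superstar:   ${star_rate:,.0f} per home run")

Whatever placeholder numbers are plugged in, the comparison works the same way: the lower ratio is the better bargain per home run.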

Not only that, but wildly overpaying also has a secondary benefit, as Walter Benn Michaels has observed: by paying “Talent” vastly more money, not only is the employer actually getting a bargain (because no matter what “Talent” got paid, they simply couldn’t be paid what they were really “worth”), but “Talent’s” (seemingly vast, but in reality undervalued) salaries also enable the system to be presented as “fair”—if you aren’t getting paid what, say, Barry Bonds or Nobel Prize-winning economist Gary Becker is getting paid, in other words, then that’s because you’re not smart enough or good enough or whatever enough, jack. That is what Michaels is talking about when he discusses how educational “institutions ranging from U.I.C. to Harvard” like to depict themselves as “meritocracies that reward individuals for their own efforts and abilities—as opposed to rewarding them for the advantages of their birth.” Which, as it happens, just might explain why it is that, despite his educational accomplishments, Ryan is working on a golf course as a servant instead of using his talent in a courtroom or boardroom or classroom—as Michaels says, the reality of the United States today is that the “American Dream … now has a better chance of coming true in Sweden than it does in America, and as good a chance of coming true in western Europe (which is to say, not very good) as it does here.” That reality, in turn, is something that American universities, which are supposed to pay attention to developments like this, have turned their heads away from: as Michaels says, “the intellectual left has responded to the increase in economic inequality”—that is, the supposed “Talent Revolution”—“by insisting on the importance of cultural identity.” In other words, “when it comes to class difference” (as Michaels says elsewhere), even though liberal professors “have understood our universities to be part of the solution, they are in fact part of the problem.” Hence, Ryan’s educational accomplishments (remember Ryan? There’s an essay about Ryan) aren’t actually helping him: in reality, they’re precisely what is holding him back. The question that Americans ought to be asking these days, then, is this one: what happens when Ryan realizes that?

It’s enough to make Martha Washington nervous.

 

This Doubtful Strife

Let me be umpire in this doubtful strife.
Henry VI, Part 1. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another milestone on the path to computerized refereeing—a march that, she says, passed another marker just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will appear familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it’s more than questionable, in other words, whether the humanities’ opposition to the sciences has had beneficial rather than pernicious effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But a real look around campus makes it less confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it true that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” but the point also cuts across national boundaries; Pinsker also reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been regarded as radical political movements, they could also be understood as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also of the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. Whatever the underlying substrate, it is simply the case that those at the top of a pyramid will spend a smaller share of their income than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Were Romney to become even more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, by dividing the money more equally, more economic activity is generated—and hence the more equal society is also the more prosperous society.
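Stiglitz’ example can be worked out directly. A minimal sketch: the $21.7 million income and the 500 jobs at $43,400 come from the passage above, while the two spending rates are loose assumptions of my own, meant only to illustrate the mechanism.

    # Stiglitz's point as arithmetic: the same pool of income generates more
    # spending when it is divided among many households, because households
    # near the bottom spend a larger share of what they earn.
    # The two spending rates below are assumptions for illustration only.

    income_pool = 21_700_000          # Romney's 2010 income, per the passage
    jobs = 500
    salary = income_pool / jobs       # = $43,400, as Stiglitz says

    rich_spending_rate = 0.10         # assumed: a rich household spends a small share
    typical_spending_rate = 0.90      # assumed: a typical household spends most of it

    spending_if_concentrated = income_pool * rich_spending_rate
    spending_if_divided = jobs * salary * typical_spending_rate

    print(f"each job pays ${salary:,.0f}")
    print(f"spending if concentrated: ${spending_if_concentrated:,.0f}")
    print(f"spending if divided:      ${spending_if_divided:,.0f}")

Shift the assumed rates however you like; as long as households near the bottom spend a larger share of their income than those at the top, dividing the pool generates more spending.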

Still, to grasp Stiglitz’ point requires following a sequence of connected ideas—among them a basic understanding of mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on rather a different character than it is often cracked up to have. By training their students to ignore the evidence—and more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st-century America. Even worse, they fail to give students a means of combating that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.

Hot Shots

 

… when the sea was calm all boats alike
Show’d mastership in floating …
—William Shakespeare.
     Coriolanus Act IV, Scene 3 (1608).

 

 

“Indeed,” wrote the Canadian scholar Marshall McLuhan in 1964, “it is only too typical that the ‘content’ of any medium blinds us to the character of the medium.” Once a well-known line among literate people, it is much less so now. It occurred to me recently, however, as I read an essay by Walter Benn Michaels of the University of Illinois at Chicago, in the course of which Michaels took issue with Matthew Yglesias of Vox. Yglesias, Michaels tells us, tried to make the argument that

although “straight white intellectuals” might tend to think of the increasing economic inequality of the last thirty years “as a period of relentless defeat for left-wing politics,” we ought to remember that the same period has also seen “enormous advances in the practical opportunities available to women, a major decline in the level of racism … and wildly more public and legal acceptance of gays and lesbians.”

Michaels replies to Yglesias’ argument by noting that “10 percent of the U.S. population now earns just under 50 percent of total U.S. income”—a figure that is, unfortunately, just the tip of the economic iceberg when it comes to inequality in America. But the real problem—the problem that Michaels’ reply does not do justice to—is that there is a logical flaw in the kind of “left” that we have now: one that advocates for the rights of minorities rather than labors for the benefit of the majority. That is, a “cultural” left rather than a scientific one: the kind we had when, in 1910, American philosopher John Dewey could write (without being laughed at) that Darwin’s Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” The physicist Freeman Dyson discovered why that distinction matters when he was just twenty years old, after Winston Churchill’s government paid him to think about what was really happening in the flak-filled skies over Berlin.

The British had a desperate need to know, because they were engaged in bombing Nazi Germany back to at least the Renaissance. Hence they employed Dyson as a statistician, to analyze the operations of Britain’s Bomber Command. Specifically, Dyson was to investigate whether bomber crews “learned by experience”: whether the more missions a crew flew, the better it became at blowing up Germany—and the Germans in it. Obviously, if they did, then Bomber Command could try to isolate what those crews were doing and teach it to the others, so that Germany and the Germans might be blown up better.

The bomb crews themselves believed, Dyson tells us, that as “they became more skillful and more closely bonded, their chances of survival would improve”—a belief that, for obvious reasons, was “essential to their morale.” But as Dyson went over the statistics of lost bombers, examining the relation between experience and loss rates while controlling for the effects of weather and geography, he discovered the terrible truth:

“There was no effect of experience on loss rate.”

The lives of each bomber crew, in other words, depended on chance, not skill, and the crews’ belief in their own expertise was just an illusion in the face of horror—an illusion that becomes all the more awful when you know that, of the 125,000 aircrew who served in Bomber Command, 55,573 were killed in action.
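Dyson’s test is easy to mimic in miniature. The sketch below assumes a constant 4 percent chance of being shot down on every sortie, regardless of experience (a made-up figure), simulates many crews, and then tabulates loss rates by how many sorties a crew has already survived, which is the comparison Dyson ran; his real analysis also controlled for weather and geography, which this toy version ignores.

    # A toy version of Dyson's comparison: if every sortie carries the same
    # risk, loss rates grouped by crew experience should come out flat.
    # The 4% per-sortie loss rate is an assumption for illustration only.
    import random

    random.seed(0)
    LOSS_RATE = 0.04          # assumed constant per-sortie chance of being shot down
    MAX_SORTIES = 30

    # For each sortie flown, record the crew's experience (prior sorties
    # survived) and whether the crew was lost on that sortie.
    losses_by_experience = {e: [0, 0] for e in range(MAX_SORTIES)}  # [lost, flown]
    for crew in range(20_000):
        for experience in range(MAX_SORTIES):
            losses_by_experience[experience][1] += 1
            if random.random() < LOSS_RATE:
                losses_by_experience[experience][0] += 1
                break   # crew is lost; it flies no further sorties

    for experience in (0, 9, 19, 29):
        lost, flown = losses_by_experience[experience]
        print(f"sortie #{experience + 1:>2}: loss rate {lost / flown:.1%}")

The tabulated rates come out roughly flat, which is what “no effect of experience on loss rate” looks like in the data.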

“Statistics and simple arithmetic,” Dyson therefore concluded, “tell us more about ourselves than expert intuition”: a cold lesson to learn, particularly at the age of twenty—though that can be tempered by the thought that at least it wasn’t Dyson’s job to go to Berlin. Still, the lesson is so appalling that perhaps it is little wonder that, after the war, it was largely forgotten, and has only been taken up again by a subject nearly as joyful as the business of killing people on an industrial scale is horrifying: sport.

In one of the most cited papers in the history of psychology, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Thomas Gilovich, Robert Vallone and Amos Tversky studied how “players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot”—but “detailed analysis … provided no evidence for a positive correlation between the outcomes of successive shots.” Just as, in other words, the British airmen believed some crews had “skill” that kept them in the air, when in fact all that kept them aloft was, say, the poor aim of a German anti-aircraft gunner or a happily-timed cloud, so too did the three co-authors find that, in basketball, people believed some shooters could get “hot.” That is, reel off seemingly impossible numbers of shots in a row, like when Ben Gordon, then with the Chicago Bulls, knocked down 9 consecutive three-pointers against Washington in 2006. But in fact such streaks are just what chance produces, given a player’s overall shooting percentage: toss a coin enough times and the coin will produce “runs” of heads and tails too.
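The coin analogy is easy to check. A minimal sketch: simulate a shooter who makes every attempt independently with a fixed 50 percent chance (both that rate and the 5,000 attempts are arbitrary choices of mine, not figures from the Gilovich, Vallone and Tversky paper) and record the longest streak of makes that pure chance produces.

    # Streaks from pure chance: a shooter who makes each shot independently
    # with a fixed probability will still produce long runs of makes.
    # The 50% rate and 5,000 attempts are arbitrary illustration values.
    import random

    random.seed(1)
    MAKE_PROBABILITY = 0.5
    ATTEMPTS = 5_000

    longest_run = current_run = 0
    for _ in range(ATTEMPTS):
        if random.random() < MAKE_PROBABILITY:
            current_run += 1
            longest_run = max(longest_run, current_run)
        else:
            current_run = 0

    print(f"longest streak of makes in {ATTEMPTS} independent attempts: {longest_run}")

With a fair-coin shooter and a few thousand attempts, the longest streak typically lands around a dozen makes, even though each shot is independent of the last.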

The “hot hand” concept in fact applies to more than simply the players: it extends to coaches also. “In sports,” says Leonard Mlodinow in his book The Drunkard’s Walk: How Randomness Rules Our Lives, “we have developed a culture in which, based on intuitive feelings of correlation, a team’s success or failure is often attributed largely to the ability of the coach”—a reality that perhaps explains just why, as Florida’s Lakeland Ledger reported in 2014, the average tenure of NFL coaches over the past decade has been 38 months. Yet as Mlodinow also says, “[m]athematical analysis of firings in all major sports … has shown that those firings had, on average, no effect on team performance”: fans (and perhaps more importantly, owners) tend to think of teams rising and falling based on their coach, while in reality a team’s success has more to do with the talent on its roster.

Yet while sports are a fairly trivial part of most people’s lives, that is not true when it comes to our “coaches”: the managers who run large corporations. As Diane Stafford found out for the Kansas City Star a few years back, it turns out that American corporations have as little sense of the real value of CEOs as NFL owners have of their coaches: the “pay gap between large-company CEOs and average American employees,” Stafford said, “vaulted from 195 to 1 in 1993 to 354 to 1 in 2012.” Meanwhile, more than a third “of the men who appeared on lists ranking America’s 25 highest-paid corporate leaders between 1993 and 2012 have led companies bailed out by U.S. taxpayers, been fired for poor performance or led companies charged with fraud.” Just like the Lancasters flown by Dyson’s aircrews, American workers (and their companies’ stockholders) have been taken for a ride by men flying on the basis of luck, not skill.

Again, of course, many in what’s termed the “cultural” left would insist that they, too, stand with American workers against the bosses, that they, too, wish things were better, and that they, too, think paying twenty bucks for a hot dog and a beer is an outrage. What matters, however, isn’t what professors or artists or actors or musicians or the like say—just as it didn’t matter what Britain’s bomber pilots thought about their own skills during the war. What matters is what their jobs say. And the fact of the matter is that cultural production, whether in academia or in New York or in Hollywood, rests on the same premise as thinking you’re a hell of a pilot, or that you must be “hot,” or that Phil Jackson is a genius. That might sound counterintuitive, of course—I thought writers and artists and, especially, George Clooney were all on the side of the little guy!—but, as McLuhan says, what matters is the medium, not the message.

The point is likely easiest to explain in terms of the academic study of the humanities, because at least there people are forced to explain themselves in order to keep their jobs. What one finds, across the political spectrum, is some version of the same dogma: students in literary studies can, for instance, refer to American novelist James Baldwin’s insistence, in the 1949 essay “Everybody’s Protest Novel,” that “literature and sociology are not the same,” while, at the other end of the political spectrum, political science students can refer to Leo Strauss’ attack on “the ‘scientific’ approach to society” in his 1958 Thoughts on Machiavelli. Every discipline in the humanities has some version of the point, because without such a doctrine it couldn’t exist: without these dogmas, there’s just a bunch of people sitting in a room reading old books.

The effect of these dogmas can perhaps best be seen by reference to the philosophical version of them, which has the benefit of at least being clear. David Hume identified what is now called the “is-ought problem”: as the Scotsman claimed in A Treatise of Human Nature, “the distinction of vice and virtue is not founded merely on the relations of objects.” Later, in 1903’s Principia Ethica, British philosopher G.E. Moore called the same point the “naturalistic fallacy”: the idea that, as J.B. Schneewind of Johns Hopkins has put it, “claims about morality cannot be derived from statements of facts.” The advantage for philosophers is clear enough: if it’s impossible to talk about morality or ethics strictly by the light of science, that certainly justifies talking about philosophy to the exclusion of anything else. But in light of the facts about shooting hoops or being killed by delusional Germans, I would hope that the absurdity of Moore’s “idea” ought to be self-evident: if it can be demonstrated that something is a matter of luck, and not skill, that changes the moral calculation drastically.

That, then, is the problem with running a “left” based around the study of novels or rituals or films or whatever: at the end of the day, the study of the humanities, just like the practice of the arts, discourages the thought that, as Mlodinow puts it, “chance events are often conspicuously misinterpreted as accomplishments or failures.” And without such a consideration, I would suggest, any talk of “values” or “morality” or whatever you would like to call it, is empty. It matters whether your leader is lucky or skillful; it matters whether success is the result of hard work or of who your parents are—and a “left” built on the opposite premises is not, to my mind, a “left” at all. Although many people in the “cultural left,” then, might believe that their overt exhortations to virtue outweigh the covert message sent by their institutional positions, reality tells a different tale: if you tell people they can fly, you should not be shocked when they crash.