Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, the president explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but, also—and as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more,” it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that World War II commander Dwight Eisenhower selected Wilson to be his Secretary of Defense when the leader of the Allied war in western Europe was elected president in 1952, which led to the confirmation hearings that made Wilson famous—and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse for General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how people within an organization can sometimes forget about the larger purposes of the organization—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of the larger society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” asserts that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, in order to understand how an organization’s purposes might differ from those of the larger society, a big clue might be in how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that figure would be nearly ten times as large today—and, given contemporary trends in corporate pay for executives, would surely be even greater than that: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other financial rewards of the type possessed by Charlie Wilson, but also two other sorts: internal rewards within the organization itself—and rewards concerning the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” adds is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may perhaps circle back around to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by New York University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-First Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, “that racial identity trumps class,” rather than the other way around. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more or less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law in this case, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in less productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
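Stiglitz’s arithmetic can be checked directly. Here is a minimal sketch of the aggregate-demand logic in Python, assuming a stylized spending rule of my own invention (the $50,000 threshold and the 5 percent marginal rate are illustrative assumptions, not Stiglitz’s figures):

```python
# A minimal sketch of Stiglitz's aggregate-demand point. The spending
# rule is a stylized assumption: a household spends all of roughly its
# first $50,000 of income, and only a small slice of everything above.

def spending(income, threshold=50_000, mpc_above=0.05):
    """Stylized household spending: everything up to `threshold` is
    spent; only `mpc_above` of income beyond it is."""
    if income <= threshold:
        return income
    return threshold + mpc_above * (income - threshold)

total = 21_700_000  # Romney's 2010 income, per Stiglitz

# Case 1: the whole sum goes to one household.
concentrated = spending(total)

# Case 2: the same sum divided into 500 jobs at $43,400 apiece.
distributed = 500 * spending(total / 500)

print(f"concentrated: ${concentrated:,.0f} spent")  # about $1.1 million
print(f"distributed:  ${distributed:,.0f} spent")   # all $21.7 million
```

Whatever exact numbers one assumes, any spending rule of that declining shape yields the same verdict: the divided income generates many times the spending of the concentrated income, which is Stiglitz’s “ironclad” law in miniature.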

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed around “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

Lest The Adversary Triumph

… God, who, though his power
Creation could repeat, yet would be loath
Us to abolish, lest the Adversary
Triumph …
—John Milton, Paradise Lost, Book XI

… the literary chit-chat which makes the reputation of poets boom and crash in an imaginary stock exchange …
—Northrop Frye, The Anatomy of Criticism

A list of articles from the “liberal” magazine Salon.com. The first is an attack on Darwinians like Richard Dawkins; the others ridicule creationists for being anti-Darwinian.

 

“Son, let me make one thing clear,” Air Force General Curtis LeMay, the longtime head of the Strategic Air Command, supposedly said sometime in the 1950s to a young officer who repeatedly referred to the Soviet Union as the “enemy” during a presentation about Soviet nuclear capabilities. “The Soviet Union,” the general explained, “is our adversary. The enemy is the United States Navy.” Similarly, the “sharp rise in U.S. inequality, especially at the very top of the income scale” in recent years—as Nobel Prize winner Paul Krugman called it, in 1992—might equally be the result of confusion: as Professor Walter Benn Michaels of the University of Illinois at Chicago has written, “the intellectual left has responded to the increase in economic inequality by insisting on the importance of cultural identity.” The simplest explanation for that disconnect, I’d suggest, is that while the “intellectual left” might talk a good game about “speaking truth to power” and whatnot, “power” is just their adversary. The real enemy is science, especially Darwinian biology—and, yet more specifically, a concept called “survivorship bias”—and that enmity may demonstrate that the idea of an oppositional politics based around culture, rather than science, is absurd.

Like a lot of American wars, this one is often invisible to the American public, partly because when academics like University of Chicago English professor W.J.T. Mitchell do write for the public, they often claim their modest aim is merely to curb scientific hubris. As Mitchell piously wrote in 1998’s The Last Dinosaur Book: The Life and Times of a Cultural Icon, his purpose in that book was merely to note that “[b]iological explanations of human behavior … are notoriously easy, popular, and insidious.” As far as that goes, of course, Mitchell is correct: the history of the twentieth century is replete with failed applications of Darwinian thought to social problems. But then, the twentieth century is replete with a lot of failed intellectual applications—yet academic humanists tend to focus on blaming biology for the mistakes of the past.

Consider for example how many current academics indict a doctrine called “social Darwinism” for the social ills of a century ago. In ascending order of sophistication, here is Rutgers historian Jackson Lears asserting from the orchestra pit, in a 2011 review of books by well-known atheist Sam Harris, that the same “assumptions [that] provided the epistemological foundations for Social Darwinism” did the same “for scientific racism and imperialism,” while from the mezzanine level of middlebrow popular writing here is William Kleinknecht, in The Man Who Sold The World: Ronald Reagan and the Betrayal of Main Street America, claiming that in the late nineteenth and early twentieth centuries, “social Darwinism … had nourished a view of the lower classes as predestined by genetics and breeding to live in squalor.” Finally, a diligent online search discovers, in the upper balcony, Boston University student Evan Razdan’s bald assertion that at the end of the nineteenth century, “Darwinism became a major justification for racism and imperialism.” I could multiply the examples: suffice it to say that for a good many in academe, it is now gospel truth that Darwinism was on the side of the wealthy and powerful during the early part of the twentieth century.

In reality, however, Darwin was usually thought of as on the side of the poor, not the rich, in the early twentieth century. For investigative reporters like Ida Tarbell, whose The History of the Standard Oil Company is still today the foundation of muckraking journalism, “Darwin’s theory [was] a touchstone,” according to Steve Weinberg’s Taking on the Trust: The Epic Battle of Ida Tarbell and John D. Rockefeller. The literary movement of the day, naturalism, drew its characters “primarily from the lower middle class or the lower class,” as Donald Pizer wrote in Realism and Naturalism in Nineteenth-Century American Fiction, and even a scholar with a pro-religious bent like Doug Underwood must admit, as he does in From Yahweh to Yahoo: The Religious Roots of the Secular Press, that the “naturalists were particularly influenced by the theories of Charles Darwin.” Progressive philosopher John Dewey wrote in 1910’s “The Influence of Darwinism on Philosophy” that Darwin’s On the Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” (As American philosopher Richard Rorty has noted, Dewey and his pragmatists began “from a picture of human beings as chance products of evolution.”) Finally, Karl Marx—a person no one has ever thought to be on the side of the wealthy—thought so highly of Darwin that he exclaimed, in a letter to Friedrich Engels, that On the Origin of Species “contains the basis in natural history for our view.” To blame Darwin for the inequality of the Gilded Age is like blaming Smokey the Bear for forest fires.

Even aside from the plain facts of history, however, you’d think the sheer absurdity of pinning Darwin for the crimes of the robber barons would be self-evident. If a thief cited Matthew 5:40—“And if any man will sue thee at the law, and take away thy coat, let him have thy cloke also”—to justify his theft, nobody would think that he had somehow thereby indicted Jesus. Logically, the idea a criminal cites to justify his crime makes no difference either to the fact of the crime or to the idea: that is why the advocates of civil disobedience, like Martin Luther King Jr., held that lawbreaking in the name of a higher law still requires the lawbreaker to be arrested, tried, and, if found guilty, sentenced. (Conversely, is it somehow worse that King was assassinated by a white supremacist? Or would it have been better had he been murdered in the course of a bank robbery that had nothing to do with his work?) Just because someone commits a crime in the name of an idea, as King sometimes did, doesn’t make the idea itself wrong, nor could it make Martin Luther King Jr. any less dead. And anyway, isn’t the notion of taking a criminal’s word about her motivations at face value dubious?

Somehow, however, the notion that Darwin is to blame for the desperate situation of the poor at the beginning of the twentieth century has been allowed to fester in the American university system: Eric Rauchway, a professor of history at the University of California Davis, even complained in 2007 that anti-Darwinism has become so widespread among his students that it’s now a “cliche of the history paper that during the industrial era” all “misery and suffering” was due to the belief of the period’s “lords of plutocracy” in the doctrines of “‘survival of the fittest’” and “‘natural selection.’” That this makes no sense doesn’t seem to enter anyone’s calculations—despite the fact that most of these “lords,” like John Rockefeller and Andrew Carnegie, were “good Christian gentlemen,” just like many businessmen are today.

The whole idea of blaming Darwin, as I hope is clear, is at best exaggerated and at worst nonsense. But to see the point fully, it’s necessary to ask why all those “progressive” and “radical” thinkers thought Darwin was on their side, not the rich man’s. The answer can be found by thinking clearly about what Darwin actually taught, rather than what some people supposedly used him to justify. And what the biologist taught was the doctrine of natural selection: a process that, understood correctly, is far from a doctrine that favors the wealthy and powerful. It would be closer to the truth to say that, on the contrary, what Darwin taught must always favor the poor against the wealthy.

To many in the humanities, that might sound absurd—but to those uncommitted, let’s begin by understanding Darwin as he understood himself, not by what others have claimed about him. And misconceptions of Darwin begin at the beginning: many people credit Charles Darwin with the idea of evolution, but that was not his chief contribution to human knowledge. A number of very eminent people, including his own grandfather, Erasmus Darwin, had argued for the reality of evolutionary descent long before Charles was even born: in his two-volume work of 1796, Zoonomia; or, the Laws of Organic Life, this older Darwin had for instance asserted that life had been evolving for “millions of ages before the commencement of the history of mankind.” So while the theory of evolution is at times presented as springing unbidden from Erasmus’ grandson Charles’ head, that’s simply not true.

By the time Charles published On the Origin of Species in 1859, the general outline of evolution was old hat to professionals, however shocking it may have been to the general public. On the Origin of Species had the impact it did because of the mechanism Darwin suggested to explain how the evolution of species could have proceeded—not because it presented the facts of evolutionary descent, although it did that in copious detail. Instead, as American philosopher Daniel Dennett has observed, “Darwin’s great idea” was “not the idea of evolution, but the idea of evolution by natural selection.” Or as the biologist Stephen Jay Gould has written, Darwin’s own chief aim in his work was “to advance the theory of natural selection as the most important mechanism of evolution.” Darwin’s contribution wasn’t to introduce the idea that species shared ancestors and hence were not created but evolved—but instead to explain how that could have happened.

What Darwin did was to put evolution together with a means of explaining it. In simplest terms, natural selection is what Darwin said it was in the Origin: the idea that, since “[m]ore individuals are born than can possibly survive,” something will inevitably “determine which individual shall live and which shall die.” In such circumstances, as he would later write in the Historical Sketch of the Progress of Opinion on the Origin of Species, “favourable variations would tend to be preserved, and unfavourable ones would be destroyed.” Or as Stephen Jay Gould has succinctly put it, natural selection is “the unconscious struggle among individual organisms to promote their own personal reproductive success.” The word unconscious is the key here: the organisms don’t know why they have succeeded—nor do they need to understand. They just do—to paraphrase Yoda—or do not.

Why any of this should matter to the humanities or to people looking to contest economic inequality ought to be immediately apparent—and would be in any rational society. But since the American education system seems designed at the moment to obscure the point, I will now describe a scientific concept related to natural selection known as survivorship bias. Although that concept is used in every scientific discipline, it’s a particularly important one to Darwinian biology. There’s an argument, in fact, that survivorship bias is just a generalized version of natural selection, and thus it simply is Darwinian biology.

That’s because the concept of “survivorship bias” describes how human beings are tempted to describe mindless processes as mindful ones. Here I will cite one of the concept’s most well-known contemporary advocates, a trader and professor of something called “risk engineering” at New York University named Nassim Nicholas Taleb—precisely because of his disciplinary distance both from biology and the humanities: his distance from both, as Bertolt Brecht might have described it, “exposes the device” by stripping the idea from its disciplinary contexts. As Taleb says, one example of survivorship bias is the tendency all human beings have to think that someone is “successful because they are good.” Survivorship bias, in short, is the sometimes-dangerous assumption that there’s a cause behind every success. But, as Darwin might have said, that ain’t necessarily so.

Consider, for instance, a hypothetical experiment Taleb constructs in his Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets, consisting of 10,000 money managers. The rules of this experiment are that “each one has a 50% probability of making $10,000 at the end of the year, and a 50% probability of losing $10,000.” If we run this experiment for five years—five runs through randomness—then at the end of those conjectural five years, by the laws of probability we can expect “313 managers who made money for five years in a row.” Is there anything especially clever about these few? No: their success has nothing to do with any quality each might possess. It’s simply due, as Taleb says, to “pure luck.” But these 313 will think of themselves as very fine fellows.
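Taleb’s number is just arithmetic: 10,000 × (1/2)^5 = 312.5. The experiment itself takes only a few lines to run; here is a sketch in Python (the code is mine, not Taleb’s):

```python
import random

MANAGERS = 10_000
YEARS = 5

# Each manager "wins" a given year with probability 0.5; the survivors
# are those who happen to win all five years in a row.
survivors = sum(
    all(random.random() < 0.5 for _ in range(YEARS))
    for _ in range(MANAGERS)
)

print("expected:", MANAGERS * 0.5**YEARS)  # 312.5
print("observed:", survivors)              # about 313 on a typical run
```

Run it and roughly 313 “winners” appear every time, yet by construction not one of them differs in any way from the thousands of losers.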

Now, notice that, by substituting the word “zebra” for the words “money managers” and “10 offspring” for “$10,000,” Taleb has more or less described the situation of the Serengeti Plain—and, as early twentieth-century investigative reporter Ida Tarbell realized, the wilds of Cleveland, Ohio. Tarbell, in 1905’s “John D. Rockefeller: A Character Study,” actually says that by 1868, when Rockefeller was a young businessman on the make, he “must have seen clearly … that nothing but some advantage not given by nature or recognized by the laws of fair play in business could ever make him a dictator in the industry.” In other words, Rockefeller saw that if he merely allowed “nature,” as it were, to take its course, he stood a good chance of being one of the 9000-odd failures, instead of the 300-odd success stories. Which is why he went forward with the various shady schemes Tarbell goes on to document in her studies of the man and his company. (Their details are nearly unbelievable—unless you’re familiar with the details of the 2008 housing bubble.) The Christian gentleman John D. Rockefeller, in other words, hardly believed in the “survival of the fittest.”

It should, in other words, be clear just how necessary the concept of survivorship bias—and thus Darwin’s notion of natural selection—is to any discussion of economic inequality. Max Weber, the great founder of sociology, at least understood it—that’s why, in The Protestant Ethic and the Spirit of Capitalism, Weber famously described the Calvinist doctrine of predestination, in which “God’s grace is, since His decrees cannot change, as impossible for those to whom He has granted it to lose as it is unattainable for those to whom He has denied it.” As Weber knew, if the Chosen of God are known by their worldly success, then there is no room for debate: the successful simply deserve their success in a fashion not dissimilar to the notion of the divine right of kings.

If, however, there’s a possibility that worldly success is due to chance, i.e. luck, then the road is open to argue about the outcomes of the economic system. Since John D. Rockefeller, at least according to Tarbell, certainly did act as though worldly success was far more due to “chance” than the fair outcome of a square game, one could, I suppose, argue that he was a believer in Darwinism, as the believers in the “social Darwinist” camp say. But that seems to stretch the point.

Still, what has this to do with the humanities? The answer is that you could do worse than define the humanities by saying they are the disciplines of the university that ignore survivorship bias—although, if so, that might mean that “business” ought to be classified alongside comparative literature in the course catalogue, at least as Taleb puts it.

Consider economist Gary Smith’s Standard Deviations: Flawed Assumptions, Tortured Data, And Other Ways To Lie With Statistics. As Michael Shermer of Pomona College notes in a review of Smith’s book, Smith shows how business books like Jim Collins’ Good to Great “culled 11 companies out of 1,435 whose stock beat the market average over a 40-year time span and then searched for shared characteristics among them,” or how In Search of Excellence, 1982’s best-seller, “identified eight common attributes of 43 ‘excellent’ companies.” As Taleb says in his The Black Swan: The Impact of the Highly Improbable, such studies “take a population of hotshots, those with big titles and big jobs, and study their attributes”—they “look at what those big guns have in common: courage, risk taking, optimism and so on, and infer that these traits, most notably risk taking, help you to become successful.” But as Taleb observes, the “graveyard of failed persons [or companies] will be full of people who shared the following traits: courage, risk taking, optimism, et cetera.” The problem with “studies” like these is that they begin with Taleb’s 313, instead of the 10,000.
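A few more lines of simulation show why the Good to Great method always “finds” something. Suppose, purely for illustration, that every company receives a set of meaningless coin-flip traits before the winners are culled (the trait count and the random-market assumption here are mine, not Smith’s or Collins’):

```python
import random

N, TRAITS, YEARS = 1_435, 10, 5  # 1,435 companies, as in Collins' pool

# Every company gets ten meaningless yes/no "traits" and a coin-flip
# market record; no trait has anything to do with performance.
companies = [
    {"traits": [random.random() < 0.5 for _ in range(TRAITS)],
     "beat_market": all(random.random() < 0.5 for _ in range(YEARS))}
    for _ in range(N)
]

winners = [c for c in companies if c["beat_market"]]

# The share of winners possessing a given trait.
def share(t):
    return sum(c["traits"][t] for c in winners) / len(winners)

# Find the trait most lopsidedly "shared" among the winners.
top = max(range(TRAITS), key=lambda t: abs(share(t) - 0.5))
print(f"{len(winners)} winners; trait {top} shows up in {share(top):.0%}")
```

With only forty-odd winners left after the cull, some purely random trait will typically show up in 60 to 70 percent of them: a “shared characteristic” manufactured entirely by the culling.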

Another way to describe “survivorship bias,” in other words, is to say that any real investigation into anything must consider what Taleb calls the “silent evidence”: in the case of the 10,000 money managers, it’s necessary to think of the 9000-odd managers who started the game and failed, and not just the 300-odd managers who succeeded. Such studies will surely always find “commonalities” between the “winners,” just as Taleb’s 313 will surely always discover some common trait among them—and in the same way that a psychic can always “miraculously” know that somebody just died.

Yet, why should the intellectual shallowness of business writers matter to scholars in the humanities, who write not for popular consumption but for peer review? Well, because as Taleb points out, the threat posed by survivorship bias is not particular to shoddy studies and shoddy scholars, but instead is endemic to entire species of writing. Take for instance Shermer’s discussion of Walter Isaacson’s 2011 biography of Apple Computer’s Steve Jobs … which I’d go into if it were necessary.

But it isn’t, according to Taleb: the “entire notion of biography,” Taleb says in The Black Swan, “is grounded in the arbitrary ascription of a causal relation between specified traits and subsequent events.” Biography by definition takes a number of already-successful entities and then tries to explain their success, instead of starting with equally-unknown entities and watching them either succeed or fail. Nobody finds Beethoven before birth, and even Jesus Christ didn’t pick up disciples before adulthood. Biographies, then, might be entertaining, but they can’t possibly have any real intellectual substance. Biographies could only really be valuable if their authors predicted a future success—and nobody could possibly write a predictive biography. Biography, then, simply is an exercise in survivorship bias.

And if biography, then how about history? About the only historians who discuss the point of survivorship bias are those who write what’s known as “counterfactual” history. A genre largely kicked off by journalist MacKinlay Kantor’s fictitious 1960 speculation, If the South Had Won the Civil War, it’s been defined by former Regius Professor of History at Cambridge University Richard J. Evans as “alternative versions of the past in which one alteration in the timeline leads to a different outcome from the one we know actually occurred.” Or as David Frum, thinking in The Atlantic about what might have happened had the United States not entered World War I in 1917, says about his enterprise: “Like George Bailey in It’s a Wonderful Life, I contemplate these might-have-beens to gain a better appreciation for what actually happened.” In statements like these, historians confront the fact that their discipline is inevitably subject to the problem of survivorship bias.

Maybe that’s why counterfactual history is also a genre with a poor reputation among historians: Evans himself has condemned the genre, in The Guardian, by writing that it “threatens to overwhelm our perceptions of what really happened in the past.” “The problem with counterfactuals,” Evans says, “is that they almost always treat individual human actors … as completely unfettered,” when in fact historical actors are nearly always constrained by larger forces. FDR could, hypothetically, have called for war in 1939—it’s just that he probably wouldn’t have been elected in 1940, and someone else would have been in office on that Sunday in Oahu. Which, sure, is true, and responsible historians have always, as Evans says, tried “to balance out the elements of chance on the one hand, and larger historical forces (economic, cultural, social, international) on the other, and come to some kind of explanation that makes sense.” That, to be sure, is more or less the historian’s job. But I am sure the man on the wire doesn’t like to be reminded of the absence of a net either.

The threat posed by survivorship bias extends even into genres that might appear immune to it: surely the study of literature, which isn’t about “reality” in any strict sense, ought to escape the acid bath of survivorship bias. But look at Taleb’s example of how a consideration of survivorship bias affects just how we think about literature, in the form of a discussion of the reputation of the famous nineteenth-century French novelist Honoré de Balzac.

Let’s say, Taleb proposes, someone asks you why Balzac deserves to be preserved as a great writer, and in reply “you attribute the success of the nineteenth-century novelist … to his superior ‘realism,’ ‘insights,’ ‘sensitivity,’ ‘treatment of characters,’ ‘ability to keep the reader riveted,’ and so on.” As Taleb points out, those characteristics only work as a justification for preserving Balzac “if, and only if, those who lack what we call talent also lack these qualities.” If, on the other hand, there are actually “dozens of comparable literary masterpieces that happened to perish” merely by chance, then “your idol Balzac was just the beneficiary of disproportionate luck compared to his peers.” Without knowing who Balzac’s competitors were, in other words, we are not in a position to know with certainty whether Balzac’s success is due to something internal to his work, or whether his survival is simply the result of dumb luck. So even literature is threatened by survivorship bias.

If you wanted to define the humanities, again, you could do worse than to say they are the disciplines that pay little to no attention to survivorship bias. Which, one might say, is fine: “In my father’s house are many mansions,” to cite John 14:2. But the trouble may be that since, as Taleb or Smith—and the examples could be multiplied—point out, the work of the humanities shares the same “scholarly” standards as that of many “business writers,” it does not really matter how “radical”—or even merely reformist—their claims are. The similarities of method may simply overwhelm the message.

In that sense, then, despite the efforts of many academics, centering a leftist politics on the classrooms of the English department rather than the scientific lab just may not be possible: the humanities will always be centered on fending off survivorship bias in the guise of biology’s threat to “reduce the complexities of human culture to patterns in animal behavior,” as W.J.T. Mitchell says—and in so doing, the disciplines of culture will inevitably end up arguing, as Walter Benn Michaels says, “that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” The humanities are antagonistic to biology because the central concept of Darwinian biology, natural selection, is a version of the principle of survivorship bias, while survivorship bias is a concept that poses a real and constant intellectual threat to the humanities—and finally, to complete the circle, survivorship bias is the only argument against allowing the rich to run the world according to their liking. It may then not be any wonder why, as the tide has gone out on the American dream, the American academy has essentially responded by saying “let’s talk about something else.” To the gentlemen and ladies of the American disciplines of the humanities, the wealthy are just the adversary.

Fine Points

 

Whenever asked a question, [John Lewis] ignored the fine points of whatever theory was being put forward and said simply, “We’re gonna march tonight.”
—Taylor Branch. Parting the Waters: America in the King Years, Vol. 1.

 

 

“Is this how you build a mass movement?” asked social critic Thomas Frank in response to the Occupy Wall Street movement: “By persistently choosing the opposite of plain speech?” To many in the American academy, the debate is over—and plain speech lost. More than fifteen years ago, articles like philosopher Martha Nussbaum’s 1999 criticism of professor Judith Butler, “The Professor of Parody,” or political scientist James Miller’s late 1999 piece “Is Bad Writing Necessary?” got published—and both articles sank like pianos. Since then it’s seemed settled that (as Nussbaum wrote at the time) the way “to do … politics is to use words in a subversive way.” Yet at a minimum this pedagogy diverts attention from, as Nussbaum says, “the material condition of others”—and at worst, as professor Walter Benn Michaels suggests, it turns the academy into “the human resources department of the right, concerned that the women [and other minorities] of the upper middle class have the same privileges as the men.” Supposing then that bad writers are not simply playing their part in class war, what is their intention? I’d suggest that subversive writing is best understood as a parody of a tactic used, but not invented, by the civil rights movement: packing the jails.

“If the officials threaten to arrest us for standing up for our rights,” Martin Luther King, Jr. said in a January 1960 speech in Durham, North Carolina, “we must answer by saying that we are willing and prepared to fill up the jails of the South.” King’s speech spoke directly to the movement’s pressing problem: bailing out protestors cost money. In response, Thomas Gaither, a field secretary for the Congress of Racial Equality (CORE), devised a solution: he called it “Jail No Bail.” Taylor Branch, the historian, explained the concept in Parting the Waters: America in the King Years 1954-63: the “obvious advantage of ‘jail, no bail’ was that it reversed the financial burden of protest, costing the demonstrators no cash while obligating the white authorities to pay for jail space and food.” All protestors had to do was get arrested and serve the time—and thereby cost the state their room and board.

Yet Gaither did not invent the strategy. “Packing the jails” as a strategy began, so far as I can tell, in October of 1909; so reports the Minnesotan Harvey O’Connor in his 1964 autobiography Revolution in Seattle: A Memoir. All that summer, the Industrial Workers of the World (the “Wobblies”) had been engaged in a struggle against “job sharks”: companies that claimed to procure jobs for their clients after the payment of a fee—and then failed to deliver. (“It was customary,” O’Connor wrote, “for the employment agencies … to promote a rapid turnover”: the companies would take the money and either not produce the job, or the company that “hired” the newly-employed would fire them shortly afterwards.) In the summer of 1909 those companies succeeded in getting public assemblies and speeches by the Wobblies banned, and legal challenges proved impossible. So in the October of that year the Wobblies “sent out a call” in the labor organization’s newspaper, the Industrial Worker: “Wanted: Men To Fill The Jails of Spokane.”

Five days later, the Wobblies held a “Free Speech Day” rally, and managed to get 103 men arrested. By “the end of November 500 Wobblies were in jail.” Through the “get arrested” strategy, the laborers filled the city’s jail “to bursting and then a school was used for the overflow, and when that filled up the Army obligingly placed a barracks at the city’s command.” And so the Wobblies’ strategy was working: the “jail expenses threatened to bankrupt the treasuries of cities even as large as Spokane.” As American writer and teacher Archie Binns had put the same point in 1942: it “was costing thousands of dollars every week to feed” the prisoners, and so the city was becoming “one big jail.” In this way, the protestors threatened to “eat the capitalistic city out of house and home”—and so the “city fathers” of Spokane backed down, instituting a permitting system for public marches and assemblies. “Packing the jails” won.

What, however, has this history to do with the dispute between plain-speakers and bad writers? In the first place, it demonstrates how our present-day academy would much rather talk about Martin Luther King, Jr. and CORE than Harvey O’Connor and the Wobblies. Writing ruefully about left-wing professors like himself, Walter Benn Michaels observes, “We would much rather get rid of racism than get rid of poverty”; elsewhere he says, “American liberals … carry on about racism and sexism in order to avoid doing so about capitalism.” Despite the fact that, historically, the civil rights movement borrowed a lot from the labor movement, today’s left doesn’t have much to say about that—nor much about today’s inequality. So connecting the tactics of the Wobblies to those of the civil rights movement is important because it demonstrates continuity where today’s academy wants to see, just as much as any billionaire, a sudden break.

That isn’t the only point of bringing up the “packing the jails” tactic, however—the real point is that writers like Butler are making use of a version of this argument without publicly acknowledging it. As laid out by Nussbaum and others, the unsaid argument or theory or idea or concept (whatever name you’d have for it) behind “bad” writing is a version of “packing the jails.” To be plain: the idea that, by filling enough academic seats (with the right sort of person), political change will somehow automatically follow, through a kind of osmosis.

Admittedly, no search of the writings of America’s professors, Judith Butler or otherwise, will discover a “smoking gun” regarding that idea—if there is one, presumably it’s buried in an email or in a footnote in a back issue of Diacritics from 1978. The thesis can only be discovered in the nods and understandings of the “professionals.” On what warrant, then, can I claim that it is their theory? If that’s the plan, how do I know?

My warrant extends from a man who knew, as Garry Wills of Northwestern says, something about “the plain style”: Abraham Lincoln. To Lincoln, the only possible method of interpretation is a judgment of intent: as Lincoln said in his “House Divided” speech of 1858, “when we see a lot of framed timbers, different portions of which we know have been gotten out at different times and places by different workmen,” and “we see these timbers joined together, and see they exactly make the frame of a house or a mill,” why, “in such a case we find it impossible not to believe” that everyone involved “all understood each other from the beginning.” Or as Walter Benn Michaels has put the same point: “you can’t do textual interpretation without some appeal to authorial intention.” In other words, when we see a lot of people acting in similar ways, we should be able to make a guess about what they’re trying to do.

In the case of Butlerian feminists—and, presumably, other kinds of bad writers—bad writing allows them to “do politics in [the] safety of their campuses,” as Nussbaum says, by “making subversive gestures through speech.” Instead of “packing the jails” this pedagogy, this bad writing, teaches “packing the academy”: the theory presumably being that, just as Spokane could only jail so many people, the academy can only hold so many professors. (Itself an issue, because there are a lot fewer professorships available these days, and their number is only liable to shrink.) Since, as Abraham Lincoln said about what he saw in the late 1850s, we can only make a guess—but we must make a guess—about what those intentions are, I’d hazard that my guess is more or less what these bad writers have in mind.

Unfortunately, in the hands of Butler and others, bad writing is only a parody—it ignores the very real differences between the act of going to jail and that of attempting to become, say, the Coca-Cola Professor of Rhetoric at Wherever State. A black person willing to go to jail in the South in 1960 was a person with a great deal of courage—and still would be today. But it’s also true that it’s unlikely the courageous civil rights volunteers would have conceived of, much less carried out, the act of attempting to “pack the jails” without the example of the Wobblies prior to them—just as it might be argued that, without the sense of being of the same race and gender as their oppressors, the Wobblies might not have had the courage to pack the jails of Spokane. So it certainly could be argued that the work of the “bad writers” is precisely to make those connections—and so create the preconditions for similar movements in the future.

Yet, as George Orwell might have asked, “where’s the omelette?” Where are the people in jail—and where are the decent pay and equal rights that might follow them? Butler and other “radical” critics don’t produce either: I have no reliable information about Judith Butler’s arrest record, but I’d suspect it’s not much. So Nussbaum’s observation that Butler’s pedagogy “instructs people that they can, right now, without compromising their security, do something bold” [emp. added] wasn’t entirely snide then, and her words look increasingly prescient now. That’s what Nussbaum means when she says that “Butlerian feminism is in many ways easier than the old feminism”: it is a path that demonstrates to middle-class white people, women especially, just how they can “dissent” without giving up their status or power. Nussbaum thus implies that feminism or any other kind of “leftism” practiced along Butler’s lines is, quite literally, physically cowardly—and, perhaps more importantly, she suggests just why the “left,” such as it is, is losing.

For surely the “Left” is losing: as many, many people besides Walter Benn Michaels have written, economic inequality has risen, and is rising, even as the sentences and jargon of today’s academics have become more complex—and the academy’s own power slowly dissolves into a mire of adjunct professorships and cut-rate labor policies. Emmanuel Saez of the University of California says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928,” and Nobel Prize winner Paul Krugman says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” We witness the rise of plutocrats on a scale not seen since perhaps the fall of the Bourbons—or even the Antonines.

That is not to suggest, to be sure, that individual “bad writers” are or are not cowards: merely to be a black person or a woman requires levels of courage many people will never be aware of in their lifetimes. Yet, Walter Benn Michaels is surely correct when he says that as things now stand, the academic left in the United States today is largely “a police force for, rather than an alternative to, the right,” insofar as it “would much rather get rid of racism [or sexism] than get rid of poverty.” Fighting “power” by means of a program of bad writing, rather than good writing—writing designed to appeal to great numbers of people—is so obviously stupid it could only have been invented by smart people.

The objection is that giving up the program of Butlerian bad writing requires giving up the program of “liberation” her prose suggests: what Nussbaum calls Butler’s “radical libertarian” dream of the “sadomasochistic rituals of parody.” Yet as Thomas Frank has suggested, it’s just that kind of libertarian dream that led the United States into this mess in the first place: America’s recent troubles have, Frank says, resulted from “the political power of money”—a political power that was achieved courtesy of “a philosophy of liberation as anarchic in its rhetoric as Occupy [Wall Street] was in reality” [emp. Frank’s]. By rejecting that dream, American academics might obtain “food, schools, votes” and (possibly) less rape and violence for both women and men alike. But how?

Well, I have a few ideas—but you’d have to read some plain language.

Now and Forever

[B]ehold the … ensign of the republic … bearing for its motto, no such miserable interrogatory as “What is all this worth?” nor those other words of delusion and folly, “Liberty first and Union afterwards” …
—Daniel Webster. Second Reply to Hayne. 27 January 1830. 

       

       

The work on Medinah’s Course #1, older-but-not-as-accomplished brother to Course #3, began almost as soon as the last putt was struck during this year’s Ryder Cup. Already the ‘scape looks more moon than land, perhaps like a battlefield after the cannon have been silenced. Quite a few trees have been taken out, in keeping with Tom Doak’s philosophy of emphasizing golf’s ground (rather than aerial) game. Still, as interesting as it might be to discuss the new routing Doak is creating, the more significant point about Medinah’s renovation is that it is likely one of the few projects that Doak, or any other architect, has going on American soil right now. Yet today might be one of the best opportunities ever for American golf architecture—assuming, that is, Americans can avoid two different hazards.

The first hazard might be presented by people who’d prefer we didn’t remember our own history: in this case, the fact that golf courses were once weapons in the fight against the Great Depression. While immediately on assuming office in early 1933 Franklin Roosevelt began the Federal Emergency Relief Administration—which, as Encyclopedia.com reminds us, had the “authority to make direct cash payments to those with no other means of support,” amazing enough in this era when even relief to previously-honored homeowners is considered impossible—by 1935 that program had evolved into the Works Progress Administration. By 1941, the WPA had invested $11.3 billion (in 1930s dollars!) in 8 million workers and such projects as 1,634 schools, 105 airports, 3,000 tennis courts, 3,300 dams, and 5,800 mobile libraries. And lastly, but perhaps not leastly, 103 golf courses.

According to a fine website called The Living New Deal, dedicated to preserving the history of the New Deal’s contributions to American life, not only did these courses have some economic impact on their communities and the nation as a whole, but some good courses got built—good enough to have had an impact on professional golf. The University of New Mexico’s North Course, for instance, was the first golf course in America to measure more than 7,000 yards—today the standard for professional-length courses—and was the site of a PGA Tour stop in 1947. The second 18-hole course in New Orleans’ City Park—a course built by the WPA—was host to the New Orleans Open for decades.

Great architects designed courses built by the WPA. Donald Ross designed the George Wright Golf Course in Boston, opened in 1938. A.W. Tillinghast designed the Black course at Bethpage State Park, opened in the depths of the Depression in 1936. George Wright is widely acclaimed as one of Ross’ best designs, while the Black hosted the first U.S. Open held at a government-owned golf course, in 2002, and then held an encore in 2009. Both Opens were successful: Tiger won the first, Lucas Glover the second, and six players, total, were under par in the two tournaments. In 2012, Golf Digest rated it #5 in its list of America’s toughest courses—public or private. (Course #3 at Medinah ranked 16th.)

Despite all that, some time ago one Raymond Keating at the Foundation for Economic Education wrote that “Bethpage represents what is wrong with … golf.” He also claimed that “there is no justification whatsoever for government involvement in the golf business.” But, aside from the possibility of getting another Bethpage Black, there are at the moment a number of reasons for Americans to invest in golf courses or other material improvements to their lives, whether high-speed rail or reconstructed bridges.

The arguments by the economists can be, and are, daunting, but one point that everyone may agree on is that it is unlikely that Americans will ever again be able to borrow money on such attractive terms: as Elias Isquith put it at the website The League of Ordinary Gentlemen, the bond market is “still setting interest rates so low it’s almost begging the US to borrow money.” The dollars that we repay these loans with, in short, will in all likelihood—through the workings of time and inflation—be worth less than the ones on offer now. That’s one reason why Paul Krugman, the Nobel Prize-winning economist, says that “the danger for next year is not that the [federal] deficit will be too large but that it will be too small, and hence plunge America back into recession.” By not taking advantage of this cheap money that is, essentially, just lying there, America is effectively leaving productive forces (like Tom Doak’s company) idle, instead of engaging them in work: the labor that grows our economy.
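The “cheap money” arithmetic is easy to make concrete. Here is a back-of-envelope sketch, in which the 2 percent borrowing rate and the 2.5 percent inflation rate are illustrative assumptions of mine, not actual Treasury figures:

```python
principal = 1_000_000_000  # borrow $1 billion today
nominal_rate, inflation, years = 0.02, 0.025, 10

repayment = principal * (1 + nominal_rate) ** years   # dollars paid back
real_cost = repayment / (1 + inflation) ** years      # in today's dollars

print(f"nominal repayment: ${repayment:,.0f}")  # about $1.22 billion
print(f"real cost today:   ${real_cost:,.0f}")  # about $0.95 billion
```

Whenever inflation runs above the borrowing rate, the real cost of repayment comes out below the sum borrowed, which is what the bond market’s “begging” amounts to.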

America thus has a historic opportunity for golf: given that American companies, like Tom Doak’s or Rees Jones’ or Pete Dye’s or Ben Crenshaw and Bill Coore’s, or any number of others, are at the forefront of golf design today, it would be possible to create any number of state-of-the-art golf courses that would first, stimulate our economy, and secondly, reward American citizens with some of the finest facilities on the planet at what would be essentially little to no cost. And, it might be worth bringing up, maybe that could help us with regard to that troublesome series of golf events known as the Ryder Cup: maybe a generation of golfers weaned on fine public, instead of private, courses might understand the ethos of team spirit better than the last several ensembles fielded by our side.

Unless, that is, another faction of American citizens has its way. On the outskirts of San Francisco, there is a golf course known as Sharp Park. It was originally designed by Alister MacKenzie, the architect who also designed Cypress Point and Pasatiempo, in California, and public golf courses for both the University of Michigan and the Ohio State University (both thought to be among the finest college courses in the world)—and also a course for a small golf club named the Augusta National Golf Club. Sharp Park remains the only public course MacKenzie designed on the ocean, and MacKenzie’s goal in designing it was to create “the finest municipal golf course in America”—a goal that, present-day conditioning aside, many experts would say he achieved, or nearly achieved.

Unfortunately, a small number of “environmentalists,” as reported by San Francisco’s “alternative” newspaper, SF Weekly, now “want the site handed over to the National Park Service for environmental restoration.” According to a story by Golf Digest, the activists “contend it harms two endangered species, the San Francisco garter snake and California red-legged frog.” A year ago, though, a federal judge found that, contrary to the environmentalists’ accusations, “experts for both sides agree[d] that the overall Sharp Park frog population has increased during the last 20 years.” Ultimately, in May of this year, the judge found the evidence that the golf course’s existence harmed the two endangered species so weak that the court in effect dismissed the lawsuit, saying it was better that the public agencies responsible for monitoring the two species continue to do their job, rather than the judiciary.

I bring all of this up because, in investigating the case of Sharp Park, it is hard to avoid considering that the source of the environmentalists’ actions wasn’t so much concern for the two species—which, it must be pointed out, appear to be doing fine, at least within the boundaries of the park—as it was animosity towards the sport of golf itself. The “anti-Sharp Park” articles I consulted, for instance, such as the SF Weekly piece I mentioned above, did not see fit to note Alister MacKenzie’s involvement in the course’s design. Omissions like that are, in my view, a serious weakness in any claim of objectivity regarding the case.

Still, regardless of the facts in this particular case, the instance of Sharp Park may be illustrative of how a particular form of “leftism” can be, in its own way, as defeatist and gloomy as that species of “conservatism” that would condemn us to lifetimes of serving the national debt. Had we a mass “environmental movement” in the 1930s, in other words, how many of those golf courses—not to mention all of the other projects constructed by the WPA and other agencies—would have gotten built?

That isn’t to say, of course, that anyone is in favor of dirty air or water; far from it. It is, though, to say that, for a lot of so-called leftists, the problem with America is Americans, and that that isn’t too far from saying, with conservatives and Calvin Coolidge, that the “business of the American people is business.” We can choose to serve other masters, one supposes—whether they be of the future or the past—but I seem to recall that America isn’t supposed to work that way. The best articulation of the point, as it happens, may have been delivered precisely one hundred and forty-nine years ago on the 19th of November, over a shredded landscape where the guns had fallen quiet.

I’ll give you a hint: it included the phrase “of the people, by the people, for the people.”