Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, Roosevelt explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but, also—and as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more,” it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that Dwight Eisenhower, who had commanded the Allied war in western Europe, selected Wilson to be his Secretary of Defense after being elected president in 1952, a choice that led to the confirmation hearings that made Wilson famous—and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse to the interests of General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how people within an organization can sometimes forget about the larger purposes of the organization—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” asserts that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, in order to understand how an organization’s purposes might differ from those of the larger society, a big clue might be found in how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that stake would be worth nearly ten times as much today—and, given contemporary trends in corporate pay for executives, would surely be even greater than that: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and the other financial sorts of reward of the type possessed by Charlie Wilson, but also two other sorts: the internal, social rewards of belonging to the organization itself—and the rewards that concern the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” adds is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may perhaps circle back around to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by New York University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-first Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, “that racial identity trumps class,” rather than the other way around. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more-or-less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law in this case, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in less productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
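
To see the arithmetic of that law in miniature, here is a toy calculation in Python. The consumption function below is my own illustrative assumption, not Stiglitz’ model; only the $21.7 million and $43,400 figures come from the passage above.

```python
# A toy "marginal propensity to consume": households spend nearly all of a
# modest income, but only a small share of income above a comfortable level.
# The $40,000 threshold and the 15% rate are illustrative assumptions.
def spending(income):
    essentials = min(income, 40_000)
    discretionary = 0.15 * max(income - 40_000, 0)
    return essentials + discretionary

one_household = spending(21_700_000)          # one household earning $21.7 million
five_hundred_jobs = 500 * spending(43_400)    # the same sum split into 500 salaries

print(f"Spent when concentrated in one household: ${one_household:,.0f}")
print(f"Spent when divided among 500 households:  ${five_hundred_jobs:,.0f}")
```

Even under so crude an assumption, the divided income produces several times the spending of the concentrated one, which is the mechanism behind Stiglitz’ “ironclad” law.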

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of grasping the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

Buck Dancer’s Choice

Buck Dancer’s Choice: “a tune that goes back to Saturday-night dances, when the Buck, or male partner, got to choose who his partner would be.”
—Taj Mahal. Oooh So Good ‘n’ Blues. (1973).

 

“Goddamn it,” Scott said, as I was driving down the Kennedy Expressway towards Medinah Country Club. Scott is another caddie I sometimes give rides to; he’s living in the suburbs now and has to take the train into the city every morning to get his methadone pill, after which I pick him up and take him to work. On this morning, Scott was distracting himself, as he often does, from the traffic outside by playing, on his phone, the card game known as spades—a game in which, somewhat like contract bridge, two players team up against an opposing partnership. On this morning, he was matched with a bad partner—a player who, it came to light later, had not covered an opposing ten of spades with the king he held, and instead had played a three of spades. (In so doing, Scott’s incompetent partner threw away the three while receiving nothing in return.) Since, as I agree, that sounds relentlessly boring, I wouldn’t have paid much attention to the whole complaint—until I realized that Scott’s grumble about his partner not only essentially described the chief event of the previous night’s baseball game, but also explained why so many potential Democratic voters will likely sit out this election. After all, arguably the best Democratic candidate for the presidency this year will not be on the ballot in November.

What had happened the previous night was described on ESPN’s website as “one of the worst managerial decisions in postseason history”: in a one-game, extra-innings playoff between the Baltimore Orioles and the Toronto Blue Jays, Orioles manager Buck Showalter used six relief pitchers after starter Chris Tillman got pulled in the fifth inning. But he did not order his best reliever, Zach Britton, into the game at all. During the regular season, Britton had been one of the best relief pitchers in baseball; as ESPN observed, Britton had allowed precisely one earned run since April, and as Jonah Keri wrote for CBS Sports, over the course of the year Britton posted an Earned Run Average (.53) that was “the lowest by any pitcher in major league history with that many innings [67] pitched.” (And as Deadspin’s Barry Petchesky remarked the next day, Britton had “the best ground ball rate in baseball”—which, given that the Orioles ultimately lost on a huge, moon-shot walk-off home run by Edwin Encarnacion, seems especially pertinent.) Despite the fact that the game went 11 innings, Showalter did not put Britton on the mound even once—which is to say that the Orioles ended their season with one of their best weapons sitting on the bench.

Showalter had the king of spades in his hand—but neglected to play him when it mattered. He defended himself later by saying, essentially, that he is the manager of the Baltimore Orioles, and that everyone else was lost in hypotheticals. “That’s the way it went,” the veteran manager said in the post-game press conference—as if the “way it went” had nothing to do with Showalter’s own choices. Some journalists speculated, in turn, that Showalter’s choices were motivated by what Deadspin called “the long-held, slightly-less-long-derided philosophy that teams shouldn’t use their closers in tied road games, because if they’re going to win, they’re going to need to protect a lead anyway.” In this possible view, Showalter could not have known how long the game would last, and could only know that, until his team scored some runs, the game would continue. If so, then it might be possible to lose by using your ace of spades too early.

Yet, not only did Showalter deny that such was a factor in his thinking—“It [had] nothing to do with ‘philosophical,’” he said afterwards—but such a view takes things precisely backward: it’s the position that imagines the Orioles scoring some runs first that’s lost in hypothetical thinking. Indisputably, the Orioles needed to shut down the Jays in order to continue the game; the non-hypothetical problem presented to the Orioles manager was that the O’s needed outs. Showalter had the best instrument available to him to make those outs … but didn’t use him. And that is to say that it was Showalter who got lost in his imagination, not the critics. By not using his best pitcher, Showalter was effectively reacting to an imagined, hypothetical scenario instead of responding to the actual facts playing out before him.

What Showalter was flouting, in other words, was a manner of thinking that is arguably the reason for what successes there are in the present world: probability, the first principle of which is known as the Law of Large Numbers. First conceived by the Italian Gerolamo Cardano during the sixteenth century, and later proved and publicized by the Swiss mathematician Jacob Bernoulli, the Law of Large Numbers holds that, as Bernoulli put it in his Ars Conjectandi of 1713, “the more observations … are taken into account, the less is the danger of straying.” Or, that the more observations, the less the danger of reaching wrong conclusions. What Bernoulli is saying, in other words, is that in order to demonstrate the truth of something, the investigator should look at as many instances as possible: a rule that is, largely, the basis for science itself.
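
Bernoulli’s rule is easy to watch in action. Here is a minimal sketch, my own example rather than anything from Ars Conjectandi: flip a fair coin in Python and watch the observed frequency close in on the true value of one-half as the number of observations grows.

```python
import random

random.seed(42)

def observed_heads_rate(n_flips):
    """Flip a fair coin n_flips times and return the fraction that come up heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# "The more observations ... the less is the danger of straying."
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} flips: frequency of heads = {observed_heads_rate(n):.4f}")
```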

What the Law of Large Numbers says then is that, in order to determine a course of action, it should first be asked, “what is more likely to happen, over the long run?” In the case of the one-game playoff, for instance, it’s arguable that Britton, who has one of the best statistical records in baseball, would have been less likely to give up the Encarnacion home run than the pitcher who did (Ubaldo Jimenez, 2016 ERA 5.44) was. Although Jimenez, for example, was not a bad ground ball pitcher in 2015—he had a 1.85 ground ball to fly ball ratio that season, putting him 27th out of 78 pitchers, according to SportingCharts.com—his ratio was dwarfed by Britton’s: as J.J. Cooper observed just this past month for Baseball America, Britton is “quite simply the greatest ground ball pitcher we’ve seen in the modern, stat-heavy era.” (Britton faced 254 batters in 2016; only nine of them got an extra-base hit.) Who would you rather have on the mound in a situation where a home run (which is obviously a fly ball) can end not only the game, but the season?

What Bernoulli’s (and Cardano’s) Law of Large Numbers does is define what we mean by the concept of “the odds”: that is, the outcome that is most likely to happen. Bucking the odds is, in short, precisely the crime Buck Showalter committed during the game with the Blue Jays: as Deadspin’s Petchesky wrote, “the concept that you maximize value and win expectancy by using your best pitcher in the highest-leverage situations is not ‘wisdom’—it is fact.” As Petchesky goes on to say, “the odds are the odds”—and Showalter, by putting all those other pitchers on the mound instead of Britton, ignored those odds.

As it happens, “bucking the odds” is just what the Democratic Party may be doing by adopting Hillary Clinton as their nominee instead of Bernie Sanders. As a number of articles this past spring noted, at that time many polls were saying that Sanders had better odds of beating Donald Trump than Clinton did. In May, Linda Qiu and Louis Jacobson noted in The Daily Beast that Sanders was making the argument that “he’s a better nominee for November because he polls better than Clinton in head-to-head matches against” Trump. (“Right now,” Sanders said then on the television show Meet the Press, “in every major poll … we are defeating Trump, often by big numbers, and always at a larger margin than Secretary Clinton is.”) At the time, the evidence suggested Sanders was right: “Out of eight polls,” Qiu and Jacobson wrote, “Sanders beat Trump eight times, and Clinton beat Trump seven out of eight times,” and “in each case, Sanders’s lead against Trump was larger.” (In fact, usually by double digits.) But, as everyone now knows, that argument did not help to secure the nomination for Sanders: in July, Clinton became the Democratic nominee.

To some, that ought to be the end of the story: Sanders tried, and (as Showalter said after his game), “it didn’t work out.” Many—including Sanders himself—have urged fellow Democrats to put the past behind them and work towards Clinton’s election. Yet, that’s an odd position to take regarding a campaign that, above everything, was about the importance of principle over personality. Sanders’ campaign was, if anything, about the same point enunciated by William Jennings Bryan at the 1896 Democratic National Convention, in the famous “Cross of Gold” speech: the notion that the “Democratic idea … has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.” Bryan’s idea, as ought to be clear, has certain links to Bernoulli’s Law of Large Numbers—among them, the notion that it’s what happens most often (or to the most people) that matters.

That’s why, after all, Bryan insisted that the Democratic Party “cannot serve plutocracy and at the same time defend the rights of the masses.” Similarly—as Michael Kazin of Georgetown University described the point in May for The Daily Beast—Sanders’ campaign fought for a party “that would benefit working families.” (A point that suggests, it might be noted, that the election of Sanders’ opponent, Clinton, would benefit others.) Over the course of the twentieth century, in other words, the Democratic Party stood for the majority against the depredations of the minority—or, to put it another way, for the principle that you play the odds, not hunches.

“No past candidate comes close to Clinton,” wrote FiveThirtyEight’s Harry Enten last May, “in terms of engendering strong dislike a little more than six months before the election.” It’s a reality that suggests, in the first place, that the Democratic Party is hardly attempting to maximize its win expectancy. But beyond those pragmatic concerns regarding her electability, Clinton’s candidacy represents—from the particulars of her policy positions, her statements to Wall Street financial types, and the existence of electoral irregularities in Iowa and elsewhere—a repudiation, not simply of Bernie Sanders the person, but of the very idea of the importance of the majority that the Democratic Party once proposed and defended. What that means is that, even were Hillary Clinton to be elected in November, the Democratic Party—and those it supposedly represents—will have lost the election.

But then, you probably don’t need any statistics to know that.

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
—Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
—“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George R. R. Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not necessarily so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel Prize-laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
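
What “no memory” means can be illustrated with a quick sketch, a generic random-walk simulation of my own rather than Fama’s data or method: build a price series whose daily changes are pure noise, then check whether yesterday’s change tells you anything about today’s.

```python
import random

random.seed(0)

# A "price series" whose daily changes are pure noise, which is how the
# random-walk theory says real price changes behave.
changes = [random.gauss(0, 1) for _ in range(100_000)]

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Does yesterday's change predict today's? For a memoryless series, no.
print(f"Lag-1 correlation of daily changes: {correlation(changes[1:], changes[:-1]):+.4f}")
```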

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether the managers of mutual funds can actually do what they claim they can do, and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s a fairly close idea to the one explored, for instance, by psychologist Amos Tversky and his co-authors in the paper “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
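
Fama’s point about chance is easy to check with a sketch of my own (in the spirit of, though nothing like the methodology of, “Luck versus Skill”): give a few thousand imaginary funds a coin flip’s chance of beating the market each year, and see how many compile “impressive” decades anyway.

```python
import random

random.seed(1)
N_FUNDS, N_YEARS = 5_000, 10

# Each imaginary "fund" beats the market in a given year with probability 0.5 --
# by construction, no fund here has any skill at all.
records = [sum(random.random() < 0.5 for _ in range(N_YEARS)) for _ in range(N_FUNDS)]

perfect = sum(1 for wins in records if wins == N_YEARS)
strong = sum(1 for wins in records if wins >= 8)

print(f"Skill-free funds that beat the market all 10 years: {perfect}")
print(f"Skill-free funds that beat it in 8 or more years:   {strong}")
```

On a typical run, a handful of these utterly skill-free funds go ten for ten, and a couple hundred post records good enough to advertise.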

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools begins to converge with that of their seemingly liberal opponents in departments of the humanities.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here is itself something like the claim that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County v. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly,” wrote Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter if anyone votes or not—which is to say that to very many smart, and supposedly “liberal” or “leftist” people, the very notion that voting has any kind of “intrinsic value” to it at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

Human Events

Opposing the notion of minority rule, [Huger] argued that a majority was less likely to be wrong than a minority, and if this was not so “then republicanism must be a dangerous fallacy, and the sooner we return to the ‘divine rights’ of the kings the better.”
—Manisha Sinha. The Counterrevolution of Slavery. 2001.

Note that agreement [concordantia] is particularly required on matters of faith and the greater the agreement the more infallible the judgment.
—Nicholas of Cusa. Catholic Concordance. 1432.

 

It’s perhaps an irony, though a mild one, that on the weekend of the celebration of American independence the most notable sporting events are the Tour de France, soccer’s European Cup, and Wimbledon—maybe all the more so now that Great Britain has voted to “Brexit,” i.e., to leave the European Union. A number of observers have explained that vote as at least somewhat analogous to the Donald Trump movement in the United States, in the first place because Donald himself called the “Brexit” decision a “great victory” at a press conference the day after the vote, and a few days later “praised the vote as a decision by British voters to ‘take back control of their economy, politics and borders,’” as The Guardian said Thursday. To the mainstream press, the similarity between the “Brexit” vote and Donald Trump’s candidacy is that—as Emmanuel Macron, France’s thirty-eight-year-old economy minister, said about “Brexit”—both are a conflict between those “content with globalization” and those “who cannot find” themselves within the new order. Both Trump and “Brexiters” are, in other words, depicted as returns of—as Andrew Solomon put it in The New Yorker on Tuesday—“the Luddite spirit that led to the presumed arson at Albion Mills, in 1791, when angry millers attacked the automation that might leave them unemployed.” “Trumpettes” and “Brexiters” are depicted as wholly out of touch and stuck in the past—yet, as a contrast between Wimbledon and the Tour de France may help illuminate, it could also be argued that it is, in fact, precisely those who make sneering references both to Trump and to “Brexiters” who represent, not a smiling future, but instead the return of the ancien régime.

Before he outright won the Republican nomination through the primary process, after all, Trump repeatedly complained that the G.O.P.’s process was “rigged”: that is, it was hopelessly stacked against an outsider candidate. And while a great deal of what Trump has said over the past year has been, at best, ridiculously exaggerated when not simply an outright lie, in that contention Trump has a great deal of evidence: as Josh Barro put it in Business Insider (not exactly a lefty rag) back in April, “the Republican nominating rules are designed to ignore the will of the voters.” Barro cites the example of Colorado’s Republican Party, which decided in 2015 “not to hold any presidential preference vote”—a decision that, as Barro rightly says, “took power away from regular voters and handed it to the sort of activists who would be likely … [to] participat[e] in party conventions.” And Colorado’s G.O.P. was hardly alone in making, quite literally, anti-democratic decisions about the presidential nominating process over the past year: North Dakota also decided against a primary or even a caucus, while Pennsylvania did hold a vote—but voters could only choose uncommitted delegates; i.e., without knowing to whom those delegates owed allegiance.

Still, as Mother Jones—which is a lefty rag—observed, also back in April, this is an argument that can as easily be worked against Trump as for him: in New York’s primary, for instance, “Kasich and Cruz won 40 percent of the vote but only 4 percent of the delegates,” while on Super Tuesday Trump’s opponents “won 66 percent of the vote but only 57 percent of the delegates.” And so on. Other critics have similarly attacked the details of Trump’s arguments: many, as Mother Jones’ Kevin Drum says, have argued that the details of the Republican nominating process could just as easily be used as evidence for “the way the Republican establishment is so obviously in the bag for Trump.” Those critics do have a point: investigating the whole process is exceedingly difficult because the trees overwhelm any sense of the forest.

Yet, such critics often use those details (about which they are right) to make an illicit turn. They have attacked, directly or indirectly, the premise of the point Trump tried to make in an op-ed piece in The Wall Street Journal this spring that—as Nate Silver paraphrased it on FiveThirtyEight—“the candidate who gets the most votes should be the Republican nominee.” In other words, they swerve from the particulars of this year’s primary process toward attacking the very premises of democratic government itself: by disputing this or that particular they obscure the question of whether the will of the voters should be respected. Hence, even if Trump’s whole campaign is, at best, wholly misdirected, the point he is making—a point very similar to the one made by Bernie Sanders’ campaign—is not something to be treated lightly. But that, it seems, is something that elites are, despite their protests, skirting close to doing: which is to say that, despite the accusations directed at Trump that he is leading a fascistic movement, it is actually arguable that it is Trump’s supposedly “liberal” opponents who are far closer to authoritarianism than he is, because they have no respect for the sanctity of the ballot. Or, to put it another way, that it is Trump’s voters—and, by extension, those for “Brexit”—who have the cosmopolitan view, while it is his opponents who are, in fact, the provincialists.

The point, I think, can be seen by comparing the scoring rules between Wimbledon and the Tour de France. The Tour, as may or may not be known, is determined by the rider who—as Patrick Redford at Deadspin put it the other day in “The Casual Observer’s Guide to the Tour de France”—has “the lowest time over all 21 stages.” Although the race takes place over nearly the whole nation of France, and several more besides, and covers over 2,000 miles from the cobblestone flats of Flanders to the heights of the Alps and down to the streets of Paris, still the basic premise of the race is clear even to the youngest child: ride faster and win. Explaining Wimbledon however—like explaining the rules of the G.O.P. nominating process (or, for that matter, the Democratic nominating process)—is not so simple.

As I have noted before in this space, the rules of tennis are not like cycling’s—or even those of such familiar sports as baseball or football. In baseball and most other sports, including the Tour, the “score is cumulative throughout the contest … and whoever has the most points at the end wins,” as Allen Fox once described the difference between tennis and other games in Tennis magazine. But tennis is not like that: “The basic element of tennis scoring is the point,” as mathematician G. Edgar Parker has noted, “but tennis matches are won by the player who wins two out of three (or three out of five) sets.” Sets are themselves accumulations of games, not points. During each game, points are won and lost until one player has not only won at least four points but also holds a two-point advantage over the other; games go back and forth until one player has that advantage. Then, at the set level, one player must have won at least six games (though whether that player also needs a two-game advantage to win the set varies at some professional tournaments). Finally, a player needs to win at least two, and—as at Wimbledon—sometimes three, sets to take a match.

If the Tour de France were won like Wimbledon is won, in other words, the winner would not be determined by whoever had the lowest overall time: the winner would be, at least at first analysis, whoever won the greatest number of stages. But even that comparison would be too simple: if the Tour winner were determined by the winner of the most stages, that would imply that each stage were equal—and it is certainly not the case that all points, games, or sets in tennis are equal. “If you reach game point and win it,” as Fox writes in Tennis, “you get the entire game while your opponent gets nothing—all of the points he or she won in the game are eliminated.” The points in one game don’t carry over to the next game, and previous games don’t carry over to the next set. That means that some points, some games, and some sets are more important than others: “game point,” “set point,” and “match point” are common tennis terms that mean “the point whose winner may determine the winner of the larger category.” If tennis’ type of scoring system were applied to the Tour, in other words, the winner of the Tour would not be the overall fastest cyclist, nor even the cyclist who won the most stages, but the cyclist who won certain stages, say—or perhaps even certain moments within stages.
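
To make the contrast concrete, here is a small simulation of my own, using simplified tennis rules (best of three sets, games to four points win by two, sets to six games win by two, no tiebreaks). It plays out evenly matched points and counts how often the player who wins more total points nonetheless loses the match; under the Tour’s cumulative scoring, that outcome is impossible by definition.

```python
import random

def play_game(p, totals):
    """One game: first to 4 points, win by 2. Returns the winner (0 or 1)."""
    pts = [0, 0]
    while True:
        w = 0 if random.random() < p else 1
        pts[w] += 1
        totals[w] += 1
        if pts[w] >= 4 and pts[w] - pts[1 - w] >= 2:
            return w

def play_set(p, totals):
    """One set: first to 6 games, win by 2 (no tiebreak, for simplicity)."""
    games = [0, 0]
    while True:
        w = play_game(p, totals)
        games[w] += 1
        if games[w] >= 6 and games[w] - games[1 - w] >= 2:
            return w

def play_match(p):
    """Best of three sets. Returns (match winner, total points won by each player)."""
    totals, sets = [0, 0], [0, 0]
    while max(sets) < 2:
        sets[play_set(p, totals)] += 1
    return (0 if sets[0] == 2 else 1), totals

random.seed(7)
trials, upsets = 10_000, 0
for _ in range(trials):
    winner, totals = play_match(p=0.5)   # two evenly matched players
    if totals[winner] < totals[1 - winner]:
        upsets += 1

print(f"Matches won by the player with fewer total points: {upsets / trials:.1%}")
```

Even between perfectly even players, a noticeable fraction of matches go to the player who won fewer points overall: the structure of the scoring, not the raw count, picks the winner.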

Despite all the Sturm und Drang surrounding Donald Trump’s candidacy, then—the outright racism and sexism, the various moronic-seeming remarks concerning American foreign policy, not to mention the insistence that walls are more necessary to the American future than they even are to squash—there is one point about which he, like Bernie Sanders in the Democratic camp, is making cogent sense: the current process for selecting an American president is much more like a tennis match than it is like a bicycle race. After all, as Hendrik Hertzberg of The New Yorker once pointed out, Americans don’t elect their presidents “the same way we elect everybody else—by adding up all the voters’ votes and giving the job to the candidate who gets the most.” Instead, Americans have (as Ed Grabianowski puts it on the HowStuffWorks website) “a whole bunch of separate state elections.” And while both of these comments were directed at the presidential general election, which depends on the Electoral College, they apply equally, if not more so, to the primary process: at least in the general election in November, each state’s rules are more or less the same.

The truth, and hence power, of Trump’s critique of this process can be measured by the vitriol of the response to it. A number of people, on both sides of the political aisle, have attacked Trump (and Sanders) for drawing attention to the fashion in which the American political process works: when Trump pointed out that Colorado had refused to hold a primary, for instance, Reince Priebus, chairman of the Republican National Committee, tweeted (i.e., posted on Twitter, for those of you unfamiliar with, you know, the future) “Nomination process known for a year + beyond. It’s the responsibility of the campaigns to understand it. Complaints now? Give us all a break.” In other words, Priebus was implying that the rules were the same for all candidates, and widely known beforehand—so why the whining? Many on the Democratic side said the same about Sanders: as Albert Hunt put it in the Chicago Tribune back in April, both Trump and Sanders ought to shut up about the process: “Both [campaigns’] charges [about the process] are specious,” because “nobody’s rules have changed since the candidates entered the fray.” But as both Trump and Sanders’ campaigns have rightly pointed out, the rules of a contest do matter beyond the bare fact that they are the same for every candidate: if the Tour de France were conducted under rules similar to tennis’, it seems likely that the race would be won by very different kinds of winners—sprinters, perhaps, who could husband their stamina until just the right moment. It’s very difficult not to think that the criticisms of Trump and Sanders as “whiners” are disingenuous—an obvious attempt to protect a process that transparently benefits insiders.

Trump’s supporters, like Sanders’ and those who voted “Leave” in the “Brexit” referendum, have been labeled as “losers”—and while, to those who consider themselves “winners,” the thoughts of losers are (as the obnoxious phrase has it) like the thoughts of sheep to wolves, it seems indisputably true that the voters behind all three campaigns represent those for whom the global capitalism of the last several decades hasn’t worked so well. As Matt O’Brien noted in The Washington Post a few days ago, “the working class in rich countries have seen their real, or inflation-adjusted, incomes flatline or even fall since the Berlin Wall came down and they were forced to compete with all the Chinese, Indian, and Indonesian workers entering the global economy.” (Real economists would dispute O’Brien’s chronology here: at least in the United States, wages have not risen since the early 1970s, which far predates free trade agreements like the North American Free Trade Agreement signed by Bill Clinton in the 1990s. But O’Brien’s larger argument, as wrongheaded as it is in detail, instructively illustrates the muddleheadedness of the conventional wisdom.) In this fashion, O’Brien writes, “the West’s triumphant globalism” has “fuel[ed] a nationalist backlash”: “In the United States it’s Trump, in France it’s the National Front, in Germany it’s the Alternative for Germany, and, yes, in Britain it’s the Brexiters.” What’s astonishing about this, however, is that—despite not having, as so, so many articles decrying their horribleness have said, a middle-class sense of decorum—all of these movements stand for a principle that, you would think, the “intellectuals” of the world would applaud: the right of the people themselves to determine their own destiny.

It is they, in other words, who literally embody the principle enunciated by the opening words of the United States Constitution, “We the People,” or enunciated by the founding document of the French Revolution (which, by the by, began on a tennis court), The Declaration of the Rights of Man and the Citizen, whose first article holds that “Men are born and remain free and equal in rights.” In the world of this Declaration, in short, each person has—like every stage of the Tour de France, and unlike each point played during Wimbledon—precisely the same value. It’s a principle that Americans, especially, ought to remember this weekend of all weekends—a weekend that celebrates another Declaration, one whose opening lines read “We hold these truths to be self-evident, that all men are created equal.” Americans, in other words, despite the success of individual Americans like John McEnroe or Pete Sampras or Chris Evert, are not tennis players, as Donald Trump (and Bernie Sanders) have rightfully pointed out over the past year—tennis being a sport, as one history of the game has put it, “so clearly aligned with both The Church and Aristocracy.” Americans, as the first modern nation in the world, ought instead to be associated with a sport unknown to the ancients and unthinkable without modern technology.

We are bicycle riders.

To Hell Or Connacht

And I looked, and behold a pale horse, and his name that sat on him was Death,
and Hell followed with him.
—Revelation 6:8.

In republics, it is a fundamental principle, that the majority govern, and that the minority comply with the general voice.
—Oliver Ellsworth.

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

 

“They are at the present eating, or have already eaten, their seed potatoes and seed corn, to preserve life,” goes the sentence from the Proceedings of the Mansion House Committee for the Relief of Distress in Ireland During the Months of January and February, 1880. Not many are aware, but the Great Hunger of 1845-52 (or, in Gaelic, an Gorta Mór) was not the last Irish potato famine; by the autumn of 1879, the crop had failed and starvation loomed for thousands—especially in the west of the country, in Connacht. (Connacht being, Oliver Cromwell had said two centuries before, one of the two places Irish Catholics could go if they did not wish to be murdered by Cromwell’s New Model Army—the other being Hell.) But this sentence records the worst fear: it was because the Irish had been driven to eat their seed potatoes in the winter of 1846 that the famine that had been brewing since 1845 became the Great Hunger in the year known as “Black ’47”: although what was planted in the spring of 1847 largely survived to harvest, there hadn’t been enough seeds to plant in the first place. Hence, everyone who heard that sentence from the Mansion House Committee in 1880 knew what it meant: the coming of that rider on a pale horse spoken of in Revelation. It’s a history lesson I bring up to suggest that “eating your seed corn” also explains the coming of another specter that many American intellectuals may have assumed lay in the past: Donald Trump.

There are two hypotheses about the rise of Donald Trump to the presumptive candidacy of the Republican Party. The first—that of many Hillary Clinton Democrats—is that Trump is tapping into a reservoir of racism that is simply endemic to the United States: in this view, “’murika” is simply a giant cesspool of hate waiting to break out at any time. But that theory is an ahistorical one: why should a Trump-like candidate—that is, one sustained by racism—only become the presumptive nominee of a major party now? “Since the 1970s support for public and political forms of discrimination has shrunk significantly” says one voice on the subject (Anna Maria Barry-Jester’s, surveying many sociological studies for FiveThirtyEight). If the studies Barry-Jester highlights are correct, and yet levels of racism remain precisely the same as in the past, then that must mean that the American public is not getting less racist—but instead merely getting better at hiding it. That then raises the question: if the level of racism still remains as high as in the past, why wasn’t it enough to propel, say, former Alabama governor George Wallace to a major party nomination in 1968 or 1972? In other words, why Trump now, rather than George Wallace then? Explaining Trump’s rise as due to racism has a timing problem: it’s difficult to think that, somehow, racism has become more acceptable today than it was forty or more years ago.

Yet, if not racism, then what is fueling Trump? Journalist and gadfly Thomas Frank suggests an answer: the rise of Donald Trump is not the result of racism, but of efforts to fight racism—or rather, the American Left’s focus on racism at the expense of economics. To wildly overgeneralize: Trump is not former Republican political operative Karl Rove’s fault, but rather Fannie Lou Hamer’s.

Although little known today, Fannie Lou Hamer was once famous as a leader of the Mississippi Freedom Democratic Party’s delegation to the 1964 Democratic Party Convention. On arrival Hamer addressed the convention’s Credentials Committee to protest the seating of Mississippi’s “regular” Democratic delegation on the grounds that the official, all-white slate of delegates had only become “official” by suppressing the votes of the state’s 400,000 black people—a claim that had the disadvantageous quality, from the national party’s perspective, of being true. What’s worse, when the “practical men” sent to negotiate with her—especially Senator Hubert Humphrey of Minnesota—asked her to withdraw her challenge on the pragmatic grounds that her protest risked losing the entire South for President Lyndon Johnson in the upcoming general election, Hamer refused: “Senator Humphrey,” Hamer rebuked him, “I’m going to pray to Jesus for you.” With that, Hamer rejected the hardheaded, practical calculus that informed Humphrey’s logic; in doing so, she set an example that many on the American Left have followed since—an example that, to follow Frank’s argument, has provoked the rise of Trump.

Trump’s success, Frank explains, is not the result of cynical Republican electoral exploitation, but instead of policy choices made by Democrats: choices that not only suggest that cynical Republican choices can be matched by cynical Democratic ones, but that Democrats have abandoned the key philosophical tenet of their party’s very existence. First, though, the specific policy choices: one of them is the “austerity diet” Jimmy Carter (and Carter’s “hand-picked” Federal Reserve chairman, Paul Volcker) chose for the nation’s economic policy at the end of the 1970s. In his latest book, Listen, Liberal: or, Whatever Happened to the Party of the People?, Frank says that policy “was spectacularly punishing to the ordinary working people who had once made up the Democratic base”—an assertion Frank is hardly alone in making, for as the decidedly non-radical Fortune magazine has observed, “Volcker’s policies … helped push the country into recession in 1980, and the unemployment rate jumped from 6% in August 1979, the month of Volcker’s appointment, to 7.8% in 1980 (and peaked at 10.8% in 1982).” And Carter was hardly the last Democratic president to make economic choices contrary to the interests of what might appear to be the Democratic Party’s natural constituency.

The next Democratic president, Bill Clinton, after all put the North American Free Trade Agreement through Congress: an agreement that had the effect (as the Economic Policy Institute has observed) of “undercut[ting] the bargaining power of American workers” because it established “the principle that U.S. corporations could relocate production elsewhere and sell back into the United States.” Hence, “[a]s soon as NAFTA became law,” the EPI’s Jeff Faux wrote in 2013, “corporate managers began telling their workers that their companies intended to move to Mexico unless the workers lowered the cost of their labor.” (The agreement also allowed companies to extort tax breaks from state and municipal coffers by threatening to move, with the attendant long-term costs—including a diminished ability to fight for workers.) In this way, Frank says, NAFTA “ensure[d] that labor would be too weak to organize workers from that point forward”—and NAFTA has also become the basis for other trade agreements, such as the Trans-Pacific Partnership backed by another Democratic administration: Barack Obama’s.

That these economic policies have had the effects described is, perhaps, debatable; what is not debatable is that economic inequality has grown in the United States. As the Pew Research Center reports, “in real terms the average wage peaked more than 40 years ago,” and as Christopher Ingraham of the Washington Post reported last year, “the fact that the top 20 percent of earners rake in over 50 percent of the total earnings in any given year” has become something of a cliché in policy circles. Ingraham also reports that “the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America”—a “number [that] is considerably higher than in other rich nations.” These figures could be multiplied; they represent a reality that even Republican candidates other than Trump—who was, for the most part, the only candidate besides Bernie Sanders to address these issues—began to respond to during the primary season of the past year.

“Today,” said Senator and then-presidential candidate Ted Cruz in January—repeating the findings of University of California, Berkeley economist Emmanuel Saez—“the top 1 percent earn a higher share of our national income than any year since 1928.” While the causes of these realities are still argued over—Cruz, for instance, sought to blame, absurdly, Obamacare—it is nevertheless inarguable that the country has been radically remade economically over recent decades.

That remaking has troubling potential consequences, if they have not already become real. One of them has been ably described by Nobel Prize-winning economist Joseph Stiglitz: “as more money becomes concentrated at the top, aggregate demand goes into a decline.” What Stiglitz means is this: say you’re Mitt Romney, who had a 2010 income of $21.7 million. “Even if Romney chose to live a much more indulgent lifestyle” than he actually does, Stiglitz says, “he would only spend a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But take the same amount of money and divide it among 500 people,” Stiglitz continues, “say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” That expenditure represents economic activity: as should be self-evident, though apparently it isn’t to many people, a lot more will happen economically if 500 people split twenty million dollars than if one person keeps all of it.
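Stiglitz’s arithmetic can be put in back-of-the-envelope form. The short Python sketch below is only an illustration: the one figure taken from the text is Romney’s $21.7 million income, while the two “marginal propensity to consume” values are assumptions invented for the example, not numbers from Stiglitz.

```python
# A minimal sketch of Stiglitz's aggregate-demand point (illustrative only).
# The MPC values below are assumptions for the example, not Stiglitz's figures.

income_total = 21_700_000        # Romney's reported 2010 income, in dollars

# Scenario A: the whole sum goes to one very high earner who spends only a
# small fraction of it (assumed 10 cents on the dollar).
mpc_top = 0.10

# Scenario B: the same sum is split into 500 jobs paying $43,400 apiece, held
# by households assumed to spend 95 cents of every dollar they earn.
jobs = 500
wage = income_total / jobs       # 43,400 dollars per job
mpc_middle = 0.95

spending_concentrated = income_total * mpc_top
spending_dispersed = jobs * wage * mpc_middle

print(f"Wage per job:           ${wage:,.0f}")
print(f"Spending, concentrated: ${spending_concentrated:,.0f}")
print(f"Spending, dispersed:    ${spending_dispersed:,.0f}")
```

Under those assumed propensities, the dispersed scenario generates roughly ten times the consumer spending of the concentrated one, which is all Stiglitz’s point requires.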

Stiglitz, of course, did not invent this argument: it used to be bedrock for Democrats. As Frank points out, the same theory was advanced by the Democratic Party’s presidential nominee—in 1896. As expressed by William Jennings Bryan at the 1896 Democratic Convention, the Democratic idea is, or used to be, this one:

There are two ideas of government. There are those who believe that, if you will only legislate to make the well-to-do prosperous, their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous, their prosperity will find its way up through every class which rests upon them.

To many, if not most, members of the Democratic Party today, this argument is simply assumed to fit squarely with Fannie Lou Hamer’s claim for representation at the 1964 Democratic Convention: on the one hand, economic justice for working people; on the other, political justice for those oppressed on account of their race. But there are good reasons to think that Hamer’s claim for political representation at the 1964 convention puts Bryan’s (and Stiglitz’s) argument for a broadly based economic policy in grave doubt—which might explain why so many of today’s campus activists against racism, sexism, or homophobia look askance at any suggestion that they also demonstrate against neoliberal economic policies, and hence, perhaps, why the United States has become more and more unequal in recent decades.

After all, the focus of much of the Democratic Party has been on Fannie Lou Hamer’s question about minority representation, rather than majority representation. A story told recently by Elizabeth Kolbert of The New Yorker in a review of a book entitled Ratf**ked: The True Story Behind the Secret Plan to Steal America’s Democracy, by David Daley, demonstrates the point. In 1990, it seems, Lee Atwater—famous as the mastermind behind George H.W. Bush’s presidential victory in 1988 and then-chairman of the Republican National Committee—made an offer to the Congressional Black Caucus, as a result of which the “R.N.C. [Republican National Committee] and the Congressional Black Caucus joined forces for the creation of more majority-black districts”—that is, districts “drawn so as to concentrate, or ‘pack,’ African-American voters.” The bargain had an effect: Kolbert mentions the state of Georgia, which in 1990 had nine Democratic congressmen—eight of whom were white. “In 1994,” however, Kolbert notes, “the state sent three African-Americans to Congress”—while “only one white Democrat got elected.” 1994 was, of course, also the year of Newt Gingrich’s “Contract With America” and the great wave of Republican congressmen—the year Democrats lost control of the House for the first time since 1952.

The deal made by the Congressional Black Caucus, in other words—implicitly allowed by the Democratic Party’s leadership—enacted what Fannie Lou Hamer had demanded in 1964: a demand that was also a rejection of the political principle known as “majoritarianism,” the right of majorities to rule. It’s a point that has been noticed by those who follow such things: recently, some academics have begun to argue against the very idea of “majority rule.” Stephen Macedo—perhaps significantly, the Laurance S. Rockefeller Professor of Politics and the University Center for Human Values at Princeton University—recently wrote, for instance, that majoritarianism “lacks legitimacy if majorities oppress minorities and flaunt their rights.” Hence, Macedo argues, “we should stop talking about ‘majoritarianism’ as a plausible characterization of a political system that we would recommend,” on the grounds that “the basic principle of democracy” is not that it protects the interests of the majority but instead something he calls “political equality.” Macedo asks, in other words: “why should we regard majority rule as morally special?” Why should it matter, that is, if one candidate gets more votes than another? Some academics, in short, have begun to wonder publicly why we should even bother holding elections.

What is so odd about Macedo’s arguments to a student of American history, of course, is that he is merely echoing certain much older ones—like this, from the nineteenth century: “It is not an uncommon impression, that the government of the United States is a government based simply on population; that numbers are its only element, and a numerical majority its only controlling power,” this authority says. But that idea is false, the writer goes on: “No opinion can be more erroneous.” The United States is, instead, “a government of the concurrent majority,” and “population, mere numbers,” are, “strictly speaking, excluded.” It’s an argument that, as it is spelled out, might sound plausible; after all, the structure of the government of the United States does have a number of features that are, “strictly speaking,” not determined solely by population: the Senate and the Supreme Court, for example, are pieces of the federal government that are, in conception and execution, nearly entirely opposed to the notion of “numerical majority.” (“By reference to the one person, one vote standard,” Frances E. Lee and Bruce I. Oppenheimer observe in Sizing Up the Senate: The Unequal Consequences of Equal Representation, “the Senate is the most malapportioned legislature in the world.”) In that sense, one could easily imagine Macedo having written the above, or these ideas being articulated by Fannie Lou Hamer or the Congressional Black Caucus.

Except, of course, for one thing: the quotes in the paragraph above were taken from the writings of John Calhoun, the former Senator, Secretary of War, and Vice President of the United States—which, in one sense, might seem to lend the weight of authority to Macedo’s argument against majoritarianism. At least, it might but for a couple of other facts about Calhoun: not only did he personally own dozens of slaves (at his plantation, Fort Hill, now the site of Clemson University), he is also well known as the most formidable intellectual defender of slavery in American history. His most cunning arguments, after all—laid out in such works as the Fort Hill Address and the Disquisition on Government—are at once against majoritarianism and in favor of slavery; indeed, to Calhoun the two positions were much the same. (Historians like Paul Finkelman of the University of Tulsa have argued as much: the anti-majoritarian features of the U.S. Constitution, they say, were originally designed to protect slavery—a claim that might sound outré except that it was made at the time of the Constitutional Convention itself by none other than James Madison.) Which is to say that Stephen Macedo and Fannie Lou Hamer have chosen a very odd intellectual partner—while the deal between the RNC and the Congressional Black Caucus demonstrates that such arguments are having very real effects.

What’s really significant about Macedo’s “insights” regarding majoritarianism, in short, is that, coming from the holder of a named chair at one of the most prestigious universities in the world, his work shows just how a concern, real or feigned, for minority rights can be used to undermine the very idea of democracy itself. It’s in this way that activists against racism, sexism, homophobia, and other pet campus causes can effectively function as what is often called (in a phrase usually attributed to Lenin) “useful idiots”: by dismantling the agreements that have underwritten the existence of a large and prosperous proportion of the population for nearly a century, “intellectuals” like Macedo may be helping to dismantle the American middle class economically. If the opinion of the majority of the people does not matter politically, after all, it is hard to see how their opinion could matter in any other way—which is to say that arguments like Macedo’s are thus a kind of intellectual strip-mining operation: they consume the intellectual resources of the past in order to provide a short-term gain for a small number of operators.

They are, in sum, eating their seed corn.

In that sense, despite the puzzled brows of many of the country’s talking heads, the Trump phenomenon makes a certain kind of potted sense—even if it appears utterly irrational to the elite. Although his supporters might not express themselves in terms that those with elite educations find palatable—a complaint that, significantly, suggests a return to the Victorian codes of “breeding” and “politesse” that elites have always used against what used to be called the “lower classes”—there really may be an ideological link between a Democratic Party governed by those with elite educations and the current economic reality faced by the majority of Americans. That reality may be the result of the elites’ loss of faith in what even Calhoun called the “fundamental principle, the great cardinal maxim” of democratic government: “that the people are the source of all power.” So, while the organs of elite opinion like The New York Times or other outlets might continue to crank out stories decrying the “irrationality” of Donald Trump’s supporters, it may be that Trump’s fans (Trumpettes?) are in fact in possession of a deeper rationality than that of those criticizing them. What their votes for Trump may signal is a recognition that, if the Republican Party has become the party of the truly rich, “the 1%,” the Democratic Party has ceased to be the party of the majority and has instead become the party of the professional class: the “10%.” Or, as Frank says, in swapping Republicans and Democrats the nation “merely exchange[s] one elite for another: a cadre of business types for a collection of high-achieving professionals.” Both, after all, disbelieve in the virtues of democracy; what may (or may not) be surprising, while also deeply terrifying, is that supposed “intellectuals” have apparently come to accept that there is no difference between Connacht—and the Other Place.

 

 

**Update: In the hours since I first posted this, I’ve come across two different recent articles in magazines with “New York” in their titles: in one, for The New Yorker, Jill Lepore—a professor of history at Harvard in her day job—argues that “more democracy is very often less,” while the other, written by Andrew Sullivan for New York magazine, is entitled “Democracies End When They Are Too Democratic.” Draw conclusions where you will.

She Won’t Survive

I will survive.
—Gloria Gaynor.

I had no idea that it was that easy to get the attention of—much less annoy the hell out of—a national talking head for a semi-big-time news network like MSNBC, but apparently in the brand-new world of social media such things are easily possible. Such, at least, is what I learned when I happened to object to that network’s Joan Walsh and her cheerleading for Hillary Clinton on Twitter the weekend before the New Hampshire primary. I won’t get into the particulars—the lowlight was probably when she got taken to task by a city councilman from New Rochelle, New York for attempting to use race as a bludgeon (the councilman is black, and seems like a decent guy)—but suffice it to say that many supporters of Hillary Clinton seem to think that she deserves the Democratic nomination on the grounds that she has climbed through all sorts of slime to get to the position she is in now. From one perspective, of course, that might be a good reason to think she should not be elected—people who crawl through slime tend to get dirty—but as Glenn Greenwald, the journalist who broke the Edward Snowden story, pointed out the other day, logic does not appear to be a strong suit in Hillaryland. What Greenwald’s story suggests is that the difference between Clinton supporters and Sanders supporters is that the latter understand the logical error known as “survivorship bias,” and the former don’t. The trouble for Hillary Clinton’s campaign is that without such an understanding, there seems little reason to vote Democratic at all.

That would seem to make “survivorship bias” a significant concept—but what is it? Essentially, survivorship bias is the magical belief that something successful possesses a special quality that caused its success, rather than considering that the success may simply be the result of coincidence. Nassim Nicholas Taleb advances an example of how survivorship bias can skew our assessments of the world in his book Fooled By Randomness: imagine, he writes there, 10,000 money managers whose annual results are decided by a coin flip. If the flips are conducted for five years it could be expected, simply out of “pure luck,” that 313 of those managers would have “winning” records—that is, in each of five straight years, those 300-odd managers would have won their coin flip. One can only imagine how they might feel about themselves; one suspects that at least a few of them would write books describing their “successful methods” for “beating Wall Street.” (And perhaps one or two of those books would themselves be successful, increasing the self-esteem of those people even more.) In other words, imagine Donald Trump.
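Taleb’s arithmetic is easy to check: with a fair coin the chance of five straight wins is one in 2^5 = 32, so about 10,000 / 32 ≈ 313 managers should “survive” on luck alone. The short Python simulation below is my own sketch of that thought experiment, not code from Taleb.

```python
import random

# A minimal simulation of Taleb's thought experiment: 10,000 money managers
# whose yearly "performance" is a fair coin flip. Roughly 10,000 / 2**5 = 312.5
# of them should post five straight winning years by pure luck.

random.seed(0)  # any seed; the exact count varies from run to run

managers = 10_000
years = 5

survivors = sum(
    all(random.random() < 0.5 for _ in range(years))  # five winning flips in a row
    for _ in range(managers)
)

print(f"Managers with five straight 'winning' years: {survivors}")
print(f"Expected by chance alone: {managers / 2 ** years:.1f}")
```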

It’s the notion of survivorship bias that is the very basis for science—the thought that maybe the eye of newt wasn’t what made little Timmy well, but instead that he happened to get well on his own. And it’s also something that, according to Glenn Greenwald, Hillary Clinton’s supporters in the U.S. media simply don’t understand—which is how we have gotten the narrative known as “Bernie Bros.” Greenwald explained the point recently in a piece for The Intercept, the publication he co-founded after being one of the first journalists to meet Edward Snowden, the former intelligence contractor who blew the whistle on the National Security Agency’s spying on Americans.

What Greenwald calls the “‘Bernie Bros’ narrative” has, he says, two components: the first the conviction that Hillary Clinton has not received universal acclaim because of sexism, and the second that “Sanders supporters are uniquely abusive and misogynistic in their online behavior.” The goal of this game, Greenwald goes on to say, is to “delegitimize all critics of Hillary Clinton by accusing them of … sexism, thus distracting attention away from Clinton’s policy views, funding, and political history.” Greenwald’s insight is that, while many in the mainstream media have taken the idea seriously (or at least claimed to), in fact being subjected to “a torrent of intense anger and vile abuse” is simply a function of being on the Internet. “There are,” as Greenwald points out, “literally no polarizing views one can advocate online … that will not subject” a person to such screeds. In other words, pro-Clinton journalists are attracting hateful messages from supposed Sanders supporters because they are on the Internet, not because Sanders’ supporters are somehow less polite than partisans of other candidates: “If you spend your time praising Clinton and/or criticizing Sanders,” Greenwald observes, “of course you personally will experience more anger and vitriol from Sanders supporters than Clinton supporters.” As Greenwald points out, Sanders’ women supporters—and boy, there seem to be a lot of them—also have unpleasant experiences online. But because—surprise surprise—Hillary is the “establishment” candidate, very few of them have the pulpit of the national media from which to parade their hurt feelings.

What the whole episode demonstrates, I think—though Greenwald does not draw this out—is precisely what this primary season is about: Clinton’s version of the Democratic Party has very little interest in considering the role of chance in how our lives turn out. That’s a pretty stunning renunciation for a party that once denounced a Republican candidate (as Jim Hightower said about George H. W. Bush during the 1988 Democratic Convention) for being “born on third base and think[ing] he hit a triple.” An understanding of survivorship bias, in other words, has been the intellectual link between the Democratic Party’s reliance on science and its interest in society’s less fortunate: it’s what makes the Democratic Party the party whose members are not only far more concerned about the welfare of their fellow citizens, but also far more likely to believe the word of climate scientists. To misunderstand—or worse, deliberately misunderstand—the concept of survivorship bias is a far stronger argument against a Clinton presidency than virtually any listing of the campaign contributions she has accepted from various dubious sources. Which is saying something, because Clinton’s financial dealings with such charming fellows as the gentlemen at Goldman Sachs and the sheiks of Saudi Arabia are pretty alarming—and alarmingly plentiful.

Yet maybe it’s a sign of hope that the American electorate is rejecting Hillary Clinton: for all her claims to be a “survivor,” she doesn’t really understand what it means.

Art Will Not Save You—And Neither Will Stanley

 

But I was lucky, and that, I believe, made all the difference.
—Stanley Fish, “My Life Report,” New York Times, 31 October 2011.

 

Pfc. Bowe Bergdahl, United States Army, is the subject of the new season of Serial, the podcast from the producers of This American Life that tells “One story. Week by week.” as the advertising tagline has it. The show took up Bergdahl because of what he chose to do on the night of 30 June 2009: as Serial reports, that night he walked off his “small outpost in eastern Afghanistan and into hostile territory,” where he was captured by Taliban guerrillas and held prisoner for nearly five years. Bergdahl’s actions have led some to call him a deserter and a traitor; as a result of leaving his unit he faces a life sentence from a military court. But the line Bergdahl crossed when he stepped beyond the concertina wire and into the desert of Paktika Province was far greater than the line between a loyal soldier and a criminal. When Bowe Bergdahl wandered into the wilderness, he also crossed the line between the sciences and the humanities—and demonstrated why the political hopes some people place in the humanities are not only illogical, but arguably hold up actual political progress.

Bergdahl can be said to have crossed that line because the outcome of his military trial will likely turn on the intent behind his act: in legal terms, this is known as mens rea, Latin for “guilty mind.” Intent is one of the necessary components prosecutors must prove to convict Bergdahl of desertion: according to Article 85 of the Uniform Code of Military Justice, to be convicted of desertion Bergdahl must be shown to have had the “intent to remain away” from his unit “permanently.” It’s this matter of intent that marks the difference between the humanities and the sciences.

The old devil, Stanley Fish, once demonstrated that border in an essay in the New York Times designed to explain what it is that literary critics, and other people who engage in interpretation, do, and how it differs from other lines of work:

Suppose you’re looking at a rock formation and see in it what seems to be the word ‘help.’ You look more closely and decide that, no, what you’re seeing is an effect of erosion, random marks that just happen to resemble an English word. The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

To put it another way, matters of interpretation concern agents who possess intent: any other kind of discussion is of no concern to the humanities. Conversely, the sciences can be said to concern all those things not produced by an agent, or, more precisely, not produced by an agent intending to convey something to some other agent.

It’s a line that seems clear enough, even in what might be marginal cases: when a beaver builds a dam, surely he intends to build that dam, but it also seems inarguable that the beaver intends nothing more to be conveyed to other beavers than, “here is my dam.” More questionable cases might be when, say, a bird or some other animal performs a “mating dance”: surely the bird intends his beloved to respond, but still it would seem ludicrous to put a scholar of, say, Jane Austen’s novels to the task of recovering the bird’s message. That would certainly be overkill.

Yes, yes, you will impatiently say, but what has that to do with Bergdahl? The answer, I think, is this: if Bergdahl’s lawyer had a scientific, instead of a humanistic, sort of mind, he might ask how many soldiers were stationed in Afghanistan during Bergdahl’s time there, and how many served overall. The reason a scientist would ask that question about, say, a flock of birds he was studying is that, to a scientist, the overall numbers matter. The reason they matter demonstrates not only what the difference between science and the humanities is, but also why the faith some place in the political utility of the humanities is ridiculous.

The reason the overall numbers of the flock would matter to a scientist is that sample size matters: a behavior exhibited by one bird in a flock of millions is probably not as significant as the same behavior exhibited by one bird in a flock of twelve. As Nassim Taleb put it in Fooled By Randomness, how impressive it is if a monkey has managed to type a verbatim copy of the Iliad “Depends On The Number of Monkeys.” “If there are five monkeys in the game,” Taleb elaborates, “I would be rather impressed with the Iliad writer”—but if, on the other hand, “there are a billion to the power one billion monkeys I would be less impressed.” Or, to put it in another context, the “greater the number of businessmen, the greater the likelihood of one of them performing in a stellar manner just by luck.” What matters to a scientist, in other words, isn’t just what a given bird does—it’s how big the flock was in the first place.
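Taleb’s point can be stated as a one-line probability: if each member of a group independently has some tiny chance p of producing a striking result by luck alone, the chance that at least one member produces it is 1 − (1 − p)^N, which climbs toward certainty as the group grows. The Python sketch below is my own illustration, not Taleb’s; the value of p is an arbitrary assumption chosen only to show the shape of the curve.

```python
# How the odds of at least one "lucky outlier" grow with group size.
# The per-individual probability p is an arbitrary assumption for illustration.

p = 1e-6  # assumed chance that any single individual does the rare thing by luck

for group_size in (5, 1_000, 65_000, 1_000_000, 10_000_000):
    p_at_least_one = 1 - (1 - p) ** group_size
    print(f"group of {group_size:>10,}: "
          f"P(at least one lucky outlier) = {p_at_least_one:.3g}")
```

The 65,000 row mirrors the deployment figure cited below: under this assumed p, even a one-in-a-million behavior has roughly a six percent chance of turning up somewhere in a group that large.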

To a lawyer, of course, none of that would be significant: the court that tries Bergdahl will not view the question as relevant to whether he is guilty of the crime of desertion. That is because, in a discipline concerned with interpretation, such a question has been ruled out of court, as we say, before the court even meets: to ask how many birds were in the flock when one of them behaved strangely is to have ceased, a priori, to consider that bird as an agent, since the question implies that what matters is the role of chance rather than any intent on the bird’s part. Any lawyer who brought up the fact that Bergdahl was the only one out of so many thousands of soldiers to have done what he did, without taking up the matter of Bergdahl’s intent, would not be acting as a lawyer.

By the way, in case you’re wondering: roughly 65,000 American soldiers were in Afghanistan by early October of 2009, as part of the buildup ordered by President Barack Obama shortly after taking office. That number, according to a contemporary story in The Washington Post, would be “more than double the number there when Bush left office”—which is to say that when Bergdahl left his tiny outpost at the end of June that year, the military was in the midst of a massive expansion of the force. The sample size, in Taleb’s terms, was growing rapidly at the time—with what effect on Bergdahl’s situation, if any, I await enlightenment.

Whether any of that matters to Bergdahl’s story—in Serial or anywhere else—remains to be seen; as a legal matter it would be very surprising if any military lawyer brought it up. What that, in turn, suggests is that the caution with which Stanley Fish has greeted claims by many in the profession of literary study for the political application of their work is thoroughly justified: “when you get to the end” of the road many within the humanities have been traveling at least since the 1960s or ’70s, Fish has remarked, “nothing will have changed except the answers you might give to some traditional questions in philosophy and literary theory.” It’s a warning whose force may even now be reaching its peak, as the nation realizes that the great political story of our time has not been the minor-league struggles within academia but rather the story of how a small number of monkeys have managed to seize a huge proportion of the planet’s total wealth: as Bernie Sanders, the political candidate, tweeted recently in a claim rated “True” by Politifact, “the Walton family of Walmart own more wealth than the bottom 40 percent of America.”

In that story, the intent of the monkeys hardly matters.