Home of the Brave


audentes Fortuna iuvat.
—Virgil. The Aeneid. Book X, line 284.

American prosecutors in the last few decades have—as Patrick Radden Keefe recently noted in The New Yorker—come more and more to use “a type of deal, known as a deferred-prosecution agreement, in which the company would acknowledge wrongdoing, pay a fine, and pledge to improve its corporate culture,” rather than prosecuting either the company’s officers or the company itself for criminal acts. According to prosecutors, it seems, this is because “the problem with convicting a company was that it could have ‘collateral consequences’ that would be borne by employees, shareholders, and other innocent parties.” In other words, taking action against a corporation could put it out of business. Yet, declining to prosecute because of the possible consequences is an odd position for a prosecutor to take: “Normally a grand jury will indict a ham sandwich if a prosecutor asks it to,” former Virginia governor Chuck Robb, once a prosecutor himself, famously remarked. Prosecutors, in other words, aren’t usually known for their sensitivity to circumstance—so why the change in recent decades? The answer may lie, perhaps, in a knowledge of the child-raising practices of the ancient European nobility—and the life of Galileo Galilei.

“In those days,” begins one of the stories described by Nicola Clarke in The Muslim Conquest of Iberia: Medieval Arabic Narratives, “the custom existed amongst the Goths that the sons and daughters of the nobles were brought up in the king’s palace.” Clarke is describing the tradition of “fosterage”: the custom, among the medieval aristocracy, of sending one’s children to be raised by another noble family while raising another such family’s children in turn. “It is not clear what … was the motive” for fostering children, according to Laurence Ginnell’s The Brehon Laws (from 1894), “but its practice, whether designed for that end or not, helped materially to strengthen the natural ties of kinship and sympathy which bound the chief and clan or the flaith and sept together.” In Ginnell’s telling, “a stronger affection oftentimes sprang up between persons standing in those relations than that between immediate relatives by birth.” One of the purposes of fostering, in other words, was to decrease the risk of conflict by ensuring that members of the ruling classes grew up together: it’s a lot harder to go to war, the thinking apparently went, when you are thinking of your potential opponent as the kid who skinned his knee that one time, instead of the fearsome leader of a gang of killers.

Perhaps one explanation for why prosecutors appear willing to go easier on corporate criminals these days than in the past is that they share “natural ties”: they attended the same schools as those they are authorized to prosecute. Although statistics on the matter appear lacking, there’s reason to think that future white collar criminals and their (potential) prosecutors share the same “old school” ties more and more these days: that just as American law schools have seized a monopoly on the production of lawyers—Robert H. Jackson, who served from 1941 to 1954, was the last American Supreme Court Justice without a law degree—so too have America’s “selective” colleges seized a monopoly on the production of CEOs. “Just over 10% of the highest paid CEOs in America came from the Ivy League plus MIT and Stanford,” a Forbes article noted in 2012—a percentage higher than at any previous moment in American history. In other words, just as lawyers all come from the same schools these days, so too does upper management—producing the sorts of “natural ties” that not only lead to rethinking that cattle raid on your neighbor’s castle, but perhaps also any thoughts of subjecting Jamie Dimon to a “perp walk.” Yet as plausible an explanation as that might seem, it’s even more satisfying when it is combined with an incident in the life of the great astronomer.

In 1621, a Catholic priest named Scipio Chiaramonti published a book about a supernova that had occurred in 1572; the exploded star (as we now know it to have been) had been visible during daylight for several weeks in that year. The question for astronomers in that pre-Copernican time was whether the star had been one of the “fixed stars,” and thus existed beyond the moon, or whether it was closer to the earth than the moon: since—as James Franklin, from whose The Science of Conjecture: Evidence and Probability Before Pascal I take this account, notes—it was “the doctrine of the Aristotelians that there could be no change beyond the sphere of the moon,” a nova that far away would refute their theory. Chiaramonti’s book claimed that the measurements of 12 astronomers showed that the object was not as far as the moon—but Galileo pointed out that Chiaramonti’s work had, in effect, “cherrypicked”: he did not use all the data actually available, but merely used that which supported his thesis. Galileo’s argument, oddly enough, can also be applied to why American prosecutors aren’t pursuing financial crimes.

The point is supplied, Keefe tells us, by James Comey: the recent head of the FBI fired by President Trump. Before moving to Washington, Comey was U.S. Attorney for the Southern District of New York, in which position he once called—Keefe informs us—some of the attorneys working for the Justice Department members of “the Chickenshit Club.” Comey’s point was that while a “perfect record of convictions and guilty pleas might signal simply that you’re a crackerjack attorney,” it might instead “mean that you’re taking only those cases you’re sure you’ll win.” To Comey’s mind, the marvelous winning records of those working under him were not a guarantee of those attorneys’ ability, but instead a sign that his office was not pursuing enough cases. In other words, just as Chiaramonti chose only those data points that confirmed his thesis, the attorneys in Comey’s office were choosing only those cases they were sure they would win.

Yet, assuming that the decrease in financial prosecutions is due to prosecutorial choice, why are prosecutors more likely, when it comes to financial crimes, to “cherrypick” today than they were a few decades ago? Keefe says this may be because “people who go to law school are risk-averse types”—but that only raises the question of why today’s lawyers are more risk-averse than their predecessors. The answer, at least according to a former Yale professor, may be that they are more likely to cherrypick because they are the product of cherrypicking.

Such at least was the answer William Deresiewicz arrived at in 2014’s “Don’t Send Your Kid to the Ivy League”—the most downloaded article in the history of The New Republic. “Our system of elite education manufactures young people who are smart and talented and driven, yes,” Deresiewicz wrote there—but, he added, it also produces students who are “anxious, timid, and lost.” Such students, the former Yale professor wrote, had “little intellectual curiosity and a stunted sense of purpose”; they are “great at what they’re doing but [have] no idea why they’re doing it.” The question Deresiewicz wanted answered was, of course, why the students he saw in New Haven were this way; the answer he hit upon was that those students were themselves the product of a cherrypicking process.

“So extreme are the admissions standards now,” Deresiewicz wrote in “Don’t,” “that kids who manage to get into elite colleges have, by definition, never experienced anything but success.” The “result,” he concluded, “is a violent aversion to risk.” Deresiewicz, in other words, is thinking systematically: it isn’t so much that prosecutors and white collar criminals share the same background that has made prosecutions so much less likely, but rather the fact that prosecutors have experienced a certain kind of winnowing process in the course of achieving their positions in life.

To most people, after all, scarcity equals value: Harvard admits very few people, therefore Harvard must provide an excellent education. But what the Chiaramonti episode brings to light is the notion that what makes Harvard so great may not be that it provides an excellent education, but instead that it admits such “excellent” people in the first place: Harvard’s notably long list of excellent alumni may not be a result of what’s happening in the classroom, but rather of what’s happening in the admissions office. The usual understanding of education, in other words, takes the significant action of education to be what happens inside the school—but what Galileo’s statistical perspective says, instead, is that the important play may be what happens before the students even arrive.

What Deresiewicz’ work suggests, in turn, is that this very process may itself have unseen effects: efforts to make Harvard (along with other schools) more “exclusive”—and thus, ostensibly, provide a better education—may actually be making students worse off than they might otherwise be. Furthermore, Keefe’s work intimates that this insidious effect might not be limited to education; it may be causing invisible ripples throughout American society—ripples that may not be limited to the criminal justice system. If the same effects Keefe says are affecting lawyers are also affecting the future CEOs those prosecutors are not prosecuting, then perhaps CEOs are becoming less likely to pursue the legitimate risks that are the economic lifeblood of the nation—and more susceptible to pursuing illegitimate risks, of the sort that once landed CEOs in non-pinstriped suits. Accordingly, perhaps that old conservative bumper sticker really does have something to teach American academics—it’s just that what both sides ought perhaps to realize is that this relationship may be, at bottom, a mathematical one. That relation, you ask?

The “land of the free” because of “the brave.”

Good’n’Plenty

Literature as a pure art approaches the nature of pure science.
—“The Scientist of Letters: Obituary of James Joyce.” The New Republic 20 January 1941.

 

James Joyce, in the doorway of Shakespeare & Co., sometime in the 1920s.

In 1910 the twenty-sixth president of the United States, Theodore Roosevelt, offered what he called a “Square Deal” to the American people—a deal that, the president explained, consisted of two components: “equality of opportunity” and “reward for equally good service.” Not only would everyone be given a chance, but, also—and as we shall see, more importantly—pay would be proportional to effort. More than a century later, however—according to University of Illinois at Chicago professor of English Walter Benn Michaels—the second of Roosevelt’s components has been forgotten: “the supposed left,” Michaels asserted in 2006, “has turned into something like the human resources department of the right.” What Michaels meant was that, these days, “the model of social justice is not that the rich don’t make as much and the poor make more,” it is instead “that the rich [can] make whatever they make, [so long as] an appropriate percentage of them are minorities or women.” In contemporary America, he means, only the first goal of Roosevelt’s “Square Deal” matters. Yet, why should Michaels’ “supposed left” have abandoned Roosevelt’s second goal? An answer may be found in a seminal 1961 article by political scientists Peter B. Clark and James Q. Wilson called “Incentive Systems: A Theory of Organizations”—an article that, though it nowhere mentions the man, could have been entitled “The Charlie Wilson Problem.”

Charles “Engine Charlie” Wilson was president of General Motors during World War II and into the early 1950s; General Motors, which produced tanks, bombers, and ammunition during the war, may have been as central to the war effort as any other American company—which is to say, given the fact that the United States was the “Arsenal of Democracy,” quite a lot. (“Without American trucks, we wouldn’t have had anything to pull our artillery with,” commented Field Marshal Georgy Zhukov, who led the Red Army into Berlin.) Hence, it may not be a surprise that Dwight Eisenhower—the commander of the Allied war in western Europe—selected Wilson to be his Secretary of Defense after being elected president in 1952, a choice that led to the confirmation hearings that made Wilson famous—and the possible subject of “Incentive Systems.”

That’s because of something Wilson said during those hearings: when asked whether he could make a decision, as Secretary of Defense, that would be adverse for General Motors, Wilson replied that he could not imagine such a situation, “because for years I thought that what was good for our country was good for General Motors, and vice versa.” Wilson’s words revealed how sometimes people within an organization can forget about the larger purposes of the organization—or what could be called “the Charlie Wilson problem.” What Charlie Wilson could not imagine, however, was precisely what James Wilson (and his co-writer Peter Clark) wrote about in “Incentive Systems”: how the interests of an organization might not always align with those of society.

Not that Clark and Wilson made some startling discovery; in one sense “Incentive Systems” is simply a gloss on one of Adam Smith’s famous remarks in The Wealth of Nations: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” What set their effort apart, however, was the specificity with which they attacked the problem: the thesis of “Incentive Systems” is that “much of the internal and external activity of organizations may be explained by understanding their incentive systems.” In short, in order to understand how an organization’s purposes might differ from those of the larger society, a good place to look is how it rewards its members.

In the particular case of Engine Charlie, the issue was the more than $2.5 million in General Motors stock he possessed at the time of his appointment as Secretary of Defense—even as General Motors remained one of the largest defense contractors. Depending on the calculation, that figure would be nearly ten times as large today—and, given contemporary trends in corporate pay for executives, would surely be even greater than that: the “ratio of CEO-to-worker pay has increased 1,000 percent since 1950,” according to a 2013 Bloomberg report. But “Incentive Systems” casts a broader net than “merely” financial rewards.

The essay constructs “three broad categories” of incentives: “material, solidary, and purposive.” That is, not only pay and other financial sorts of reward of the type possessed by Charlie Wilson, but also two other sorts: internal rewards within the organization itself—and rewards concerning the organization’s stated intent, or purpose, in society at large. Although Adam Smith’s pointed comment raised the issue of the conflict of material interest between organizations and society two centuries ago, what “Incentive Systems” thereby raises is the possibility that, even in organizations without the material purposes of a General Motors, internal rewards can conflict with external ones:

At first, members may derive satisfaction from coming together for the purpose of achieving a stated end; later they may derive equal or greater satisfaction from simply maintaining an organization that provides them with office, prestige, power, sociability, income, or a sense of identity.

Although Wealth of Nations, and Engine Charlie, provide examples of how material rewards can disrupt the straightforward relationship between members, organizations, and society, “Incentive Systems” suggests that non-material rewards can be similarly disruptive.

If so, Clark and Wilson’s view may circle back around to illuminate a rather pressing current problem within the United States concerning material rewards: one indicated by the fact that the pay of CEOs of large companies like General Motors has increased so greatly against that of workers. It’s a story that was usefully summarized by New York University economist Edward N. Wolff in 1998: “In the 1970s,” Wolff wrote then, “the level of wealth inequality in the United States was comparable to that of other developed industrialized countries”—but by the 1980s “the United States had become the most unequal society in terms of wealth among the advanced industrial nations.” Statistics compiled by the Census Bureau and the Federal Reserve, Nobel Prize-winning economist Paul Krugman pointed out in 2014, “have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980.” “Before then,” Krugman says, “families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole”—but afterwards, he continued, “the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.” Books like Thomas Piketty’s Capital in the Twenty-first Century have further documented this broad economic picture: according to the Institute for Policy Studies, for example, the richest 20 Americans now have more wealth than the poorest 50% of Americans—more than 150 million people.

How, though, can “Incentive Systems” shine a light on this large-scale movement? Aside from the fact that, apparently, the essay predicts precisely the future we now inhabit—the “motivational trends considered here,” Wilson and Clark write, “suggests gradual movement toward a society in which factors such as social status, sociability, and ‘fun’ control the character of organizations, while organized efforts to achieve either substantive purposes or wealth for its own sake diminish”—it also suggests just why the traditional sources of opposition to economic power have, largely, been silent in recent decades. The economic turmoil of the nineteenth century, after all, became the Populist movement; that of the 1930s became the Popular Front. Meanwhile, although it has sometimes been claimed that Occupy Wall Street, and more lately Bernie Sanders’ primary run, have been contemporary analogs of those previous movements, both have—I suspect anyway—had nowhere near the kind of impact of their predecessors, and for reasons suggested by “Incentive Systems.”

What “Incentive Systems” can do, in other words, is explain the problem raised by Walter Benn Michaels: the question of why, to many young would-be political activists in the United States, it’s problems of racial and other forms of discrimination that appear the most pressing—and not the economic vise that has been squeezing the majority of Americans of all races and creeds for the past several decades. (Witness the growth of the Black Lives Matter movement, for instance—which frames the issue of policing the inner city as a matter of black and white, rather than dollars and cents.) The signature move of this crowd has, for some time, been to accuse their opponents of (as one example of this school has put it) “crude economic reductionism”—or, of thinking “that the real working class only cares about the size of its paychecks.” Of course, as Michaels says in The Trouble With Diversity, the flip side of that argument is to say that this school attempts to fit all problems into the Procrustean bed of “diversity,” or more simply, holds “that racial identity trumps class,” rather than the other way around. But why do those activists need to insist on the point so strongly?

“Some people,” Jill Lepore wrote not long ago in The New Yorker about economic inequality, “make arguments by telling stories; other people make arguments by counting things.” Understanding inequality, as should be obvious, requires—at a minimum—a grasp of the most basic terms of mathematics: it requires knowing, for instance, that a 1,000 percent increase is quite a lot. But more significantly, it also requires understanding something about how rewards—incentives—operate in society: a “something” that, as Nobel Prize-winning economist Joseph Stiglitz explained not long ago, is “ironclad.” In the Columbia University professor’s view (and it is more-or-less the view of the profession), there is a fundamental law that governs the matter—which in turn requires understanding what a scientific law is, and how one operates, and so forth.

That law, in this case, the Columbia University professor says, is this: “as more money becomes concentrated at the top, aggregate demand goes into decline.” Take, Stiglitz says, the example of Mitt Romney’s 2010 income of $21.7 million: Romney can “only spend a fraction of that sum in a typical year to support himself and his wife.” But, he continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all the money gets spent.” The more evenly money is spread around, in other words, the more efficiently, and hence productively, the American economy works—for everyone, not just some people. Conversely, the more total income is captured by fewer people, the less efficient the economy becomes, resulting in less productivity—and ultimately a poorer America. But understanding Stiglitz’ argument requires a kind of knowledge possessed by counters, not storytellers—which, in the light of “Incentive Systems,” illustrates just why it’s discrimination, and not inequality, that is the issue of choice for political activists today.
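To make the arithmetic concrete, here is a minimal sketch in Python—not Stiglitz’ own model, just an illustration assuming (arbitrarily) that no single household spends more than $500,000 a year—comparing how much of the same $21.7 million gets spent when one person holds it versus five hundred:

```python
# A toy illustration of the aggregate-demand point (assumptions mine, not
# Stiglitz's): each household spends its income up to a fixed annual cap,
# so concentrated income leaves most of the money unspent.

ANNUAL_SPENDING_CAP = 500_000  # assumed ceiling on one household's yearly spending

def total_spending(incomes, cap=ANNUAL_SPENDING_CAP):
    """Total spent if each household spends min(income, cap)."""
    return sum(min(income, cap) for income in incomes)

concentrated = [21_700_000]      # Romney's reported 2010 income, held by one household
spread_out   = [43_400] * 500    # the same total, divided into 500 jobs

print(total_spending(concentrated))  # 500,000    -> only a fraction circulates
print(total_spending(spread_out))    # 21,700,000 -> nearly all of it circulates
```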

At least since the 1960s, that is, the center of political energy on university campuses has usually been the departments that “tell stories,” not the departments that “count things”: as the late American philosopher Richard Rorty remarked, “departments of English literature are now the left-most departments of the universities.” But, as Clark and Wilson might point out (following Adam Smith), the departments that “tell stories” have internal interests that may not be identical to the interests of the public: as mentioned, understanding Joseph Stiglitz’ point requires understanding science and mathematics—and as Bruce Robbins (a colleague of Stiglitz at Columbia University, only in the English department) has remarked, “the critique of Enlightenment rationality is what English departments were founded on.” In other words, the internal incentive systems of English departments and other storytelling disciplines reward their members for not understanding the tools that are the only means of understanding the foremost political issue of the present—an issue that can only be sorted out by “counting things.”

As viewed through the prism of “Incentive Systems,” then, the lesson taught by the past few decades of American life might well be that elevating “storytelling” disciplines above “counting” disciplines has had the (utterly predictable) consequence that economic matters—a field constituted by arguments constructed about “counting things”—have been largely vacated as a possible field of political contest. And if politics consists of telling stories only, that means that “counting things” is understood as apolitical—a view that is surely, as students of deconstruction have always said, laden with politics. In that sense, then, the deal struck by Americans with themselves in the past several decades hardly seems fair. Or, to use an older vocabulary:

Square.

Size Matters

That men would die was a matter of necessity; which men would die, though, was a matter of circumstance, and Yossarian was willing to be the victim of anything but circumstance.
—Joseph Heller. Catch-22.
I do not pretend to understand the moral universe; the arc is a long one, my eye reaches but little ways; I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. And from what I see I am sure it bends towards justice.
Things refuse to be mismanaged long.
—Theodore Parker. “Of Justice and the Conscience.”

 

The Casino at Monte Carlo

 

 

Once, wrote the baseball statistician Bill James, there was “a time when Americans” were such “an honest, trusting people” that they actually had “an unhealthy faith in the validity of statistical evidence”—but by the time James wrote in 1985, things had gone so far the other way that “the intellectually lazy [had] adopted the position that so long as something was stated as a statistic it was probably false.” Today, in no small part because of James’ work, that is likely no longer as true as it once was, but nevertheless the news has not spread to many corners of academia: as University of Virginia historian Sophia Rosenfeld remarked in 2012, in many departments it’s still fairly common to hear it asserted—for example—that all “universal notions are actually forms of ideology,” and that “there is no such thing as universal common sense.” Usually such assertions are followed by a claim for their political utility—but in reality widespread ignorance of statistical effects is what allowed Donald Trump to be elected: although the media spent much of the presidential campaign focused on questions like the size of Donald Trump’s … hands, the size that actually mattered in determining the election was a statistical concept called sample size.

First discussed by the mathematician Jacob Bernoulli in his 1713 book, Ars Conjectandi, sample size is the idea that “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” Admittedly, it might not appear like much of an observation: as Bernoulli himself acknowledged, even “the most stupid person, all by himself and without any preliminary instruction,” knows that “the more such observations are taken into account, the less is the danger of straying from the goal.” But Bernoulli’s remark is the very basis of science: as an article in the journal Nature put the point in 2013, “a study with low statistical power”—that is, one with few observations—“has a reduced chance of detecting a true effect.” Sample sizes need to be large enough to be able to eliminate chance as a possible factor.
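Bernoulli’s point is easy to see in a small simulation—the sketch below is my own illustration, not anything drawn from Ars Conjectandi or the Nature article, and it simply shows that estimates made from small samples stray further from a known true value than estimates made from large ones:

```python
# Simulating Bernoulli's observation: estimates from small samples stray
# further from the true value than estimates from large ones.

import random

random.seed(0)
TRUE_PROPORTION = 0.6  # the "goal" we are trying to estimate

def one_estimate(sample_size):
    """Estimate the proportion from a single random sample."""
    hits = sum(random.random() < TRUE_PROPORTION for _ in range(sample_size))
    return hits / sample_size

for n in (5, 50, 500, 5000):
    estimates = [one_estimate(n) for _ in range(1_000)]
    worst = max(abs(e - TRUE_PROPORTION) for e in estimates)
    print(f"sample size {n:5d}: worst miss across 1,000 trials = {worst:.3f}")
```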

If that isn’t known it’s possible to go seriously astray: consider an example drawn from the work of Israeli psychologists Amos Tversky (MacArthur “genius” grant winner) and (Nobel Prize-winning) Daniel Kahneman—a study “of two toys infants will prefer.” Let’s say that in the course of research our investigator finds that, of “the first five infants studied, four have shown a preference for the same toy.” To most psychologists, the two say, this would be enough for the researcher to conclude that she’s on to something—but in fact, the two write, a “quick computation” shows that “the probability of a result as extreme as the one obtained” being due simply to chance “is as high as 3/8.” The scientist might be inclined to think, in other words, that she has learned something—but in fact her result has a 37.5 percent chance of being due to nothing at all.
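That “quick computation” is easy to reproduce: assuming each of the five infants picks one of the two toys at random and independently, one need only count how many of the 32 equally likely outcomes have at least four infants agreeing.

```python
# The "quick computation": if five infants each pick one of two toys at
# random, how often do at least four of them pick the same toy?

from math import comb

n = 5
# Outcomes where 4 or 5 infants pick toy A, plus those where 4 or 5 pick toy B
# (equivalently, where toy A is picked 0, 1, 4, or 5 times).
extreme = sum(comb(n, k) for k in (0, 1, 4, 5))
probability = extreme / 2 ** n

print(extreme, "of", 2 ** n, "outcomes")  # 12 of 32
print(probability)                        # 0.375 -- i.e. 3/8, or 37.5 percent
```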

Yet when we turn from science to politics, what we find is that an American presidential election is like a study that draws grand conclusions from five babies. Instead of being one big sample—as a direct popular national election would be—presidential elections are broken up into fifty state-level elections: the Electoral College system. What that means is that American presidential elections maximize the role of chance, not minimize it.

The laws of statistics, in other words, predict that chance will play a large role in presidential elections—and as it happens, Tim Meko, Denise Lu and Lazaro Gamio reported for The Washington Post three days after the election that “Trump won the presidency with razor-thin margins in swing states.” “This election was effectively decided,” the trio went on to say, “by 107,000 people”—in an election in which more than 120 million votes were cast, that means the election was decided by less than a tenth of one percent of the total votes. Trump won Pennsylvania by less than 70,000 votes of nearly 6 million, Wisconsin by less than 30,000 of just under three million, and Michigan by less than 11,000 out of 4.5 million: the first two by roughly one percent of the total vote each—and Michigan by a whopping 0.2 percent! Just to give you an idea of how insignificant these numbers are by comparison with the total vote cast, according to the Michigan Department of Transportation it’s possible that a thousand people in that state’s five largest counties were involved in car crashes—which isn’t even to mention people who just decided to stay home because they couldn’t find a babysitter.
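Those percentages are simple to check against the rounded figures quoted above (the article’s round numbers, that is, not official certified counts):

```python
# Recomputing the margins above from the rounded vote totals quoted in the text.

margins = {
    "Pennsylvania": (70_000, 6_000_000),
    "Wisconsin":    (30_000, 3_000_000),
    "Michigan":     (11_000, 4_500_000),
}

for state, (margin, total) in margins.items():
    print(f"{state}: about {margin / total:.2%} of that state's vote")

# The headline figure: 107,000 decisive votes out of more than 120 million cast.
print(f"Nationally: about {107_000 / 120_000_000:.3%} of all votes cast")
```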

Trump owes his election, in short, to a system that is vulnerable to chance because it is constructed to turn a large sample (the total number of American voters) into small samples (the fifty states). Science tells us that small sample sizes increase the risk of random chance playing a role; American presidential elections use a smaller sample size than they could; and, like several other presidential elections, the 2016 election did not go as predicted. Donald Trump could, in other words, be called “His Accidency” with even greater justice than John Tyler—the first vice-president to be promoted due to the death of his boss in office—ever was. Yet, why isn’t that point being made more publicly?

According to John Cassidy of The New Yorker, it’s because Americans haven’t “been schooled in how to think in probabilistic terms.” But the reason why that’s true—and Cassidy is essentially making the same point Bill James did in 1985, though more delicately—is, I think, highly damaging to many of Clinton’s biggest fans: the answer is that they’ve made it that way. It’s the disciplines where many of Clinton’s most vocal supporters make their home, in other words, that are most directly opposed to the type of probabilistic thinking that’s required to see the flaws in the Electoral College system.

As Stanford literary scholar Franco Moretti once observed, the “United States is the country of close reading”: the disciplines dealing with matters of politics, history, and the law within the American system have, in fact, more or less been explicitly constructed to prevent importing knowledge of the laws of chance into them. Law schools, for example, use what’s called the “case method,” in which a single case is used to stand in for an entire body of law: a point indicated by the first textbook to use this method, Christopher Langdell’s A Selection of Cases on the Law of Contracts. Other disciplines, such as history, are similar: as Emory University’s Mark Bauerlein has written, many such disciplines depend for their very livelihood upon “affirming that an incisive reading of a single text or event is sufficient to illustrate a theoretical or historical generality.” In other words, it’s the very basis of the humanities to reject the concept of sample size.

What’s particularly disturbing about this point is that, as Joe Pinsker documented in The Atlantic last year, the humanities attract a wealthier student pool than other disciplines—which is to say that the humanities tend to be populated by students and faculty with a direct interest in maintaining obscurity around the interaction between the laws of chance and the Electoral College. That isn’t to say there is necessarily a connection between the architecture of presidential elections and the fact that—as Geoffrey Harpham, former president and director of the National Humanities Center, has observed—“the modern concept of the humanities” (that is, as a set of disciplines distinct from the sciences) “is truly native only to the United States, where the term acquired a meaning and a peculiar cultural force that it does not have elsewhere.” But it does perhaps explain just why many in the national media have been silent regarding that design in the month after the election.

Still, as many in the humanities like to say, it is possible to think that the current American university and political structure is “socially constructed,” or in other words could be constructed differently. The American division between the sciences and the humanities is not the only way to organize knowledge: as the editors of the massive volumes of The Literary and Cultural Reception of Darwin in Europe pointed out in 2014, “one has to bear in mind that the opposition of natural sciences … and humanities … does not apply to the nineteenth century.” If the opposition that we today find so omnipresent did not exist then, it might not be necessary now. Hence, if the choice of the American people is between getting a real say in the affairs of government (and there’s very good reason to think they don’t have one), and letting a bunch of rich yahoos spend time in their early twenties getting drunk, reading The Great Gatsby, and talking about their terrible childhoods … well, I know which side I’m on. But perhaps more significantly, although I would not expect it to happen tomorrow, still, given the laws of sample size and the prospect of eternity, I know how I’d bet.

Or, as another sharp operator who’d read his Bernoulli once put the point:

“The arc of the moral universe is long, but it bends towards justice.”

 

All Even

George, I am an old man, and most people hate me.
But I don’t like them either so that makes it all even.

—Mr. Potter. It’s A Wonderful Life (1946).

 


Because someone I love had never seen it, I rewatched Frank Capra’s 1946 It’s A Wonderful Life the other night. To most people, the film is the story of how one George Bailey comes to perceive the value of helping “a few people get outta [the] slums” of the “scurvy little spider” of the film, the wealthy banker Mr. Potter—but to some viewers, what’s important about the inhabitants of Bedford Falls isn’t that they are poor by comparison to Potter, but instead that some of them are black: the man who plays the piano in the background of one scene, for instance, or Annie, the Bailey family’s maid. To Vincent Nobile, a professor of history at Rancho Cucamonga’s Chaffey College, the casting of these supporting roles not only demonstrates that “Capra showed no indication he could perceive blacks in roles outside the servant class,” but also that Potter is the story’s villain not because he is a slumlord, but because he calls the people Bailey helps “garlic eaters” (http://historynewsnetwork.org/article/1846). What makes Potter evil, in other words, isn’t his “cold monetary self-interest,” but the fact that he’s “bigoted”: to this historian, Capra’s film isn’t the heartwarming story of how Americans banded together to stop a minority (rich people) from wrecking things, but instead the horrifying tragedy of how Americans banded together to stop a minority (black people) from wrecking things. Unfortunately, there are two problems with that view—problems that can be summarized by referring to the program for a football game that took place five years before the release of Capra’s classic: the Army-Navy game of 29 November, 1941.

Played at Philadelphia’s Franklin Memorial Stadium (once home of the NFL’s Philadelphia Eagles and still the home of the Penn Relays, one of track and field’s premier events), Navy won the contest 14-6; according to Vintage College Football Programs & Collectibles (collectable.wordpress.com [sic]), the program for that game contains 212 pages. On page 180 of that program there is a remarkable photograph. It is of the USS Arizona, the second and last of the American “Pennsylvania” class of super-dreadnought battleships—a ship meant to be, according to the New York Times of 13 July 1913, “the world’s biggest and most powerful, both offensively and defensively, superdreadnought ever constructed.” The last line of the photograph’s caption reads as follows:

It is significant that despite the claims of air enthusiasts, no battleship has yet been sunk by bombs.

Slightly more than a week later, of course, on a clear bright Sunday morning just after 8:06 Hawaiian time, the hull of the great ship would rest on the bottom of Pearl Harbor, along with the bodies of nearly 1200 of her crew—struck down by the “air enthusiasts” of the Empire of the Sun. The lesson taught that morning, by aircraft directed by former Harvard student Isoroku Yamamoto, was a simple one: that “a saturation attack by huge numbers of low-value attackers”—as Pando Daily’s “War Nerd” columnist, Gary Brecher, has referred to this type of attack—can bring down nearly any target, no matter how powerful (http://exiledonline.com/the-war-nerd-this-is-how-the-carriers-will-die/all/1/). (It is a lesson the U.S. Navy has received more than once: in 2002, for instance, during the wargame “Millennium Challenge 2002,” Marine Corps Lieutenant General Paul K. Van Riper (fictionally) sent 16 ships to the bottom of the Persian Gulf with the creative use of, essentially, a bunch of cruise missiles and several dozen speedboats loaded with cans of gasoline, driven by gentlemen with, shall we say, a cavalier approach to mortality.) It’s the lesson that the cheap and shoddy can overcome quality—or in other words that, as the song says, the bigger they come, the harder they fall.

It’s a lesson that applies to more than merely the physical plane, as the Irish satirist Jonathan Swift knew: “Falsehood flies, and the Truth comes limping after,” the author of Gulliver’s Travels wrote in 1710. What Swift refers to is how saturation attacks can work on the intellectual as well as the physical plane—and as Emory University’s Mark Bauerlein (who, unfortunately for the warmth of my argument’s reception, endorsed Donald Trump in this past election) argued in Partisan Review in 2001, American academia has over the past several generations essentially become flooded with the mental equivalents of Al Qaeda speedboats. “Clear-sighted professors,” Bauerlein wrote then, understanding the conditions of academic research, “avoid empirical methods, aware that it takes too much time to verify propositions about culture, to corroborate facts with multiple sources, to consult primary documents, and to compile evidence adequate to inductive conclusions” (http://www.bu.edu/partisanreview/books/PR2001V68N2/HTML/files/assets/basic-html/index.html#226). Discussing It’s A Wonderful Life in terms of, say, the economic differences between banks like the one owned by Potter and the savings-and-loan run by George Bailey—and the political consequences therein—is, in other words, hugely expensive in terms of time and effort invested: it’s much more profitable to discuss the film in terms of its hidden racism. By “profitable” I mean not merely that it’s intrinsically easier, but also that such a claim is much more likely to upset people, and thus attract attention to its author: the crass stunt once called épater le bourgeois.

The current reward system of the humanities, in other words, favors those whom the philosopher Isaiah Berlin called “foxes” (who know a great many things) rather than “hedgehogs” (who know one important thing). To the present defenders of the humanities, of course, such is the point: that’s the pro-speedboat argument noted feminist literary scholar Jane Tompkins made as long ago as 1981, in her essay “Sentimental Power: Uncle Tom’s Cabin and the Politics of American Literary History.” There, Tompkins suggested that the “political and economic measures”—i.e., the battleships of American political discourse—“that constitute effective action for us” are, in reality, merely “superficial”: instead, what’s necessary are “not specific alterations in the current political and economic arrangements, but rather a change of heart” (http://engl651-jackson.wikispaces.umb.edu/file/view/Sentimental+Power.pdf). To those who think like Tompkins—or, apparently, Nobile—discussing It’s A Wonderful Life in terms of economics is to have missed the point entirely: what matters, according to them, isn’t the dreadnought clash of, for example, the unit banking system of the antebellum North (speedboats) versus the branch banking system of the antebellum South (battleships) within the sea of the American economy. (A contest that, incidentally, branch banking largely won in 1994, during Bill Clinton’s administration—a victory that in turn, because it helped to create the enormous “too big to fail” interstate banks of today, arguably played no small role in the crash of 2008.) Instead, what’s important is the seemingly-minor attack of a community college teacher upon a Titanic of American culture. Or, to put the point in terms popularized by Silicon Valley: the sheer BS quality of Vincent Nobile’s argument about It’s A Wonderful Life isn’t a bug—it’s a feature.

There is, however, one problem with such tactics—the same problem described by Rear Admiral Chuichi (“King Kong”) Hara of the Imperial Japanese Navy after the Japanese surrender in September 1945: “We won a great tactical victory at Pearl Harbor—and thereby lost the war.” Although, as the late American philosopher Richard Rorty observed in Achieving Our Country: Leftist Thought in Twentieth-Century America, “[l]eftists in the academy” have, in collaboration with “the Right,” succeeded in “making cultural issues central to public debate,” that hasn’t necessarily resulted in a victory for leftists, or even liberals (https://www.amazon.com/Achieving-Our-Country-Leftist-Twentieth-Century/dp/0674003128). Indeed, there’s some reason to suppose that, by discouraging certain forms of thought within left-leaning circles, academic leftists in the humanities have obscured what Elizabeth Drew, in the New York Review of Books, has called “unglamorous structural questions” in a fashion ultimately detrimental not merely to minority communities, but to all Americans (http://www.nybooks.com/articles/2016/08/18/american-democracy-betrayed/).

What Drew was referring to this past August was such matters as how—in the wake of the 2010 Census and the redistricting it entailed in every state in the Union—the Democrats ended up, in the 2012 election cycle, winning the popular vote for Congress “by 1.2 per cent, but still remained in the minority, with two hundred and one seats to the G.O.P.’s two hundred and thirty-four.” In other words, Democratic candidates for the House of Representatives got, as Katie Sanders noted in Politifact in 2013, “50.59 percent of the two-party vote” that November, but “won just 46.21 percent of seats”: only “the second time in 70 years that a party won the majority of the vote but didn’t win a majority of the House seats” (http://www.politifact.com/truth-o-meter/statements/2013/feb/19/steny-hoyer/steny-hoyer-house-democrats-won-majority-2012-popu/). The Republican advantage didn’t end there: as Rob Richie reported for The Nation in 2014, in that year’s congressional races Republicans won “about 52 percent of votes”—but ended “up with 57 percent of seats” (https://www.thenation.com/article/republicans-only-got-52-percent-vote-house-races/). And this year, the numbers suggest, the Republicans received less than half the popular vote—but will end up with fifty-five percent (241) of the total seats (435). These losses, Drew suggests, are ultimately due to the fact that “the Democrats simply weren’t as interested in such dry and detailed stuff as state legislatures and redistricting”—or, to put it less delicately, because potentially-Democratic schemers have been put to work constructing re-readings of old movies instead of building arguments that are actually politically useful.
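Put as bare arithmetic—using only the figures quoted above, with the 2016 vote share entered as a placeholder for “less than half”—the gap between votes and seats looks like this:

```python
# Vote share versus seat share for the House results cited above.

cases = [
    # (label, vote share %, seat share %)
    ("2012 Democrats",   50.59, 201 / 435 * 100),  # Politifact figures: 50.59% of votes, 201 of 435 seats
    ("2014 Republicans", 52.0,  57.0),             # The Nation's figures, as quoted
    ("2016 Republicans", 49.9,  241 / 435 * 100),  # 49.9% is a stand-in for "less than half" the vote
]

for label, votes, seats in cases:
    print(f"{label}: {votes:.2f}% of votes -> {seats:.1f}% of seats ({seats - votes:+.1f} points)")
```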

To put this even less delicately, many people on the liberal or left-wing side of the political aisle have, for the past several generations, spent their college educations learning, as Mark Bauerlein wrote back in 2001, how to “scoff[…] at empirical notions, chastising them as ‘naïve positivism.’” At the same time, a tiny minority among them—those destined to “relax their scruples and select a critical practice that fosters their own professional survival”—have learned, and are learning, to swim the dark seas of academia, taught by their masters how to live by feeding upon the minds of essentially defenseless undergraduates. The lucky ones, like Vince Nobile, manage—by the right mix of bowing and scraping—to land some kind of job security at some far-flung outpost of academia’s empire, where they make a living entertaining the yokels; the less-successful, of course, write deeply ironic blogs.

Be that as it may, there isn’t necessarily a connection between the humanistic academy’s flight from what Bauerlein calls “the canons of logic” and the fact that it was so easy—as John Cassidy of The New Yorker observed after this past presidential election—for so many in the American media and elsewhere “to dismiss the other outcome [i.e., Trump’s victory] as a live possibility” before the election. But Cassidy at least ascribed the ease with which so many predicted a Clinton victory to the fact that many “haven’t been schooled in how to think in probabilistic terms” (http://www.newyorker.com/news/john-cassidy/media-culpa-the-press-and-the-election-result). That lack of education, which extends from the impact of mathematics upon elections to the philosophical basis for holding elections at all (a basis that reaches far beyond the usual seventeenth-century suspects rounded up in even the most erudite of college classes, to medieval thinkers like Nicholas of Cusa, who argued in 1434’s Catholic Concordance that the “greater the agreement, the more infallible the judgment”—or in other words that speedboats are more trustworthy than battleships), most assuredly has had political consequences (http://www.cambridge.org/us/academic/subjects/politics-international-relations/texts-political-thought/nicholas-cusa-catholic-concordance?format=PB&isbn=9780521567732). While the ever-more abstruse academic turf wars between the sciences and the humanities might be good for the ever-dwindling numbers of tenured college professors, in other words, they are arguably disastrous, not only for Democrats and the populations they serve, but for the country as a whole. Although Clarence, angel second class, says to George Bailey, “we don’t use money in Heaven”—suggesting the way in which American academics swear off knowledge of the sciences upon entering their secular priesthood—George replies, “it comes in real handy down here, bub.” What It’s A Wonderful Life wants to tell us is that a nation whose leadership balances so precariously upon such a narrow educational foundation is, no matter what the program says, as vulnerable as a battleship on a bright Pacific morning.

Or a skyscraper, on a cloudless September one.

I Think I’m Gonna Be Sad

In all Republics the voice of a majority must prevail.
—Andrew Jackson.

I know no safe depository of the ultimate powers of the society but the people themselves, and if we think them not enlightened enough to exercise that control with a wholesome discretion, the remedy is not to take control from them, but to inform their discretion.
—Thomas Jefferson. “Letter to William Charles Jarvis.” 28 September, 1820

 

 

When the Beatles first came to America, in February of 1964—Michael Tomasky noted recently for The Daily Beast—they rode from their gig at Ed Sullivan’s show in New York City to their first American concert in Washington, D.C. by train, arriving two hours and fifteen minutes after leaving Manhattan. It’s a seemingly trivial detail—until it’s pointed out, as Tomasky realized, that anyone trying that trip today would be lucky to do it in three hours. American infrastructure, in short, is not what it was: as the American Society of Civil Engineers wrote in 2009’s Report Card for America’s Infrastructure, “years of delayed maintenance and lack of modernization have left Americans with an outdated and failing infrastructure that cannot meet our needs.” But what to do about it? “What’s needed,” wrote John Cassidy, of The New Yorker, recently, “is some way to protect essential infrastructure investments from the vicissitudes of congressional politics and the cyclical ups and downs of the economy.” He suggests, instead, “an independent, nonpartisan board” that could “carry out cost-benefit analyses of future capital-spending proposals.” This board, presumably, would be composed of professionals above the partisan fray, and thus capable of seeing to the long-term needs of the country. It all sounds really jake, and just the thing that the United States ought to do—except for the disappointing fact that the United States already has just such a board, and the existence of that “board” is the very reason why Americans don’t invest in infrastructure.

First though—has national spending on infrastructure declined, and is “politics” the reason for that decline? Many think so: “Despite the pressing infrastructure investment needs of the United States,” businessman Scott Thomasson wrote for the Council on Foreign Relations recently, “federal infrastructure policy is paralyzed by partisan wrangling over massive infrastructure bills that fail to move through Congress.” Those who take that line do have evidence, at least for the first proposition.

Take for instance the Highway Trust Fund, an account that provides federal money for investments in roads and bridges. In 2014, the Fund was in danger of “drying up,” as Rebecca Kaplan reported for CBS News at the time, mostly because the federal gas tax of 18.4 cents per gallon hadn’t been increased since 1993. Gradually, then, both the federal government and the states have, in relative terms, decreased spending on highways and other projects of that sort—so much so that people like former presidential economic advisor and president of Harvard University, Lawrence Summers, say (as Summers did last year) that “the share of public investment [in infrastructure], adjusting for depreciation … is zero.” (That is, new public investment is doing little more than offsetting the wearing-out of what’s already built.) So, while the testimony of the American Society of Civil Engineers might, to say the least, be biased—asking an engineer whether there ought to be more spending on engineering is like asking an ice cream man whether you need a sundae—there’s a good deal of evidence that the United States could stand more investment in the structures that support American life.

Yet, even if that’s so, is the relative decline in spending really the result of politics—rather than, say, a recognition that the United States simply doesn’t need the same sort of spending on highways and railroads that it once did? Maybe—because “the Internet,” or something—there simply isn’t the need for so much physical building any more. Still, aside from such spectacular examples as the Minneapolis Interstate 35W bridge collapse in 2007 or the failure of the levees in New Orleans during Hurricane Katrina in 2005, there’s evidence that the United States would be spending more money on infrastructure under a different political architecture.

Consider, for example, how the U.S. Senate “shot down … a measure to spend $50 billion on highway, rail, transit and airport improvements” in November of 2011, as The Washington Post’s Rosalind S. Helderman reported at the time. Although the measure was supported by 51 votes to 49, it failed to pass—because, as Helderman wrote, under the rules of the Senate “the measure needed 60 votes to proceed to a full debate.” Passing bills in the Senate these days requires, it seems, more than majority support—which, near as I can make out, is just what is meant by “congressional gridlock.” What “gridlock” means is the inability of a majority to pass its programs—absent that inability, the United States would almost certainly be spending more money on infrastructure. At this point, then, the question can be asked: why should the American government be built in a fashion that allows a minority to hold the majority for ransom?

The answer, it seems, might be deflating for John Cassidy’s idea: when the American Constitution was written, it inscribed into its very foundation what has been called (by The Economist, among many, many others) the “dream of bipartisanship”—the notion that, somewhere, there exists a group of very wise men (and perhaps women?) who can, if they were merely handed the power, make all the world right again, and make whole that which is broken. In America, the name of that body is the United States Senate.

As every schoolchild knows, the Senate was originally designed as a body of “notables,” or “wise men”: as the Senate’s own website puts it, the Senate was originally designed to be an “independent body of responsible citizens.” Or, as James Madison wrote to another “Founding Father,” Edmund Randolph, justifying the institution, the Senate’s role was “first to protect the people against their rulers [and] secondly to protect the people against transient impressions into which they themselves might be led.” That last justification may be the source of the famous anecdote regarding the Senate, which involves George Washington saying to Thomas Jefferson that “we pour our legislation into the senatorial saucer to cool it.” While the anecdote itself only appeared nearly a century later, in 1872, still it captures something of what the point of the Senate has always been held to be: a body that would rise above petty politicking and concern itself with the national interest—just the thing that John Cassidy recommends for our current predicament.

This “dream of bipartisanship,” as it happens, is not just one held by the founding generation. It’s a dream that, journalist and gadfly Thomas Frank has said, “is a very typical way of thinking for the professional class” of today. As Frank amplified his remarks, “Washington is a city of professionals with advanced degrees,” and the thought of those professionals is “‘[w]e know what the problems are and we know what the answers are, and politics just get in the way.’” To members of this class, Frank says, “politics is this ugly thing that you don’t really need.” For such people, in other words, John Cassidy’s proposal concerning an “independent, nonpartisan board” that could make decisions regarding infrastructure in the interests of the nation as a whole, rather than from the perspective of this or that group, might seem entirely “natural”—as the only way out of the impasse created by “political gridlock.” Yet in reality—as numerous historians have documented—it’s in fact precisely the “dream of bipartisanship” that created the gridlock in the first place.

An examination of history, in other words, demonstrates that—far from being the disinterested, neutral body that would look deep into the future to examine the nation’s infrastructure needs—the Senate has actually functioned to discourage infrastructure spending. After John Quincy Adams was elected president in the contested election of 1824, for example, the new leader proposed a sweeping program of investment not only in roads and canals and bridges, but also in a national university, subsidies for scientific research and learning, a national observatory, Western exploration, a naval academy, and a patent law to encourage invention. Yet, as Paul C. Nagel observes in his recent biography of the Massachusetts president, virtually none of Adams’ program was enacted: “All of Adams’ scientific and educational proposals were defeated, as were his efforts to enlarge the road and canal systems.” Which is true, so far as that goes. But Nagel’s somewhat bland remarks do not do justice to the matter of how Adams’ proposals were defeated.

After the election of 1824, which also produced the 19th Congress, Adams’ party had a majority in the House of Representatives—one reason why Adams became president at all, because that chaotic election, split among four major candidates, was decided (as per the Constitution) by the House of Representatives. But while Adams’ faction had a majority in the House, it did not in the Senate, where Andrew Jackson’s pro-Southern faction held sway. Throughout the 19th Congress, the Jacksonian party controlled the votes of 25 senators (in a Senate of 48 senators, two to a state), while Adams’ faction controlled, at the beginning of the Congress, 20. Given the structure of the U.S. Constitution, which requires agreement between the two houses of Congress as the national legislature before bills can become law, this meant that the Senate could—as it did—effectively veto any of the Adams party’s proposals: control of the Senate effectively meant control of the government itself. In short, a recipe for gridlock.

The point of the history lesson regarding the 19th Congress is that, far from being “above” politics as it was advertised to be in the pages of The Federalist Papers and other, more recent, accounts of the U.S. Constitution, the U.S. Senate proved, in the event, hardly more neutral than the House of Representatives—or even the average city council. Instead of considering the matter of investment in the future on its own terms, historians have argued, senators thought about Adams’ proposals in terms of how they would affect a question seemingly remote from the building of bridges or canals. Hence, although senators like John Tyler of Virginia—who would later become president himself—opposed Adams’ “bills that mandated federal spending for improving roads and bridges and other infrastructure” on the grounds that such bills “were federal intrusions on the states” (as Roger Matuz put it in his The Presidents’ Fact Book), many today argue that their motives were not so high-minded. In fact, they were as venal as motives can be.

Many of Adams’ opponents, that is—as William Lee Miller of the University of Virginia wrote in his Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress—thought that the “‘National’ program that [Adams] proposed would have enlarged federal powers in a way that might one day threaten slavery.” And, as Miller also remarks, the “‘strict construction’ of the Constitution and states’ rights that [Adams’] opponents insisted upon” were, “in addition to whatever other foundations in sentiment and philosophy they had, barriers of protection against interference with slavery.” In short—as historian Harold M. Hyman remarked in his magisterial A More Perfect Union: The Impact of the Civil War and Reconstruction on the Constitution—while the “constitutional notion that tight limits existed on what government could do was a runaway favorite” at the time, in reality these seemingly resounding defenses of limited government were actually motivated by a less-than-savory interest: “statesmen of the Old South,” Hyman wrote, found that these doctrines of constitutional limits were “a mighty fortress behind which to shelter slavery.” Senators, in other words, did not consider whether spending money on a national university would be a worthwhile investment for its own sake; instead, they worried about the effect that such an expenditure would have on slavery.

Now, it could still reasonably be objected at this point—and doubtless will be—that the 19th Congress is, in political terms, about as relevant to today’s politics as the Triassic: the debates between a few dozen, usually elderly, white men nearly two centuries ago have been rendered impotent by the passage of time. “This time, it’s different,” such arguments could, and probably will, say. Yet, at a different point in American history, it was well understood that the creation of such “blue-ribbon” bodies—such as the Senate—was in fact simply a means of elite control.

As Alice Sturgis, of Stanford University, wrote in the third edition of her The Standard Code of Parliamentary Procedure (now in its fourth edition, after decades in print, and still the paragon of the field), while some “parliamentary writers have mistakenly assumed that the higher the vote required to take an action, the greater the protection of the members,” in reality “the opposite is true.” “If a two-thirds vote is required to pass a proposal and sixty-five members vote for the proposal and thirty-five members vote against it,” Sturgis went on to write, “the thirty-five members make the decision”—which then makes for “minority, not majority, rule.” In other words, even if many circumstances in American life have changed since 1825, the American government is still largely structured in a fashion that solidifies the ability of a minority—like, say, oligarchical slaveowners—to control it. And while slavery was abolished by the Civil War, it remains the case that a minority can block things like infrastructure spending.
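
Sturgis’s arithmetic is simple enough to check mechanically. Here is a minimal sketch of my own (the vote totals and the two-thirds threshold come from her example; the function is purely illustrative, not anything from a parliamentary-procedure manual):

```python
# A minimal sketch of Sturgis's point: under a two-thirds requirement,
# 65 "ayes" against 35 "noes" is not enough to pass, so the 35 noes
# effectively make the decision.
def passes(ayes: int, noes: int, threshold: float = 2 / 3) -> bool:
    """Return True only if the ayes clear the supermajority threshold."""
    return ayes >= threshold * (ayes + noes)

print(passes(65, 35))  # False -- the minority of 35 prevails
print(passes(67, 33))  # True  -- only now does the majority's will carry
```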

Hence, since infrastructure spending is—nearly by definition—for the improvement of every American, it’s difficult to see how making infrastructure spending less democratic, as Cassidy wishes, would make it easier to spend money on infrastructure. We already have a system that’s not very democratic—arguably, that’s the reason why we aren’t spending money on infrastructure, not because (as pundits like Cassidy might have it) “Washington” has “gotten too political.” The problem with American spending on infrastructure, in sum, is not that it is political. In fact, it is precisely the opposite: it isn’t political enough. That people like John Cassidy—who, by the way, is a transplanted former subject of the Queen of England—think the contrary is itself, I’d wager, reason enough to give him, and people like him, what the boys from Liverpool called a ticket to ride.

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that popularity: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George R. R. Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and its occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel Prize laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
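
To make the “cumulated random numbers” image concrete, here is a minimal sketch of my own (an illustration of the idea, not Fama’s code), using the standard library’s statistics.correlation (Python 3.10 and later): it builds a price-like series by summing random daily changes, then checks whether one day’s change says anything about the next.

```python
import random
import statistics

random.seed(42)

# Daily "price changes" drawn at random, then cumulated into a price series.
changes = [random.gauss(0, 1) for _ in range(10_000)]
prices = [100.0]
for change in changes:
    prices.append(prices[-1] + change)

# "No memory": the correlation between one day's change and the next is ~0,
# so yesterday's move is useless for predicting today's.
lag1 = statistics.correlation(changes[:-1], changes[1:])
print(f"lag-1 autocorrelation of changes: {lag1:.4f}")
```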

Those consequences can be seen in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether the managers of mutual funds can actually do what they claim to do: outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s a fairly close idea to the one explored, for instance, by psychologist Amos Tversky and his co-authors in the paper “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
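
The “extreme returns by chance” point is easy to see in a quick simulation. This is a minimal sketch of my own (not Fama and French’s method; the fund count and horizon are made up for illustration): give thousands of managers literally zero skill and watch a handful of spotless ten-year records appear anyway.

```python
import random

random.seed(7)

N_FUNDS, YEARS = 5_000, 10

# Each "manager" has no skill at all: a 50/50 coin flip against the market
# each year. Count how many still beat the market every single year.
perfect_records = sum(
    all(random.random() < 0.5 for _ in range(YEARS))
    for _ in range(N_FUNDS)
)
print(f"{perfect_records} of {N_FUNDS} skill-free funds beat the market "
      f"{YEARS} years in a row")
# Expected by chance alone: 5,000 / 2**10, i.e. roughly five gaudy track records.
```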

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools begins to converge with that of their seemingly liberal opponents in departments of the humanities.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons,” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here is something like the claim that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County vs. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less-often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly,” wrote Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently, “majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter if anyone votes or not—which is to say that to very many smart, and supposedly “liberal” or “leftist,” people, the very notion that voting has any kind of “intrinsic value” to it at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

Human Events

Opposing the notion of minority rule, [Huger] argued that a majority was less likely to be wrong than a minority, and if this was not so “then republicanism must be a dangerous fallacy, and the sooner we return to the ‘divine rights’ of the kings the better.”
—Manisha Sinha. The Counterrevolution of Slavery. 2001.

Note that agreement [concordantia] is particularly required on matters of faith and the greater the agreement the more infallible the judgment.
—Nicholas of Cusa. Catholic Concordance. 1432.

 

It’s perhaps an irony, though a mild one, that, on the weekend of the celebrations of American independence, the most notable sporting events are the Tour de France, soccer’s European Championship, and Wimbledon—maybe all the more so now that Great Britain has voted to “Brexit,” i.e., to leave the European Union. A number of observers have explained that vote as at least somewhat analogous to the Donald Trump movement in the United States, in the first place because Donald himself called the “Brexit” decision a “great victory” at a press conference the day after the vote, and a few days later “praised the vote as a decision by British voters to ‘take back control of their economy, politics and borders,’” as The Guardian said Thursday. To the mainstream press, the similarity between the “Brexit” vote and Donald Trump’s candidacy is that—as Emmanuel Macron, France’s thirty-eight-year-old economy minister, said about “Brexit”—both are a conflict between those “content with globalization” and those “who cannot find” themselves within the new order. Both Trump and “Brexiters” are, in other words, depicted as returns of—as Andrew Solomon put it in The New Yorker on Tuesday—“the Luddite spirit that led to the presumed arson at Albion Mills, in 1791, when angry millers attacked the automation that might leave them unemployed.” “Trumpettes” and “Brexiters” are depicted as wholly out of touch and stuck in the past—yet, as a contrast between Wimbledon and the Tour de France may help illuminate, it could also be argued that it is, in fact, precisely those who make sneering references both to Trump and to “Brexiters” who represent, not a smiling future, but instead the return of the ancien régime.

Before he outright won the Republican nomination through the primary process, after all, Trump repeatedly complained that the G.O.P.’s process was “rigged”: that is, it was hopelessly stacked against an outsider candidate. And while a great deal of what Trump has said over the past year has been, at best, ridiculously exaggerated when not simply an outright lie, for that particular contention Trump has a great deal of evidence: as Josh Barro put it in Business Insider (not exactly a lefty rag) back in April, “the Republican nominating rules are designed to ignore the will of the voters.” Barro cites the example of Colorado’s Republican Party, which decided in 2015 “not to hold any presidential preference vote”—a decision that, as Barro rightly says, “took power away from regular voters and handed it to the sort of activists who would be likely … [to] participat[e] in party conventions.” And Colorado’s G.O.P. was hardly alone in making, quite literally, anti-democratic decisions about the presidential nominating process over the past year: North Dakota also decided against a primary or even a caucus, while Pennsylvania did hold a vote—but voters could only choose uncommitted delegates; i.e., without knowing to whom those delegates owed allegiance.

Still, as Mother Jones—which is a lefty rag—observed, also back in April, this is an argument that can as easily be worked against Trump as for him: in New York’s primary, for instance, “Kasich and Cruz won 40 percent of the vote but only 4 percent of the delegates,” while on Super Tuesday Trump’s opponents “won 66 percent of the vote but only 57 percent of the delegates.” And so on. Other critics have similarly attacked the details of Trump’s arguments: many, as Mother Jones’ Kevin Drum says, have argued that the details of the Republican nominating process could just as easily be used as evidence for “the way the Republican establishment is so obviously in the bag for Trump.” Those critics do have a point: investigating the whole process is exceedingly difficult because the trees overwhelm any sense of the forest.

Yet, such critics often use those details (about which they are right) to make an illicit turn. They have attacked, directly or indirectly, the premise of the point Trump tried to make in an op-ed piece in The Wall Street Journal this spring that—as Nate Silver paraphrased it on FiveThirtyEight—“the candidate who gets the most votes should be the Republican nominee.” In other words, they swerve from the particulars of this year’s primary process toward a very disturbing attack on the premises of democratic government itself: by disputing this or that particular they obscure whether or not the will of the voters should be respected. Hence, even if Trump’s whole campaign is, at best, wholly misdirected, the point he is making—a point very similar to the one made by Bernie Sanders’ campaign—is not something to be treated lightly. But that, it seems, is something that elites are, despite their protests, skirting close to doing: which is to say that, despite the accusations directed at Trump that he is leading a fascistic movement, it is actually arguable that it is Trump’s supposedly “liberal” opponents who are far closer to authoritarianism than he is, because they have no respect for the sanctity of the ballot. Or, to put it another way, that it is Trump’s voters—and, by extension, those for “Brexit”—who have the cosmopolitan view, while it is his opponents who are, in fact, the provincialists.

The point, I think, can be seen by comparing the scoring rules of Wimbledon and the Tour de France. The Tour, as may or may not be widely known, is won by the rider who—as Patrick Redford at Deadspin put it the other day in “The Casual Observer’s Guide to the Tour de France”—has “the lowest time over all 21 stages.” Although the race takes place over nearly the whole nation of France, and several other nations besides, and covers over 2,000 miles from the cobblestone flats of Flanders to the heights of the Alps and down to the streets of Paris, still the basic premise of the race is clear even to the youngest child: ride faster and win.

As I have noted before in this space, the rules of tennis are not like cycling’s—or even those of such familiar sports as baseball or football. In baseball and most other sports, including the Tour, the “score is cumulative throughout the contest … and whoever has the most points at the end wins,” as Allen Fox once described the difference between tennis and other games in Tennis magazine. But tennis is not like that: “The basic element of tennis scoring is the point,” as mathematician G. Edgar Parker has noted, “but tennis matches are won by the player who wins two out [of] three (or three out of five) sets.” Sets are themselves accumulations of games, not points. During each game, points are won and lost until one player has not only won at least four points but also holds a two-point advantage over the other; games can go back and forth until someone does. Then, at the set level, one player must win at least six games and, ordinarily, hold a two-game advantage as well, though most professional tournaments now settle a set that reaches six games apiece with a tiebreak. Finally, a player needs to win at least two—and, as in the men’s draw at Wimbledon, sometimes three—sets to take the match.

If the Tour de France were won like Wimbledon is won, in other words, the winner would not be determined by whoever had the lowest overall time: the winner would be, at least at first analysis, whoever won the greatest number of stages. But even that comparison would be too simple: if the Tour winner were determined by the winner of the most stages, that would imply that every stage was equal—and it is certainly not the case that all points, games, or sets in tennis are equal. “If you reach game point and win it,” as Fox writes in Tennis, “you get the entire game while your opponent gets nothing—all of the points he or she won in the game are eliminated.” The points in one game don’t carry over to the next game, and previous games don’t carry over to the next set. That means that some points, some games, and some sets are more important than others: “game point,” “set point,” and “match point” are common tennis terms that mean “the point whose winner may determine the winner of the larger category.” If tennis’ type of scoring system were applied to the Tour, in other words, the winner of the Tour would not be the overall fastest cyclist, nor even the cyclist who won the most stages, but the cyclist who won certain stages, say—or perhaps even certain moments within stages.
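
To see how much the nesting matters, here is a minimal sketch of my own (a toy model with simplified sets and no tiebreaks, not an official scoring engine): it plays out matches between two evenly matched players and counts how often the match winner actually won fewer raw points than the loser.

```python
import random

def play_game(p_a):
    """Play one game; return the winner plus the raw points each side won."""
    a = b = 0
    while not ((a >= 4 or b >= 4) and abs(a - b) >= 2):
        if random.random() < p_a:
            a += 1
        else:
            b += 1
    return ("A" if a > b else "B"), a, b

def play_match(p_a, sets_to_win=2, games_per_set=6):
    """Simplified match: first to 6 games takes a set, no tiebreaks."""
    points = {"A": 0, "B": 0}   # the cumulative, Tour-de-France-style tally
    sets = {"A": 0, "B": 0}
    while max(sets.values()) < sets_to_win:
        games = {"A": 0, "B": 0}
        while max(games.values()) < games_per_set:
            winner, a, b = play_game(p_a)
            games[winner] += 1
            points["A"] += a
            points["B"] += b
        sets[max(games, key=games.get)] += 1
    return max(sets, key=sets.get), points

random.seed(3)
inversions = 0
N = 2_000
for _ in range(N):
    winner, points = play_match(0.5)            # two evenly matched players
    loser = "B" if winner == "A" else "A"
    if points[winner] < points[loser]:
        inversions += 1
print(f"{inversions} of {N} matches were won by the player with fewer total points")
```

Under tennis-style aggregation, in other words, which points you win matters more than how many; under Tour-style aggregation, every second counts the same.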

Despite all the Sturm und Drang surrounding Donald Trump’s candidacy, then—the outright racism and sexism, the various moronic-seeming remarks concerning American foreign policy, not to mention the insistence that walls are more necessary to the American future than they even are to squash—there is one point about which he, like Bernie Sanders in the Democratic camp, is making cogent sense: the current process for selecting an American president is much more like a tennis match than it is like a bicycle race. After all, as Hendrik Hertzberg of The New Yorker once pointed out, Americans don’t elect their presidents “the same way we elect everybody else—by adding up all the voters’ votes and giving the job to the candidate who gets the most.” Instead, Americans have (as Ed Grabianowski puts it on the HowStuffWorks website) “a whole bunch of separate state elections.” And while both of these comments were directed at the presidential general election, which depends on the Electoral College, they apply equally, if not more so, to the primary process: at least in the general election in November, each state’s rules are more or less the same.
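
A minimal sketch (the states, vote totals, and elector counts below are invented for illustration, not real election data) shows why “a whole bunch of separate state elections” is not the same thing as adding up all the votes:

```python
# Hypothetical example: three made-up states, winner-take-all electors.
# Candidate B piles up a huge margin in one state; candidate A squeaks by
# in the other two -- and wins the contest despite far fewer total votes.
states = {
    "Smallia":  (51_000,  49_000, 4),    # (votes for A, votes for B, electors)
    "Midland":  (52_000,  48_000, 6),
    "Bigtonia": (30_000, 170_000, 9),
}

popular = {"A": 0, "B": 0}
electors = {"A": 0, "B": 0}
for votes_a, votes_b, ev in states.values():
    popular["A"] += votes_a
    popular["B"] += votes_b
    electors["A" if votes_a > votes_b else "B"] += ev

print(popular)   # {'A': 133000, 'B': 267000} -- B gets the most votes ...
print(electors)  # {'A': 10, 'B': 9}          -- ... but A wins the contest
```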

The truth, and hence power, of Trump’s critique of this process can be measured by the vitriol of the response to it. A number of people, on both sides of the political aisle, have attacked Trump (and Sanders) for drawing attention to the fashion in which the American political process works: when Trump pointed out that Colorado had refused to hold a primary, for instance, Reince Priebus, chairman of the Republican National Committee, tweeted (i.e., posted on Twitter, for those of you unfamiliar with, you know, the future) “Nomination process known for a year + beyond. It’s the responsibility of the campaigns to understand it. Complaints now? Give us all a break.” In other words, Priebus was implying that the rules were the same for all candidates, and widely known beforehand—so why the whining? Many on the Democratic side said the same about Sanders: as Albert Hunt put it in the Chicago Tribune back in April, both Trump and Sanders ought to shut up about the process: “Both [campaigns’] charges [about the process] are specious,” because “nobody’s rules have changed since the candidates entered the fray.” But as both Trump’s and Sanders’ campaigns have rightly pointed out, the rules of a contest do matter beyond just the bare fact that they are the same for every candidate: if the Tour de France were conducted under rules similar to tennis’, it seems likely that the race would be won by very different kinds of winners—sprinters, perhaps, who could husband their stamina until just the right moment. It’s very difficult not to think that the criticisms of Trump and Sanders as being “whiners” are disingenuous—an obvious attempt to protect a process that transparently benefits insiders.

Trump’s supporters, like Sanders’ and those who voted “Leave” in the “Brexit” referendum, have been labeled as “losers”—and while, to those who consider themselves “winners,” the thoughts of losers are (as the obnoxious phrase has it) like the thoughts of sheep to wolves, it seems indisputably true that the voters behind all three campaigns represent those for whom the global capitalism of the last several decades hasn’t worked so well. As Matt O’Brien noted in The Washington Post a few days ago, “the working class in rich countries have seen their real, or inflation-adjusted, incomes flatline or even fall since the Berlin Wall came down and they were forced to compete with all the Chinese, Indian, and Indonesian workers entering the global economy.” (Real economists would dispute O’Brien’s chronology here: at least in the United States, wages have not risen since the early 1970s, which far predates free trade agreements like the North American Free Trade Agreement signed by Bill Clinton in the 1990s. But O’Brien’s larger argument, as wrongheaded as it is in detail, instructively illustrates the muddleheadedness of the conventional wisdom.) In this fashion, O’Brien writes, “the West’s triumphant globalism” has “fuel[ed] a nationalist backlash”: “In the United States it’s Trump, in France it’s the National Front, in Germany it’s the Alternative for Germany, and, yes, in Britain it’s the Brexiters.” What’s astonishing about this, however, is that—despite not having, as so, so many articles decrying their horribleness have said, a middle-class sense of decorum—all of these movements stand for a principle that, you would think, the “intellectuals” of the world would applaud: the right of the people themselves to determine their own destiny.

It is they, in other words, who literally embody the principle enunciated by the opening words of the United States Constitution, “We the People,” or enunciated by the founding document of the French Revolution (which, by the by, began on a tennis court), The Declaration of the Rights of Man and the Citizen, whose first article holds that “Men are born and remain free and equal in rights.” In the world of this Declaration, in short, each person has—like every stage of the Tour de France, and unlike each point played during Wimbledon—precisely the same value. It’s a principle that Americans, especially, ought to remember this weekend of all weekends—a weekend that celebrates another Declaration, one that declares “We hold these truths to be self-evident, that all men are created equal.” Americans, in other words, despite the success of individual Americans like John McEnroe or Pete Sampras or Chris Evert, are not tennis players, as Donald Trump (and Bernie Sanders) have rightly pointed out over the past year—tennis being a sport, as one history of the game has put it, “so clearly aligned with both The Church and Aristocracy.” Americans, the people of the first modern nation in the world, ought instead to be associated with a sport unknown to the ancients and unthinkable without modern technology.

We are bicycle riders.