Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year, the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo of 1913. Yet why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment, puts it—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it”? The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote in the inaugural issue of Vox, in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors, then, for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. From that view, then, the Nazis must have succeeded because the German people were too poorly-educated to be able to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries between Germany, France, England and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much-higher percentage of Germans (82%) profiled in such books had exposure to university classes than those of other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like philosopher Martin Heidegger and legal theorist Carl Schmitt, who not only joined the party, but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of academics who directly supported the Nazis (if that were the question, the culpability of elevator repairmen could as well be interrogated). Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work and such, after all, is about just how terrible intellectuals have it—how many first novels, after all, are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required of the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to leave the ground, they may also have some costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Harpham, longtime director of the National Humanities Center—sometimes go even further and make the claim—as Harpham did in his 2011 book, The Humanities and the Dream of America—that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make a nonsense of history: traditionally, after all, it’s been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
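Bernoulli’s claim is easy to check in a few lines of code (a minimal sketch in Python; the flip counts and the random seed are mine, chosen purely for illustration):

```python
import random

def proportion_heads(n_flips, seed=0):
    """Flip a fair coin n_flips times and return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The more flips, the closer the observed share of heads tends to sit to 0.5.
for n in (10, 1_000, 100_000):
    print(n, round(proportion_heads(n), 3))
```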

For example, the Law of Large Numbers is why, as statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when Vanguard founder John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it’s better to amass large quantities of results, rather than depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters, an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—who called the result “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes how Condorcet’s process actually works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” marks the boundary of opinion between gamblers, Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet, thereby, proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.

Key to the functioning of Condorcet’s machine, in turn, is large numbers of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the point is that the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than an (extremely small) pool of voters consisting of the state legislature alone.
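Condorcet’s “miracle of aggregation” can be simulated in the same spirit (a sketch under an invented assumption: each voter independently picks the better alternative 51 percent of the time):

```python
import random

def majority_correct_rate(n_voters, p_correct=0.51, trials=1_000, seed=1):
    """Estimate how often a simple majority of independent voters chooses
    the better alternative, given each voter is right with p_correct."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct > n_voters / 2:
            wins += 1
    return wins / trials

# Per Condorcet, the majority's accuracy climbs toward 1 as the electorate grows.
for n in (1, 101, 1_001, 10_001):
    print(n, round(majority_correct_rate(n), 2))
```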

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science,” with the consequence that they had “an unhealthy faith in the validity of statistical evidence,” to adopting “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically-skeptical 1970s (and beyond) needs to reckon with the increasingly-large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and given the increasingly-documented decreasing social mobility of American life, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlay them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth century and early twentieth constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by such “academic superstar” professors of humanities as Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.


The End Of The Beginning

The essential struggle in America … will be between city men and yokels.
The yokels hang on because the old apportionments give them unfair advantages. …
But that can’t last.
—H.L. Mencken. 23 July 1928.

 

“It’s as if,” the American philosopher Richard Rorty wrote in 1998, “the American Left could not handle more than one initiative at a time, as if it either had to ignore stigma in order to concentrate on money, or vice versa.” Penn State literature professor Michael Bérubé sneered at Rorty at the time, writing that Rorty’s problem is that he “construes leftist thought as a zero-sum game,” as if somehow

the United States would have passed a national health-care plan, implemented a family-leave policy, and abolished ‘right to work’ laws if only … left-liberals in the humanities hadn’t been wasting our time writing books on cultural hybridity and popular music.

Bérubé then essentially asked Rorty, “where’s the evidence?”—knowing, of course, that it is impossible to prove a counterfactual, i.e. what didn’t happen. But even in 1998, there was evidence to think that Rorty was not wrong: that, by focusing on discrimination rather than on inequality, “left-liberals” have, as Rorty accused then, effectively “collaborated with the Right.” Take, for example, what are called “majority-minority districts,” which are designed to increase minority representation, and thus combat “stigma”—but have the effect of harming minorities.

A “majority-minority district,” according to Ballotpedia, “is a district in which a minority group or groups comprise a majority of the district’s total population.” They were created in response to Section Two of the Voting Rights Act of 1965, which prohibited drawing legislative districts in a fashion that would “improperly dilute minorities’ voting power.” Proponents of their use maintain that they are necessary in order to prohibit what’s sometimes called “cracking,” or diluting a constituency so as to ensure that it is not a majority in any one district. It’s also claimed that “majority-minority” districts are the only way to ensure minority representation in the state legislatures and Congress—and while that may or may not be true, it is certainly true that after drawing such districts there were more minority members of Congress than there were before: according to the Congressional Research Service, prior to 1969 (four years after passage) there were fewer than ten black members of Congress, a number that then grew until, after the 106th Congress (1999-01), there have consistently been between 39 and 44 African-American members of Congress. Unfortunately, while that may have been good for individual representatives, it may not be all that great for their constituents.

That’s because while “majority-minority” districts may increase the number of black and minority congressmen and women, they may also decrease the total numbers of Democrats in Congress. As The Atlantic put the point in 2013: after the redistricting process following the Census of 1990, the “drawing of majority-minority districts not only elected more minorities, it also had the effect of bleeding minority voters out of all the surrounding districts”—making them virtually impregnably Republican. In 2012, for instance, Barack Obama won 44 Congressional districts by more than 50 percent of the vote, while Mitt Romney won only eight districts by such a large percentage. Figures like these could seem overwhelmingly in favor of the Democrats, of course—until it is realized that, by winning congressional seats by such huge margins in some districts, Democrats are effectively losing votes in others.
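The arithmetic of “bleeding” voters into a few overwhelming districts is easy to sketch. In the toy example below (the district totals are invented for illustration, not drawn from the Census or The Atlantic), the same statewide vote produces very different seat counts depending on how one party’s supporters are grouped:

```python
def seats_won(districts):
    """Count the districts in which Party A out-polls Party B."""
    return sum(1 for a, b in districts if a > b)

# Both maps contain 500 Party-A voters and 400 Party-B voters in total.
spread_out = [(110, 70), (110, 70), (110, 70), (110, 70), (60, 120)]
packed     = [(250, 20), (150, 50), (40, 110), (30, 110), (30, 110)]

print(seats_won(spread_out))  # 4 of 5 seats for Party A
print(seats_won(packed))      # 2 of 5 seats on the identical statewide vote
```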

That’s why—despite the fact that he lost the popular vote—in 2012 Romney’s party won 226 of 435 Congressional districts, while Obama’s party won 209. In this past election, as I’ve mentioned in past posts, Republicans won 55% of the seats (241) despite getting 49.9% of the vote, while Democrats won 44% of the seats despite getting 47.3% of the vote. That might not seem like a large difference, but it is suggestive when these percentages always point in a single direction: going back to 1994, the year of the “Contract With America,” Republicans have consistently outperformed their share of the popular vote, while Democrats have consistently underperformed theirs.

From the perspective of the Republican party, that’s just jake, despite being—according to a lawsuit filed by the NAACP in North Carolina—due to “an intentional and cynical use of race.” Whatever the ethics of the thing, it’s certainly had major results. “In 1949,” as Ari Berman pointed out in The Nation not long ago, “white Democrats controlled 103 of 105 House seats in the former Confederacy,” while the last white Southern Democratic congressman not named Steve Cohen exited the House in 2014. Considered all together, then, as “majority-minority districts” have increased, the body of Southern congressmen (and women) has become like an Oreo: a thin surface of brown Democrats on the outside, thickly white and Republican on the inside—and nothing but empty calories.

Nate Silver, to be sure, discounted all this worry as so much ado about nothing in 2013: “most people,” he wrote then, “are putting too much weight on gerrymandering and not enough on geography.” In other words, “minority populations, especially African-Americans, tend to be highly concentrated in certain geographic areas,” so much so that it would be a Herculean task “not to create overwhelmingly minority (and Democratic) districts on the South Side of Chicago, in the Bronx or in parts of Los Angeles or South Texas.” Furthermore, even if that could be accomplished, such districts would violate “nonpartisan redistricting principles like compactness and contiguity.” But while Silver is right on the narrow ground he contests, it merely begs the question: why should geography have anything to do with voting? Silver’s position essentially ensures that African-American and other minority votes count for less. “Majority minority districts” imply that minority votes do not have as much effect on policy as votes in other kinds of districts: they create, as if the United States were some corporation with common and preferred shares, two kinds of votes.

Like discussions about, for example, the Electoral College—in which a vote in Wyoming is much more valuable than one in California—Silver’s position in other words implies that minority votes will remain less valuable than other votes because a vote in a “majority-minority” district will have a lower probability of electing a congressperson who is a member of a majority in Congress. What does it matter to African-Americans if one of their number is elected to Congress, if Congress can do nothing for them? To Silver, there isn’t any issue with majority-minority districts because such districts reflect the underlying proportions of the population—but what matters is whether whoever is elected can get policies enacted that benefit their constituents.

Right here, in other words, we get to the heart of the dispute between the deceased Rorty and his former student Bérubé: the difference between procedural and substantive justice. To some left-liberal types like Michael Bérubé, that might appear just swell: to coders in the Valley (represented by California’s 17th, the only majority-Asian district in the continental United States) or cultural-studies theorists in Boston, what might be important is simply the numbers of minority representatives, not the ability to pass a legislative agenda that’s fair for all Americans. It all might seem like no skin off their nose. (More ominously, it conceivably might even be in their economic interests: the humanities and the arts after all are intellectually well-equipped for a politics of appearances—but much less so for a politics of substance.) But ultimately this also affects them, and for a similar reason: urban professionals are, after all, urban—which means that their votes are, like majority-minority districts, similarly concentrated.

“Urban Democrat House members”—as The Atlantic also noted in 2013—“win with huge majorities, but winning a district with 80 percent doesn’t help the party gain any more seats than winning with 60 percent.” As Silver put the same point, “white voters in cities with high minority populations tend to be quite liberal, yielding more redundancy for Democrats.” Although these percentages might appear heartening to some of those within such districts, they ought to be deeply worrying: individual votes are not translating into actual political power. The more geographically concentrated Democrats are, the less capable their party becomes of accomplishing its goals. While winning individual races by huge margins might be satisfying to some, no one cares about running up the score in a junior varsity game.

What “left-liberal” types ought to be contesting, in other words, isn’t whether Congress has enough black and other minority people in it, but instead the ridiculous, anachronistic idea that voting power should be tied to geography. “People, not land or trees or pastures, vote,” the Supreme Court observed in the early 1960s; in Wesberry v. Sanders (1964), the Court ruled that, as much as possible, “one man’s vote in a Congressional election is to be worth as much as another’s.” By shifting discussion to procedural issues of identity and stigma, “majority-minority districts” obscure that much more substantive question of power. Like some gaggle of left-wing Roy Cohns, people like Michael Bérubé want to talk about who people are. Their opponents ought to reply by saying they’re interested in what people could be—and building a real road to get there.

Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.

 

There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view of American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that, many have thought, has long been due in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state elects a number of “electors” equal to that state’s representation in Congress; i.e., the number of congresspeople each state is entitled to by population, plus two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives introduced a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president without a majority of the votes. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (in perhaps the most exciting game in Series history—the only Game 7 that has ever ended with a walk-off home run) and thus won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million votes. But Harrison not only won more states than Cleveland, but also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues that Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that the total numbers of popular votes do not matter, but rather how those votes are spread around the country.

In this argument, then, games are to states just as runs are to votes. It’s an analogy that has an easy appeal to it: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire analogy) and so that understanding appears to transfer easily to the matter of presidential elections. Yet, clever as the analogy is, most people do not in fact understand the purpose of the World Series: although people think it is the task of the Series to identify the best baseball team in the major leagues, that is not what it is designed to do. It is not the purpose of the World Series to discover the best team in baseball, but instead to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams had to play each other. And the Giants saw no reason to do so.

As legendary Giants manager, John McGraw, said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance to win a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” But just how much luck is involved is something that the average fan hasn’t considered—though former Caltech physicist Leonard Mlodinow has. In Mlodinow’s book, The Drunkard’s Walk: How Randomness Rules Our Lives, the scientist writes that—just by virtue of doing the math—it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: let’s say that, for every game, we roll a one-hundred-sided die to determine whether the team with the 55 percent edge wins or not. If we do that four times, there’s a good chance that the inferior team is still in the series: that is, that the superior team has not won all the games. In fact, there’s a real possibility that the inferior team might turn the tables, and instead sweep the superior team. Seven games, in short, is just not enough games to demonstrate conclusively that one team is better than another.

In fact, in order to eliminate randomness as much as possible—that is, make it as likely as possible for the better team to win—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even in a case where one team has a two-thirds likelihood of winning a game, it would still take 23 games to make the chance of the weaker team winning the series less than 5 percent—and even then, there would remain a chance that the weaker team could win the series. Mathematically, then, winning a seven-game series is meaningless—there have been just too few games to eliminate the potential for a lesser team to beat a better team.

Just how mathematically meaningless a seven-game series is can be demonstrated by the case of a team that is only five percent better than another team: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on to say, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emp. added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which, as a matter of fact about the history of the World Series, is simply a point that true baseball professionals have always acknowledged: the World Series is not a competition, but an exhibition.
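Mlodinow’s figures are straightforward to verify (a minimal sketch of my own; it computes the exact binomial chance that the weaker team takes a series of a given length, which I take to be the calculation he describes):

```python
from math import comb

def upset_chance(p_strong, games):
    """Exact chance the weaker team wins a best-of-`games` series, assuming
    the stronger team wins each game independently with probability p_strong."""
    p_weak = 1 - p_strong
    need = games // 2 + 1  # wins required to take the series
    # Equivalent to playing out all the games and counting who has the majority.
    return sum(comb(games, k) * p_weak**k * p_strong**(games - k)
               for k in range(need, games + 1))

print(round(upset_chance(0.55, 7), 2))    # ~0.39: "about 4 times out of 10"
print(round(upset_chance(2 / 3, 7), 2))   # ~0.17: roughly "once every 5 matchups"
print(round(upset_chance(2 / 3, 23), 3))  # ~0.048: the best-of-23 case
print(round(upset_chance(0.55, 269), 3))  # ~0.05: the best-of-269 case
```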

What the New York Giants were saying in 1904 then—and Mlodinow more recently—is that establishing the real worth of something requires a lot of trials: many, many different repetitions. That’s something that all of us ought to know from experience: to learn anything, for instance, requires a lot of practice. (Even if the famous “10,000 hour rule” New Yorker writer Malcolm Gladwell concocted for his book, Outliers: The Story of Success, has been complicated by those who did the original research Gladwell based his claim upon.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, to use the more natural language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite of the reason Natapoff brought it up. Namely, if the Electoral College is like the World Series, and the World Series is not designed to find the best team in baseball but is instead merely an exhibition, then that implies that the Electoral College is not a serious attempt to find the best president—because what the Law would appear to advise is that, in order to obtain a better result, it is better to gather more voters.

Yet the currently-fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or just things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) a vote is cast, rather than whether it was cast? To many, if not most, literary-type intellectuals, the answer (in favor of the former at the expense of the latter) is clear—but as any statistician will tell you, it’s possible for any run of luck to continue for quite a bit longer than the average person might expect. (That’s one reason why it takes at least 23 games to minimize the randomness between two closely-matched baseball teams.) Even so, it remains difficult to believe—as it would seem that many today, both within and without the academy, do—that the umpire can continue to call every pitch a strike.

 

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, or the idea that everything is a matter of human discussion, is not necessarily so obviously on the side of good and not evil as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
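What a series of “cumulated random numbers” looks like is easy to generate (a sketch; the price path below is purely synthetic and stands in for no real security and no calculation of Fama’s):

```python
import random

def random_walk_prices(start=100.0, days=250, daily_vol=0.01, seed=42):
    """Build a synthetic price series whose daily changes are pure noise."""
    rng = random.Random(seed)
    prices = [start]
    for _ in range(days):
        prices.append(prices[-1] * (1 + rng.gauss(0, daily_vol)))
    return prices

series = random_walk_prices()
# Yesterday's move says nothing about today's: the "memory" Fama could not
# find in real prices is absent here by construction.
print(round(series[0], 2), round(series[-1], 2))
```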

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether it was true that the managers of mutual funds can actually do what they claim they can do, and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s a fairly close idea to the one explored, for instance, by psychologist Amos Tversky and his co-authors in the paper, “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
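Fama’s point about chance track records can be simulated directly (a sketch under invented assumptions: five thousand “managers,” each of whom is a coin-flipper with no skill at all):

```python
import random

def skill_free_track_records(n_funds=5_000, n_years=10, seed=7):
    """Give every manager a 50/50 chance of beating the market each year,
    then count how many end up with gaudy 9-for-10 or better records."""
    rng = random.Random(seed)
    gaudy = 0
    for _ in range(n_funds):
        wins = sum(rng.random() < 0.5 for _ in range(n_years))
        if wins >= 9:
            gaudy += 1
    return gaudy

# The expected count is about 54 of the 5,000 funds (11/1024 of them): plenty
# of "extreme returns by chance," with zero skill anywhere in the simulation.
print(skill_free_track_records())
```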

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools and that of their seemingly liberal opponents in departments of the humanities begin to converge.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here is something like the claim that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County vs. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less-often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly”—wrote Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently—“majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter if anyone votes or not—which is to say that to very many smart, and supposedly “liberal” or “leftist” people, the very notion that voting has any kind of “intrinsic value” to it at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

Human Events

Opposing the notion of minority rule, [Huger] argued that a majority was less likely to be wrong than a minority, and if this was not so “then republicanism must be a dangerous fallacy, and the sooner we return to the ‘divine rights’ of the kings the better.”
—Manisha Sinha. The Counterrevolution of Slavery. 2001.

Note that agreement [concordantia] is particularly required on matters of faith and the greater the agreement the more infallible the judgment.
—Nicholas of Cusa. Catholic Concordance. 1432.

 

It’s perhaps an irony, though a mild one, that on the weekend of the celebration of American independence the most notable sporting events are the Tour de France, soccer’s European Championship, and Wimbledon—maybe all the more so now that Great Britain has voted to “Brexit,” i.e., to leave the European Union. A number of observers have explained that vote as at least somewhat analogous to the Donald Trump movement in the United States, in the first place because Donald himself called the “Brexit” decision a “great victory” at a press conference the day after the vote, and a few days later “praised the vote as a decision by British voters to ‘take back control of their economy, politics and borders,’” as The Guardian said Thursday. To the mainstream press, the similarity between the “Brexit” vote and Donald Trump’s candidacy is that—as Emmanuel Macron, France’s thirty-eight-year-old economy minister, said about “Brexit”—both are a conflict between those “content with globalization” and those “who cannot find” themselves within the new order. Both Trump and “Brexiters” are, in other words, depicted as returns of—as Andrew Solomon put it in The New Yorker on Tuesday—“the Luddite spirit that led to the presumed arson at Albion Mills, in 1791, when angry millers attacked the automation that might leave them unemployed.” “Trumpettes” and “Brexiters” are depicted as wholly out of touch and stuck in the past—yet, as a contrast between Wimbledon and the Tour de France may help illuminate, it could also be argued that it is, in fact, precisely those who make sneering references both to Trump and to “Brexiters” who represent, not a smiling future, but instead the return of the ancien régime.

Before he outright won the Republican nomination through the primary process, after all, Trump repeatedly complained that the G.O.P.’s process was “rigged”: that is, it was hopelessly stacked against an outsider candidate. And while a great deal of what Trump has said over the past year has been, at best, ridiculously exaggerated when not simply an outright lie, in that contention Trump has a great deal of evidence: as Josh Barro put it in Business Insider (not exactly a lefty rag) back in April, “the Republican nominating rules are designed to ignore the will of the voters.” Barro cites the example of Colorado’s Republican Party, which decided in 2015 “not to hold any presidential preference vote”—a decision that, as Barro rightly says, “took power away from regular voters and handed it to the sort of activists who would be likely … [to] participat[e] in party conventions.” And Colorado’s G.O.P. was hardly alone in making, quite literally, anti-democratic decisions about the presidential nominating process over the past year: North Dakota also decided against a primary or even a caucus, while Pennsylvania did hold a vote—but voters could only choose uncommitted delegates; i.e., without knowing to whom those delegates owed allegiance.

Still, as Mother Jones—which is a lefty rag—observed, also back in April, this is an argument that can be worked against Trump as easily as for him: in New York’s primary, for instance, “Kasich and Cruz won 40 percent of the vote but only 4 percent of the delegates,” while on Super Tuesday Trump’s opponents “won 66 percent of the vote but only 57 percent of the delegates.” And so on. Other critics have similarly attacked the details of Trump’s arguments: many, as Mother Jones’ Kevin Drum says, have argued that the details of the Republican nominating process could just as easily be used as evidence for “the way the Republican establishment is so obviously in the bag for Trump.” Those critics do have a point: investigating the whole process is exceedingly difficult because the trees overwhelm any sense of the forest.

Yet such critics often use those details (about which they are right) to make an illicit turn. They have attacked, directly or indirectly, the premise of the point Trump tried to make in an op-ed piece in The Wall Street Journal this spring: that—as Nate Silver paraphrased it on FiveThirtyEight—“the candidate who gets the most votes should be the Republican nominee.” In other words, they swerve from the particulars of this year’s primary process toward something far more disturbing: an attack on the very premises of democratic government itself, since by disputing this or that particular they obscure the question of whether the will of the voters should be respected at all. Hence, even if Trump’s whole campaign is, at best, wholly misdirected, the point he is making—a point very similar to the one made by Bernie Sanders’ campaign—is not something to be treated lightly. But that, it seems, is something that elites are, despite their protests, skirting close to doing: which is to say that, despite the accusations directed at Trump that he is leading a fascistic movement, it is arguably Trump’s supposedly “liberal” opponents who are far closer to authoritarianism than he is, because they have no respect for the sanctity of the ballot. Or, to put it another way, it is Trump’s voters—and, by extension, those for “Brexit”—who have the cosmopolitan view, while it is his opponents who are, in fact, the provincialists.

The point, I think, can be seen by comparing the scoring rules of Wimbledon with those of the Tour de France. The Tour, as may or may not be known, is won by the rider who—as Patrick Redford at Deadspin put it the other day in “The Casual Observer’s Guide to the Tour de France”—has “the lowest time over all 21 stages.” Although the race takes place over nearly the whole nation of France, and several other countries besides, and covers over 2,000 miles from the cobblestone flats of Flanders to the heights of the Alps and down to the streets of Paris, still the basic premise of the race is clear even to the youngest child: ride faster and win. Explaining Wimbledon, however—like explaining the rules of the G.O.P. nominating process (or, for that matter, the Democratic nominating process)—is not so simple.

As I have noted before in this space, the rules of tennis are not like those of cycling—or even of such familiar sports as baseball or football. In baseball and most other sports, including the Tour, the “score is cumulative throughout the contest … and whoever has the most points at the end wins,” as Allen Fox once described the difference between tennis and other games in Tennis magazine. But tennis is not like that: “The basic element of tennis scoring is the point,” as mathematician G. Edgar Parker has noted, “but tennis matches are won by the player who wins two out of three (or three out of five) sets.” Sets are themselves accumulations of games, not points. During each game, points are won and lost until one player has not only won at least four points but also holds a two-point advantage over the other; games go back and forth until one player has that advantage. Then, at the set level, one player must have won at least six games (though the rules vary at some professional tournaments as to whether that player also needs a two-game advantage to win the set). Finally, then, a player needs to win at least two, and—as at Wimbledon—sometimes three, sets to take a match.

If the Tour de France were won the way Wimbledon is won, in other words, the winner would not be determined by whoever had the lowest overall time: the winner would be, at least at first analysis, whoever won the greatest number of stages. But even that comparison would be too simple: if the Tour winner were determined by the winner of the most stages, that would imply that each stage were equal—and it is certainly not the case that all points, games, or sets in tennis are equal. “If you reach game point and win it,” as Fox writes in Tennis, “you get the entire game while your opponent gets nothing—all of the points he or she won in the game are eliminated.” The points in one game don’t carry over to the next game, and previous games don’t carry over to the next set. That means that some points, some games, and some sets are more important than others: “game point,” “set point,” and “match point” are common tennis terms that mean “the point whose winner may determine the winner of the larger category.” If tennis’ type of scoring system were applied to the Tour, in other words, the winner of the Tour would not be the overall fastest cyclist, nor even the cyclist who won the most stages, but the cyclist who won certain stages, say—or perhaps even certain moments within stages.
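
To make the contrast concrete, here is a minimal sketch in Python; it is my own illustration, not anything drawn from Fox, Parker, or Redford, and the rules are deliberately simplified (no tiebreaks, no deuce terminology beyond the two-point margin, best of three sets). It scores one and the same sequence of points two ways: cumulatively, as the Tour tallies time, and hierarchically, as Wimbledon nests points into games, sets, and a match. The function names and the constructed point sequence are hypothetical.

```python
# A hypothetical, simplified sketch: the same point sequence scored two ways.
# Cumulative scoring (Tour-style) rewards whoever wins the most points;
# nested scoring (Wimbledon-style) rewards whoever wins the right points.

def other(player):
    return "B" if player == "A" else "A"

def play_match(points):
    """points: ordered list of 'A'/'B' point winners.
    Returns (winner under tennis-style rules, total points per player)."""
    totals = {"A": 0, "B": 0}        # cumulative, Tour-style tally
    sets_won = {"A": 0, "B": 0}
    games_won = {"A": 0, "B": 0}
    game_pts = {"A": 0, "B": 0}
    for p in points:
        totals[p] += 1
        game_pts[p] += 1
        # A game takes at least four points and a two-point margin.
        if game_pts[p] >= 4 and game_pts[p] - game_pts[other(p)] >= 2:
            games_won[p] += 1
            game_pts = {"A": 0, "B": 0}
            # A set takes at least six games and a two-game margin
            # (tiebreaks are ignored in this simplification).
            if games_won[p] >= 6 and games_won[p] - games_won[other(p)] >= 2:
                sets_won[p] += 1
                games_won = {"A": 0, "B": 0}
                if sets_won[p] == 2:  # best of three sets
                    return p, totals
    return None, totals

# Construct a match: A sweeps the first set at love, B narrowly takes the
# next two. Every game A wins is 4-0; every game B wins is 4-2.
love_game_A = ["A"] * 4
close_game_B = ["B", "A", "B", "A", "B", "B"]
set_one = love_game_A * 6                              # A takes the set 6-0
close_set = ((love_game_A + close_game_B) * 4
             + close_game_B + close_game_B)            # B takes the set 6-4

winner, totals = play_match(set_one + close_set + close_set)
print("Wimbledon-style match winner:", winner)                           # B
print("Tour-style winner (most points):", max(totals, key=totals.get))   # A
print("Point totals:", totals)                                           # A: 80, B: 48
```

Run as written, the sketch reports that player B takes the match two sets to one while winning only 48 points to player A’s 80: under cumulative rules the stronger overall competitor wins, while under nested rules a competitor who takes the right points at the right moments can prevail while losing the overall count.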

Despite all the Sturm und Drang surrounding Donald Trump’s candidacy, then—the outright racism and sexism, the various moronic-seeming remarks concerning American foreign policy, not to mention the insistence that walls are more necessary to the American future than they even are to squash—there is one point about which he, like Bernie Sanders in the Democratic camp, is making cogent sense: the current process for selecting an American president is much more like a tennis match than it is like a bicycle race. After all, as Hendrik Hertzberg of The New Yorker once pointed out, Americans don’t elect their presidents “the same way we elect everybody else—by adding up all the voters’ votes and giving the job to the candidate who gets the most.” Instead, Americans have (as Ed Grabianowski puts it on the HowStuffWorks website), “a whole bunch of separate state elections.” And while both of these comments were directed at the presidential general election, which depends on the Electoral College, they apply equally, if not more so, to the primary process: at least in the general election in November, each state’s rules are more or less the same.

The truth, and hence the power, of Trump’s critique of this process can be measured by the vitriol of the response to it. A number of people, on both sides of the political aisle, have attacked Trump (and Sanders) for drawing attention to the fashion in which the American political process works: when Trump pointed out that Colorado had refused to hold a primary, for instance, Reince Priebus, chairman of the Republican National Committee, tweeted (i.e., posted on Twitter, for those of you unfamiliar with, you know, the future) “Nomination process known for a year + beyond. It’s the responsibility of the campaigns to understand it. Complaints now? Give us all a break.” In other words, Priebus was implying that the rules were the same for all candidates, and widely known beforehand—so why the whining? Many on the Democratic side said the same about Sanders: as Albert Hunt put it in the Chicago Tribune back in April, both Trump and Sanders ought to shut up about the process: “Both [campaigns’] charges [about the process] are specious,” because “nobody’s rules have changed since the candidates entered the fray.” But as both Trump’s and Sanders’ campaigns have rightly pointed out, the rules of a contest do matter beyond just the bare fact that they are the same for every candidate: if the Tour de France were conducted under rules similar to tennis’, it seems likely that the race would be won by very different kinds of winners—sprinters, perhaps, who could husband their stamina until just the right moment. It’s very difficult not to think that the criticisms of Trump and Sanders as “whiners” are disingenuous—an obvious attempt to protect a process that transparently benefits insiders.

Trump’s supporters, like Sanders’ and those who voted “Leave” in the “Brexit” referendum, have been labeled “losers”—and while, to those who consider themselves “winners,” the thoughts of losers are (as the obnoxious phrase has it) like the thoughts of sheep to wolves, it seems indisputably true that the voters behind all three campaigns represent those for whom the global capitalism of the last several decades hasn’t worked so well. As Matt O’Brien noted in The Washington Post a few days ago, “the working class in rich countries have seen their real, or inflation-adjusted, incomes flatline or even fall since the Berlin Wall came down and they were forced to compete with all the Chinese, Indian, and Indonesian workers entering the global economy.” (Real economists would dispute O’Brien’s chronology here: at least in the United States, wages have not risen since the early 1970s, which far predates free trade agreements like the North American Free Trade Agreement signed by Bill Clinton in the 1990s. But O’Brien’s larger argument, as wrongheaded as it is in detail, instructively illustrates the muddleheadedness of the conventional wisdom.) In this fashion, O’Brien writes, “the West’s triumphant globalism” has “fuel[ed] a nationalist backlash”: “In the United States it’s Trump, in France it’s the National Front, in Germany it’s the Alternative for Germany, and, yes, in Britain it’s the Brexiters.” What’s astonishing about this, however, is that—despite not having, as so, so many articles decrying their horribleness have said, a middle-class sense of decorum—all of these movements stand for a principle that, you would think, the “intellectuals” of the world would applaud: the right of the people themselves to determine their own destiny.

It is they, in other words, who literally embody the principle enunciated by the opening words of the United States Constitution, “We the People,” or enunciated by the founding document of the French Revolution (which, by the by, began on a tennis court), The Declaration of the Rights of Man and the Citizen, whose first article holds that “Men are born and remain free and equal in rights.” In the world of this Declaration, in short, each person has—like every stage of the Tour de France, and unlike each point played during Wimbledon—precisely the same value. It’s a principle that Americans, especially, ought to remember this weekend of all weekends—a weekend that celebrates another Declaration, one whose opening lines read “We hold these truths to be self-evident, that all men are created equal.” Americans, in other words, despite the success of individual Americans like John McEnroe or Pete Sampras or Chris Evert, are not tennis players, as Donald Trump (and Bernie Sanders) have rightfully pointed out over the past year—a sport, as one history of the game has put it, “so clearly aligned with both The Church and Aristocracy.” Americans, as the first modern nation in the world, ought instead to be associated with a sport unknown to the ancients and unthinkable without modern technology.

We are bicycle riders.

Bait and Switch

Golf, Race, and Class: Robert Todd Lincoln, Oldest Son of President Abraham Lincoln, and President of the Chicago Golf Club

But insiders also understand one unbreakable rule: They don’t criticize other insiders.
—Senator Elizabeth Warren. A Fighting Chance. 2014.

… cast out first the beam out of thine own eye …
—Matthew 7:5

 

“Where are all the black golfers?” Golf magazine’s Michael Bamberger asked back in 2013: Tiger Woods’ 1997 victory at the Masters, Bamberger says, was supposed to open “the floodgates … to minority golfers in general and black golfers in particular.” But nearly two decades later Tiger is the only player on the PGA Tour to claim to be African-American. It’s a question likely to loom larger as time passes: Woods missed the cut at last week’s British Open, the first time in his career he has missed the cut in back-to-back majors, and FiveThirtyEight.com’s line from April about Woods (“What once seemed to be destiny—Woods’ overtaking of Nicklaus as the winningest major champion ever—now looks like a fool’s notion”) seems more prophetic than ever. As Woods’ chase for Nicklaus fades, almost certainly the question of Woods’ legacy will turn to the renaissance in participation Woods was supposedly going to unleash—a renaissance that never happened. But where will the blame fall? Once we exclude Woods from responsibility for playing Moses, is the explanation for why there are no black golfers, as Bamberger seems to suggest, that golf is racist? Or is it, as Bamberger’s own reporting shows, more likely due to the economy? And further, if we can’t blame Woods for not creating more golfers in his image, can we blame Bamberger for giving Americans the story they want instead of the story they need?

Consider, for instance, Bamberger’s mention of the “Tour caddie yard, once a beautiful example of integration”—and now, he writes, “so white it looks like Little Rock Central High School, circa 1955.” Or his description of how, in “Division I men’s collegiate golf … the golfers, overwhelmingly, are white kids from country-club backgrounds with easy access to range balls.” Although Bamberger omits the direct reference, the rise of the lily-white caddie yard is surely not due to a racist desire to bust up the beautifully diverse caddie tableau he describes, just as the presence of the young white golfers at the highest level of collegiate golf surely owes more to their long-term access to range balls than to the color of their skin. The mysterious disappearance of the black professional golfer, that is, is more likely due—as the title of a story by Forbes contributor Bob Cook has it—to “How A Declining Middle Class Is Killing Golf” than to golf’s racism. An ebbing tide lowers all boats.

“Golf’s high cost to entry and association with an older, moneyed elite has resulted in young people sending it to the same golden scrap heap as [many] formerly mass activities,” as Cook wrote in Forbes—and so, as “people [have] had less disposable income and time to play,” golf has declined among all Americans and not just black ones. But then, maybe that shouldn’t be surprising when, as Scientific American reported in March, the “top 20% of US households own more than 84% of the wealth, and the bottom 40% combine for a paltry 0.3%,” or when, as Time said two years ago, “the wages of median workers have remained essentially the same” for the past thirty years. So it seems likelier that the non-existent black golfer can be found at the bottom of the same hole to which many other once-real and now-imaginary Americans—like a unionized, skilled, and educated working class—have been consigned.

The conjuring trick, however, whereby the disappearance of black professional golfers becomes a profound mystery, rather than a thoroughly understandable consequence of the well-documented overall decline in wages for all Americans over the past two generations, would be no surprise to Walter Benn Michaels of the University of Illinois at Chicago. “In 1947,” Michaels has pointed out, for instance, rehearsing the statistics, “the bottom fifth of wage-earners got 5 per cent of total income,” while “today it gets 3.4 per cent.” But the literature professor is aware not only that inequality is rising, but also that it’s long been a standard American alchemy to turn economic matters into racial ones.

Americans, Michaels has written, “love thinking that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” Why? Because if the differences between us are due to money, and the lack of it, then there’s a “need to get rid of inequality or to justify it”—while on the other hand, if those differences are racial, then there’s a simple solution: “appreciating our diversity.” In sum, if the problem is due to racism, then we can solve it with workshops and such—but if the problem is due to, say, an historic loss of the structures of middle-class life, then a seminar during lunch probably won’t cut it.

Still, it’s hard to blame Bamberger for refusing to see what’s right in front of him: Americans have been turning economic issues into racial ones for some time. Consider the argument advanced by the Southern Literary Messenger (the South’s most important prewar magazine) in 1862: the war, the magazine said, was due to “the history of racial strife” between “a supposedly superior race” that had unwisely married its fortune “with one it considered inferior, and with whom co-existence on terms of political equality was impossible.” According to this journal, the Civil War was due to racial differences, not to any kind of clash between two different economic interests—one of which was getting incredibly wealthy by the simple expedient of refusing to pay its workers, and then protecting its investment by making secret and large-scale purchases of government officials while being shielded by bought-and-paid-for judges. (You know, not like today.)

Yet despite how ridiculous it sounds—because it is—the theory does have a certain kind of loopy logic. According to these Southern, and some Northern, minds, the two races were so widely divergent politically and socially that their deep, historical differences were the obvious explanation for the conflict between the two sections of the country—instead of that conflict being the natural result of allowing a pack of lying, thieving criminals to prey upon decent people. The identities of these two races—as surely you have already guessed, since the evidence is so readily apparent—were, as historian Christopher Hanlon graciously informs us: “the Norman and Saxon races.”

Duh.

Admittedly, the theory does sound pretty out there—though I suspect it sounds a lot more absurd now that you know what races these writers were talking about, rather than the ones I suspect you thought they were talking about. Still, it’s worth knowing something of the details if only to understand how these could have been considered rational arguments: to understand, in other words, how people can come to think of economic matters as racial, or cultural, ones.

In the “Normans vs. Saxons” version of this operation, the theory comes in two flavors. According to University of Georgia historian James Cobb, the Southern flavor of this racial theory held that Southerners were “descended from the Norman barons who conquered England in the 11th century and populated the upper classes of English society,” and were thus naturally equipped for leadership. Northern versions held much the same, but flipped the script: as Ralph Waldo Emerson wrote in the 1850s, the Normans were “greedy and ferocious dragoons, sons of greedy and ferocious pirates” who had, as Hanlon says, “imposed serfdom on their Saxon underlings.” To both sides, then, the great racial conflagration—the racial apocalypse destined to set the continent alight—would be fought between Southern white people … and Northern white people.

All of which is to say that Americans have historically liked to make their economic conflicts about race, and they haven’t always been particular about which ones—which might seem like downer news. But there is, perhaps, a bright spot to all this: whereas the Civil War-era writers treated “race” as a real description of a natural kind—as if their descriptions of “Norman” or “Saxon” had as much validity as a description of a great horned toad or Fraser’s eagle owl—nowadays Americans like to “dress race up as culture,” as Michaels says. This current orthodoxy holds that “the significant differences between us are cultural, that such differences should be respected, that our cultural heritages should be perpetuated, [and] that there’s a value in making sure that different cultures survive.” Nobody mentions that substituting “race” and “racial” for “culture” and “cultural” doesn’t change the sentence’s meaning in any important respects.

Still, it certainly has had an effect on current discourse: it’s what caused Bamberger to write that Tiger Woods “seems about as culturally black as John Boehner.” The phrase “culturally black” is arresting, because it implies that “race” may not be a biological category, as it was for the “Normans vs. Saxons” theorists. And certainly, that’s a measure of progress: just a generation or two ago it was possible to refer unselfconsciously to race in an explicitly biological way. So in that sense, it might be possible to think that because a Golf writer feels it necessary to clarify that “blackness” is a cultural, and not a biological, category, that constitutes a victory.

The credit for that victory surely goes to what the “heirs of the New Left and the Sixties have created, within the academy”—as Stanford philosopher Richard Rorty wrote before his death—“a cultural Left.” The victories of that Left have certainly been laudable—they’ve even gotten a Golf magazine writer to talk about a “cultural,” instead of biological, version of whatever “blackness” is! But there’s also a cost, as Rorty also wrote: this “cultural Left,” he said, “thinks more about stigma than money, more about deep and hidden psychosexual motivations than about shallow and evident greed.” Seconding Rorty’s point, University of Chicago philosopher Martha Nussbaum has written that academia today is characterized by “the virtually complete turning away from the material side of life, toward a type of verbal and symbolic politics”—a “cultural Left” that thinks “the way to do … politics is to use words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness,” and that “instructs its members that there is little room for large-scale social change, and maybe no room at all.” So, while it might be slightly better that mainstream publications now think of race in cultural, instead of biological, terms, this might not be the triumph it’s sometimes said to be, given the real facts of economic life in the United States.

Yet the advice of the American academy is that what the United States needs is more talk about culture, rather than a serious discussion of political economy. The argument is a simple one, summarized by the recently deceased historical novelist E.L. Doctorow in an essay called “Notes on the History of Fiction”: there, the novelist argues that while there is a Richard III Society in England attempting to “recover the reputation of their man from the damage done to it by the calumnies of Shakespeare’s play,” all their efforts are useless—“there is a greater truth for the self-reflection of all mankind in the Shakespearean vision of his life than any simple set of facts can summon.” What matters, Doctorow is arguing, isn’t the real Richard III—coincidentally, the man apparently recently dug up in an English parking lot—but rather Shakespeare’s approximation of him, in just the same way that some Civil War-era writers argued that what mattered was “race” rather than the economics of slavery, or that Michael Bamberger fails to realize that the presence of the real white golfers in front of him fairly easily explains the absence of the imaginary black golfers who aren’t. What Doctorow is really saying, then, and thus by extension what the “cultural Left” is really saying, is that the specific answer to the question of where the black golfers are is irrelevant, because dead words matter more than live people—an idea, however, that seems difficult to square with the notion that, as the slogan has it, black lives matter.

Golfers or not.