Striking Out

When a man’s verses cannot be understood … it strikes a man more dead than a great reckoning in a little room.
As You Like It. III, iii.

 

There’s a story sometimes told by the literary critic Stanley Fish about baseball, and specifically the legendary early twentieth-century umpire Bill Klem. According to the story, Klem is working behind the plate one day. The pitcher throws a pitch; the ball comes into the plate, the batter doesn’t swing, and the catcher catches it. Klem doesn’t say anything. The batter turns around and says (Fish tells us),

“O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.” What the batter is assuming is that balls and strikes are facts in the world and that the umpire’s job is to accurately say which one each pitch is. But in fact balls and strikes come into being only on the call of an umpire.

Fish is expressing here what is now the standard view of American departments of the humanities: the dogma (a word precisely used) known as “social constructionism.” As Fish says elsewhere, under this dogma, “what is and is not a reason will always be a matter of faith, that is of the assumptions that are bedrock within a discursive system which because it rests upon them cannot (without self-destructing) call them into question.” To many within the academy, this view is inherently liberating: the notion that truth isn’t “out there” but rather “in here” is thought to be a sub rosa method of aiding the political change that, many have thought, has long been due in the United States. Yet, while joining the “social construction” bandwagon is certainly the way towards success in the American academy, it isn’t entirely obvious that it’s an especially good way to practice American politics: specifically, because the academy’s focus on the doctrines of “social constructionism” as a means of political change has obscured another possible approach—an approach also suggested by baseball. Or, to be more precise, suggested by the World Series of 1904 that didn’t happen.

“He’d have to give them,” wrote Will Hively, in Discover magazine in 1996, “a mathematical explanation of why we need the electoral college.” The article describes how one Alan Natapoff, a physicist at the Massachusetts Institute of Technology, became involved in the question of the Electoral College: the group, assembled once every four years, that actually elects an American president. (For those who have forgotten their high school civics lessons, the way an American presidential election works is that each American state elects a number of “electors” equal to that state’s representation in Congress; i.e., the number of representatives each state is entitled to by population, plus its two senators. Those electors then meet to cast their votes in what is the actual election.) The Electoral College has been derided for years: the House of Representatives introduced a constitutional amendment to abolish it in 1969, for instance, while at about the same time the American Bar Association called the college “archaic, undemocratic, complex, ambiguous, indirect, and dangerous.” Such criticisms have a point: as has been seen a number of times in American history (most recently in 2000), the Electoral College makes it possible to elect a president without a majority of the votes. But to Natapoff, such criticisms fundamentally miss the point because, according to him, they misunderstand the math.

The example Natapoff turned to in order to support his argument for the Electoral College was drawn from baseball. As Anthony Ramirez wrote in a New York Times article about Natapoff and his argument, also from 1996, the physicist’s favorite analogy is to the World Series—a contest in which, as Natapoff says, “the team that scores the most runs overall is like a candidate who gets the most popular votes.” But scoring more runs than your opponent is not enough to win the World Series, as Natapoff goes on to say: in order to become the champion baseball team of the year, “that team needs to win the most games.” And scoring runs is not the same as winning games.

Take, for instance, the 1960 World Series: in that contest, as Hively says in Discover, “the New York Yankees, with the awesome slugging combination of Mickey Mantle, Roger Maris, and Bill ‘Moose’ Skowron, scored more than twice as many total runs as the Pittsburgh Pirates, 55 to 27.” Despite that difference in production, the Pirates won the last game of the series (in perhaps the most exciting game in Series history—the only one that has ever ended with a ninth-inning, walk-off home run) and thus won the series, four games to three. Nobody would dispute, Natapoff’s argument runs, that the Pirates deserved to win the series—and so, similarly, nobody should dispute the legitimacy of the Electoral College.

Why? Because if, as Hively writes, in the World Series “[r]uns must be grouped in a way that wins games,” in the Electoral College “votes must be grouped in a way that wins states.” Take, for instance, the election of 1888—a famous case for political scientists studying the Electoral College. In that election, Democratic candidate Grover Cleveland gained over 5.5 million votes to Republican candidate Benjamin Harrison’s 5.4 million votes. But Harrison not only won more states than Cleveland, but also won states with more electoral votes: including New York, Pennsylvania, Ohio, and Illinois, each of which had at least six more electoral votes than the most populous state Cleveland won, Missouri. In this fashion, Natapoff argues that Harrison is like the Pirates: although he did not win more votes than Cleveland (just as the Pirates did not score more runs than the Yankees), still he deserved to win—on the grounds that what matters is not the total number of popular votes, but how those votes are spread around the country.

In this argument, then, games are to states as runs are to votes. It’s an analogy with an easy appeal: everyone feels they understand the World Series (just as everyone feels they understand Stanley Fish’s umpire story), and so that understanding appears to transfer easily to the matter of presidential elections. Yet, clever as the analogy is, most people misunderstand the purpose of the World Series: they think its task is to identify the best baseball team in the major leagues, but that is not what it is designed to do. The purpose of the World Series is not to discover the best team in baseball, but to put on an exhibition that will draw a large audience, and thus make a great deal of money. Or so said the New York Giants, in 1904.

As many people do not know, there was no World Series in 1904. A World Series, as baseball fans do know, is a competition between the champions of the National League and the American League—which, because the American League was only founded in 1901, meant that the first World Series was held in 1903, between the Boston Americans (soon to become the Red Sox) and the same Pittsburgh Pirates also involved in Natapoff’s example. But that series was merely a private agreement between the two clubs; it created no binding precedent. Hence, when in 1904 the Americans again won their league and the New York Giants won the National League—each achieving that distinction by winning more games than any other team over the course of the season—there was no requirement that the two teams had to play each other. And the Giants saw no reason to do so.

As legendary Giants manager, John McGraw, said at the time, the Giants were the champions of the “only real major league”: that is, the Giants’ title came against tougher competition than the Boston team faced. So, as The Scrapbook History of Baseball notes, the Giants, “who had won the National League by a wide margin, stuck to … their plan, refusing to play any American League club … in the proposed ‘exhibition’ series (as they considered it).” The Giants, sensibly enough, felt that they could not gain much by playing Boston—they would be expected to beat the team from the younger league—and, conversely, they could lose a great deal. And mathematically speaking, they were right: there was no reason to put their prestige on the line by facing an inferior opponent that stood a real chance to win a series that, for that very reason, could not possibly answer the question of which was the better team.

“That there is,” write Nate Silver and Dayn Perry in Baseball Between the Numbers: Why Everything You Know About the Game Is Wrong, “a great deal of luck involved in the playoffs is an incontrovertible mathematical fact.” But just how much luck is involved is something that the average fan hasn’t considered—though former Caltech physicist Leonard Mlodinow has. In Mlodinow’s book, The Drunkard’s Walk: How Randomness Rules Our Lives, the scientist writes that—just by virtue of doing the math—it can be concluded that “in a 7-game series there is a sizable chance that the inferior team will be crowned champion”:

For instance, if one team is good enough to warrant beating another in 55 percent of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups.

What Mlodinow means is this: let’s say that, for every game, we roll a one-hundred-sided die to determine whether the team with the 55 percent edge wins. If we do that four times, there’s still a good chance that the inferior team remains alive in the series: that is, that the superior team has not won all four games. In fact, there’s a real possibility that the inferior team might turn the tables and sweep the superior team instead. Seven games, in short, is simply not enough to demonstrate conclusively that one team is better than another.

In fact, in order to eliminate randomness as much as possible—that is, make it as likely as possible for the better team to win—the World Series would have to be much longer than it currently is: “In the lopsided 2/3-probability case,” Mlodinow says, “you’d have to play a series consisting of at minimum the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5 percent or less of the time.” In other words, even when one team has a two-thirds likelihood of winning any given game, it would take 23 games to push the weaker team’s chance of winning the series below 5 percent—and even then, the weaker team could take the series. Mathematically, then, winning a seven-game series is meaningless—there have been just too few games to eliminate the potential for a lesser team to beat a better team.

Just how mathematically meaningless a seven-game series is can be demonstrated by the case of two teams that are nearly evenly matched: “in the case of one team’s having only a 55-45 edge,” Mlodinow goes on to say, “the shortest statistically significant ‘world series’ would be the best of 269 games” (emphasis added). “So,” Mlodinow writes, “sports playoff series can be fun and exciting, but being crowned ‘world champion’ is not a very reliable indication that a team is actually the best one.” Which, as a matter of fact about the history of the World Series, is simply a point that true baseball professionals have always acknowledged: the World Series is not a competition, but an exhibition.
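Mlodinow’s figures are straightforward to check with a little arithmetic. The sketch below is mine, not his: a minimal binomial model that treats each game as an independent trial with a fixed win probability (ignoring home-field advantage, pitching matchups, and the like) and computes the chance that the weaker team takes a best-of-n series.

```python
from math import comb

def weaker_wins_best_of(n_games, p_strong):
    """Chance the weaker team wins a best-of-n series, modeled as winning a
    majority of n games if all n were played (equivalent for the series winner)."""
    p_weak = 1 - p_strong
    wins_needed = n_games // 2 + 1
    return sum(comb(n_games, k) * p_weak**k * p_strong**(n_games - k)
               for k in range(wins_needed, n_games + 1))

print(round(weaker_wins_best_of(7, 0.55), 2))    # ~0.39: "about 4 times out of 10"
print(round(weaker_wins_best_of(7, 2/3), 2))     # ~0.17: "about once every 5 matchups"
print(round(weaker_wins_best_of(23, 2/3), 3))    # ~0.04-0.05: the best-of-23 threshold
print(round(weaker_wins_best_of(269, 0.55), 3))  # ~0.05: the best-of-269 "world series"
```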

What the New York Giants were saying in 1904 then—and Mlodinow more recently—is that establishing the real worth of something requires a lot of trials: many, many different repetitions. That’s something that all of us ought to know from experience: to learn anything, for instance, requires a lot of practice. (Even if the famous “10,000 hour rule” that New Yorker writer Malcolm Gladwell concocted for his book Outliers: The Story of Success has been complicated by the researchers whose original work Gladwell drew upon.) More formally, scientists and mathematicians call this the “Law of Large Numbers.”

What that law means, as the Encyclopedia of Mathematics defines it, is that “the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases.” Or, to use the more natural language of Wikipedia, “the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.” What the Law of Large Numbers implies is that Natapoff’s analogy between the Electoral College and the World Series just might be correct—though for the opposite reason from the one Natapoff intended. Namely, if the Electoral College is like the World Series, and the World Series is not designed to find the best team in baseball but is instead merely an exhibition, then that implies that the Electoral College is not a serious attempt to find the best president—because what the Law would appear to advise is that, in order to obtain a better result, it is better to gather more voters.
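As a quick illustration of the law (a minimal simulation of my own, not anything drawn from the Encyclopedia or Wikipedia entries quoted above), the observed frequency of heads on a fair coin drifts toward its expected value of 0.5 as the number of flips grows:

```python
import random

# Law of Large Numbers, minimally: the observed frequency of heads on a fair
# coin (expected value 0.5) tends toward 0.5 as the number of flips increases.
random.seed(0)
for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} flips: frequency of heads = {heads / n:.3f}")
```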

Yet the currently fashionable dogma of the academy, it would seem, is expressly designed to dismiss that possibility: if, as Fish says, “balls and strikes” (or just things in general) are the creations of the “umpire” (also known as a “discursive system”), then it is very difficult to confront the wrongheadedness of Natapoff’s defense of the Electoral College—or, for that matter, the wrongheadedness of the Electoral College itself. After all, what does an individual run matter—isn’t what’s important the game in which it is scored? Or, to put it another way, isn’t it more important where (to Natapoff, in which state; to Fish, less geographically inclined, in which “discursive system”) a vote is cast, rather than whether it was cast? To many, if not most, literary-type intellectuals, the answer (the former at the expense of the latter) is clear; but as any statistician will tell you, it’s possible for a run of luck to continue for quite a bit longer than the average person might expect. (That’s one reason why it takes at least 23 games to minimize the randomness between two closely matched baseball teams.) Even so, it remains difficult to believe—as it would seem that many today, both within and without the academy, do—that the umpire can continue to call every pitch a strike.

 


Beams of Enlightenment

And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?
Matthew 7:3

 

“Do you know what Pied Piper’s product is?” the CEO of the company, Jack Barker, asks his CTO, Richard, during a scene in HBO’s series Silicon Valley—while two horses do, in the background, what Jack is (metaphorically) doing to Richard in the foreground. Jack is the experienced hand brought in to run the company that Richard founded as a young programmer; Richard, for his part, is so ingenuous that Jack has to explain to him the real point of everything they are doing: “The product isn’t the platform, and the product isn’t your algorithm either, and it’s not even the software. … Pied Piper’s product is its stock. Whatever makes the value of that stock go up, that is what we’re going to make.” With that, the television show effectively dramatizes the case many on the liberal left have been trying to make for decades: that the United States is in trouble because of something called “financialization”—or what Kevin Phillips (author of 1969’s The Emerging Republican Majority) has called, in one of the first uses of the term, “a prolonged split between the divergent real and financial economies.” Yet few on that side of the political aisle have considered how their own arguments about an entirely different subject are, more or less, the same as those powering “financialization”—how, in other words, the argument that has enhanced Wall Street at the expense of Main Street—Eugene Fama’s “efficient market hypothesis”—is precisely the same as the liberal left’s argument against the SAT.

That the United States has turned from an economy largely centered around manufacturing to one that centers on services, especially financial ones, can be measured by such data as the fact that the total fraction of America’s Gross Domestic Product consumed by the financial industry is now, according to economist Thomas Philippon of New York University, “around 9%,” while just more than a century ago it was under two percent. Most appear to agree that this is a bad thing: “Our economic illness has a name: financialization,” Time magazine columnist Rana Foroohar argues in her Makers and Takers: The Rise of Finance and the Fall of American Business, while Bruce Bartlett, who worked in both the Reagan and George H.W. Bush Administrations (which is to say that he is not exactly the stereotypical lefty), claimed in the New York Times in 2013 that “[f]inancialization is also an important factor in the growth of income inequality.” In a 2007 Bloomberg News article, Lawrence E. Mitchell—a professor of law at George Washington Law School—denounced how “stock market considerations” have come “to trump those that improve the actual workings of a business.” The consensus view appears to be that it is bad for a business to be, as Jack is on Silicon Valley, more concerned with its stock price than with what it actually does.

Still, if it is such a bad idea, why do companies do it? One possible answer might be found in the timing, since the shift seems to have happened some time after the 1960s: as John Bellamy Foster put it in a 2007 Monthly Review article entitled “The Financialization of Capitalism,” the “fundamental issue of a gravitational shift toward finance in capitalism as a whole … has been around since the late 1960s.” Undoubtedly, that turn was conditioned by numerous historical forces, but it’s also true that it was during the 1960s that the “efficient market hypothesis,” pioneered above all by the research of Eugene Fama of the University of Chicago, became the dominant intellectual force in the study of economics and in business schools—the incubators of the corporate leaders of today. And Fama’s argument was—and is—an intellectual cruise missile aimed at the very idea that the value of a company might be separate from its stock price.

As I have discussed previously (“Lions For Lambs”), Eugene Fama’s 1965 paper “The Behavior of Stock-Market Prices” demonstrated that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers”—or in other words, that there was no rational way to beat the stock market. Also known as the “efficient market hypothesis,” the idea is largely that—as Fama’s intellectual comrade Burton Malkiel observed in his book, A Random Walk Down Wall Street (which has gone through more than five editions since its first publication in 1973)—“the evidence points mainly toward the efficiency of the market in adjusting so rapidly to new information that it is impossible to devise successful trading strategies on the basis of such news announcements.” Translated, that means that it’s essentially impossible to do better than the market by paying close attention to what investors call a company’s “fundamental value.”

Yet, if there’s never a divergence between a company’s real worth and the price of its stock, that means there is no way to measure a company’s real worth other than by its stock price. From Fama’s or Malkiel’s perspective, “stock market considerations” simply are “the actual workings of a business.” They argued against the very idea that there even could be such a distinction: that there could be something about a company that is not already reflected in its price.

To a lot of educated people on the liberal-left, of course, such an argument will affirm many of their prejudices: against the evils of usury, and the like. At the same time, however, many of them might be taken aback if it’s pointed out that Eugene Fama’s case against fundamental economic analysis is the same as the case many educators make, when it comes to college admissions, against the SAT. Take, for example, a 1993 argument made in The Atlantic by Stanley Fish, former chairman of the English Department at Duke University and dean of the humanities at the University of Illinois at Chicago.

In “Reverse Racism, or, How the Pot Got to Call the Kettle Black,” the Miltonist argued against noted conservative Dinesh D’Souza’s contention, in 1991’s Illiberal Education, that affirmative action in college admissions tends “‘to depreciate the importance of merit criteria.’” The evidence that D’Souza used to advance that thesis is, Fish tells us, the “many examples of white or Asian students denied admission to colleges and universities even though their SAT scores were higher than the scores of some others—often African-Americans—who were admitted to the same institution.” But, Fish says, the SAT has been attacked as a means of college admissions for decades.

Fish cites David Owen’s None of the Above: Behind the Myth of Scholastic Aptitude as an example. There, Owen says that the

correlation between SAT scores and college grades … is lower than the correlation between height and weight; in other words, you would have a better chance of predicting a person’s height by looking at his weight than you would of predicting his freshman grades by looking only at his SAT scores.

As Fish intimates, most educational professionals these days would agree that the way to judge a student is not by SAT score, but by GPA—grade point average.

To judge students by grade point average, however, is just what the SAT was designed to avoid: as Nicholas Lemann describes in copious detail in The Big Test: The Secret History of the American Meritocracy, the whole purpose of the SAT was to discover students whose talents couldn’t be discerned by any other method. The premise of the test’s designers, in short, was that students possessed, as Lemann says, “innate abilities”—and that the SAT could suss those abilities out. What the SAT was designed to do, then, was to find those students stuck in, say, some lethargic, claustrophobic small town whose public schools could not, perhaps, do enough for them intellectually and who stagnated as a result—and put those previously unknown abilities to work in the service of the nation.

Now, as Lemann remarked in an interview with PBS’ Frontline, James Conant (president of Harvard and chief proponent of the SAT at the time it became prominent in American life, in the early 1950s) “believed that you would look out across America and you would find just out in the middle of nowhere, springing up from the good American soil, these very intelligent, talented people”—if, that is, America adopted the SAT to do the “looking out.” The SAT would enable American universities to find students that grade point averages could not—a premise that, necessarily, entails believing that a student’s worth could be more than (and thus distinguishable from) her GPA. That’s what, after all, “aptitude” means: “potential ability,” not “proven ability.” That’s why Conant sometimes asked those constructing the test, “Are you sure this is a pure aptitude test, pure intelligence? That’s what I want to measure, because that is the way I think we can give poor boys the best chance and take away the advantage of rich boys.” The Educational Testing Service (the company that administered the SAT), in sum, believed that there could be something about a student that was not reflected in her grades.

To use an intellectual’s term, that means that the argument against the SAT is isomorphic with the “efficient market hypothesis.” In biology, two structures are isomorphic with each other if they share a form or structure: a human eye is isomorphic with an insect’s eye because they both take in visual information and transmit it to the brain, even though they have different origins. Hence, as biologist Stephen Jay Gould once remarked, two arguments are isomorphic if they are “structurally similar point for point, even though the subject matter differs.” Just as Eugene Fama argued that a company could not be valued other than by its stock price—which has had the effect of making a company’s product not whatever business it is nominally in, but its stock price—educational professionals have argued that the only way to measure a student’s value is to look at her grades.

Now, does that mean that the “financialization” of the United States’ economy is the fault of the liberal left, instead of the usual conservative suspects? Or, to put it more provocatively, is the rise of the 1% at the expense of the 99% the fault of affirmative action? The short answer, obviously, is that I don’t have the slightest idea. (But then, neither do you.) What it does mean, I think, is that at least some of what’s happened to the United States in the past several decades is due to patterns of thought common to both sides of the American political congregation: most perniciously, the related notions that all value is always and everywhere visible, and that it takes no time and patience for value to manifest itself—and that at least some of the erosion of the contrary ideas (that value can be hidden, and that it takes time to show itself) is due to the efforts of people who meant well. Granted, it’s always hardest to admit wrongdoing when not only were your intentions pure but the immediate effects were good as well—yet such an admission is also that much more powerful. The point, anyway, is that if you are trying to persuade, it’s probably best to avoid that other four-letter word associated with horses.

 

 

Lions For Lambs

And the remnant of Jacob shall be among the Gentiles in the midst of many people as a lion among the beasts of the forest, as a young lion among the flocks of sheep …
Micah 5:8

Micah was the first prophet to predict the downfall of Jerusalem. According to him, the city was doomed because its beautification was financed by dishonest business practices, which impoverished the city’s citizens. He also called to account the prophets of his day, whom he accused of accepting money for their oracles.
“Micah.” Wikipedia.

 

“Before long I’ll be dead, and you and your brother and your sister and all of her children, all of us dead, all of us rotting underground,” says the villainous patriarch of the aristocratic Lannister clan, Tywin, to his son Jaime in a conversation during the first season of the hit HBO show, Game of Thrones. “It’s the family name that lives on,” Tywin continues—a sentence that not only does much to explain the popularity of the show, but also overturns the usual explanation for that interest: the narrative uncertainty, or the way in which, at least in the first several seasons, it was never obvious which characters were the heroes, and so would survive to the end of the tale. But if Tywin is right, the attraction of the show isn’t that it is so unpredictable. It’s rather that the show’s uncertainty about the various characters’ fates is balanced by a matching certainty that they are in peril: either from the political machinations that end up destroying many of the characters the show had led us to think were protagonists (Ned and his son Robb Stark in particular)—or from the horror that, as the opening minutes of the show’s very first episode display, has awakened in the frozen north of Thrones’ fictional world. Hence, the uncertainty about what is going to happen is mirrored by a certainty that something will happen—a certainty signified by the motto of the family to which many fan-favorite characters belong, House Stark: “Winter is Coming.” It’s that motto, I think, that furnishes much of the show’s power—because it is such a direct riposte to much of today’s conventional wisdom, a dogma that unites the supposed “radical left” of the contemporary university with their seeming ideological opposites: the financial elite of Wall Street.

To put it plainly, the relevant division in America today is not between Republicans and Democrats, but instead between those who (still) think the notion encapsulated by the phrase “Winter Is Coming” matters—and those who don’t. For the idea contained within the phrase “Winter Is Coming,” after all, is much older than George Martin’s series of fantasy novels. It is, for example, much the same as an idea expressed by the English writer George Orwell, author of 1984 and Animal Farm, in 1946:

… we are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.

What Orwell expresses here, I’d say, is the Stark idea—the idea that, sooner or later, one’s beliefs run up against reality, whether that reality comes in the form of the weather or war or something else. It’s the notion that, sooner or later, things converge towards reality: a notion that many contemporary intellectuals have abandoned. To them, the view expressed by Orwell and the Starks is what’s known as “foundationalism”: something that all recent students in the humanities have been trained, over the past several generations, to boo and hiss.

“Foundationalism,” according to Pennsylvania State University literature professor Michael Bérubé, for example—a person I often refer to because, unlike a lot of others, he at least expresses what he’s saying clearly, and also because he represents a university well-known for its commitment to openness and transparency and occasionally less-than-enthusiastic opposition to child abuse—is the notion that there is a “principle that is independent of all human minds.” That is opposed, for people who think about this sort of thing, to “antifoundationalism”: the idea that a lot of stuff (maybe everything) is simply a matter of “human deliberation and consensus.” Also known as “social constructionism,” it’s an idea that Orwell, or the Starks, would have looked at askance: winter, for instance, doesn’t particularly care what people think about it, and while war is like both a seminar and a hurricane, the things that happen in war—like, say, having the technology to turn an entire city into a fireball—are not appreciably different from the impact of a tsunami.

Within the humanities, however, the “anti-foundationalist” or “social constructionist” idea has largely taken the field. “Notwithstanding,” as literature professor Mark Bauerlein of Emory University has remarked, “the diversity trumpeted by humanities departments these days, when it comes to conceptions of knowledge, one standpoint reigns supreme: social constructionism.” To those who hold it, it is a belief that straightforwardly powers what Bauerlein calls “a moral obligation to social justice”: in this view, either you are on the side of antifoundationalism, or you are a yahoo who thinks that the problem with the world is that there isn’t enough Donald Trump in it. Yet antifoundationalism, the idea that everything is a matter of human discussion, is not so obviously on the side of good as the professors of the nation’s universities appear to believe.

In fact, while Bauerlein says that this dogma is “a party line, a tribal glue distinguishing humanities professors from their colleagues in the business school, the laboratory, the chapel, and the computing center, most of whom believe that at least some knowledge is independent of social conditions,” there’s actually good reason to think that a disbelief in an underlying reality isn’t all that unfamiliar to the business school. Arguably, there’s no portion of the university that pays more homage to the dogma of “social construction” than the business school.

Take, for instance, the idea Eugene Fama has built his career upon: the “random walk” theory of the stock market, also known as the “efficient market hypothesis.” Today, Fama is a Nobel laureate (well, winner of the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, a prize not established by Alfred Nobel in his 1895 will), a professor at the University of Chicago’s Booth School of Business, and the so-called “Father of Finance,” but in 1965 he was an obscure graduate student—at least, until he wrote the paper that established him within his profession that year, “The Behavior of Stock-Market Prices.” In that paper, Fama argued that “the future path of the price level of a security is no more predictable than the path of a series of cumulated random numbers,” which had the consequence that “the series of price changes has no memory.” (Which is what stock prospectuses mean when they say that “past performance cannot predict future performance.”) What Fama meant was that, no matter how many times he went back over the data, he could find no means by which to predict the future path of a particular stock. Hence he concluded that, when it comes to the market, “the past cannot be used to predict the future in any meaningful way”—an idea with some notably anti-foundationalist consequences.
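What a path of “cumulated random numbers” looks like is easy to sketch; the toy model below is my own illustration, not Fama’s data or his statistical tests. Each day’s price change is drawn independently of every change before it, which is all that “the series of price changes has no memory” requires:

```python
import random

# A toy "random walk" price path: each day's change is an independent draw,
# so the series of changes "has no memory"; yesterday's move tells you
# nothing about tomorrow's.
random.seed(42)
price = 100.0
for _ in range(250):                # roughly one trading year
    price += random.gauss(0, 1)     # today's shock ignores all earlier ones
print(f"price after a year of memoryless changes: {price:.2f}")
```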

Those consequences can be viewed in such papers as Fama’s 2010 study with colleague Kenneth French: “Luck versus Skill in the Cross-Section of Mutual Fund Returns”—a study that set out to examine whether it was true that the managers of mutual funds can actually do what they claim they can do, and outperform the stock market. In “Luck versus Skill,” Fama and French say that the evidence shows those managers can’t: “For fund investors the … results are disheartening,” because “few active funds produce … returns that cover their costs.” Maybe there are really intelligent people out there who are smarter than the market, Fama is suggesting—but if there are, he can’t find them.

Now, so far Fama’s idea might sound pretty unexceptional: to readers of this blog, it might even sound like common sense. It’s a fairly close idea to the one explored, for instance, by psychologist Amos Tversky and his co-authors in the paper, “The Hot Hand in Basketball,” which was about how what appeared to be a “hot,” or “clutch,” basketball shooter was simply an effect of randomness: if your skill level is such that you expect to make a certain percentage of your shots, then—simply through the laws of probability—it is likely that you will make a certain number of baskets in a row. Similarly, if there are enough mutual funds in the market, some number of them will have gaudy track records to report: “Given the multitude of funds,” as Fama writes, “many have extreme returns by chance.” If there are enough participants in any competition, some will be winners—or to put it another way, if a monkey throws enough shit at a wall, some of it will stick.
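The point can be made concrete with a toy simulation (mine, not Fama and French’s actual bootstrap methodology): give thousands of entirely skill-less “funds” a coin-flip chance of beating the market each year, and a handful will still post spotless ten-year records.

```python
import random

# 10,000 funds with zero skill: each has a 50/50 chance of beating the market
# in any given year. Count how many beat it ten years straight purely by luck.
# Expected: 10,000 / 2**10, i.e. roughly ten such "extreme" track records.
random.seed(1)
lucky = sum(
    all(random.random() < 0.5 for _year in range(10))
    for _fund in range(10_000)
)
print(f"{lucky} of 10,000 skill-less funds show a perfect 10-year record")
```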

That, Fama might say, doesn’t mean that the monkey has somehow gotten in touch with Reality: if no one person can outperform the market, then there is nothing anyone can know that would help them to become a better stock-picker. What that must mean in turn is (as the Wikipedia article on the subject notes) that “market prices reflect all available information,” or that “stocks always trade at their fair value”—which is right about where the work of seemingly conservative professors in economics departments and business schools and that of their seemingly liberal opponents in departments of the humanities begin to converge.

Fama, after all, denies the existence of what are known as “bubbles”: “speculative bubbles, market bubbles, price bubbles, financial bubbles, speculative manias or balloons” as Wikipedia terms them. “Bubbles” describe situations in which a given asset—like, I don’t know, a house—is traded “at a price or price range that strongly deviates from the corresponding asset’s intrinsic value.” The classic example is the Dutch tulip craze of the seventeenth century, during which a single tulip bulb might have sold for ten times the yearly wage of a workman. (Other instances might be closer to the reader’s mind than that.) But according to Fama there can be no such thing as a “bubble”: when John Cassidy of The New Yorker said to Fama in an interview that the chief problem during the financial crisis of 2008 was that “there was a credit bubble that inflated and ultimately burst,” Fama replied by saying, “I don’t know what a credit bubble means. I don’t even know what a bubble means. These words have become popular. I don’t think they have any meaning.” Although a careful reader might note that what Fama is saying here amounts to the claim that there is a bubble in the concept of bubbles, what he intends is to deny that there are bubbles, and thus that there is any “intrinsic value” to a given asset.

It’s at this point, I think, that the connection between Eugene Fama’s contention about the “efficient market hypothesis” and the doctrine in the humanities known as “antifoundationalism” becomes clear: both are denials of the Starks’ “Winter Is Coming” motto. After all, a bubble only makes sense if there is some kind of “intrinsic,” or “foundational,” value to something; similarly, a “foundationalist” thinks that there is some nonhuman reality. But why does this obscure and esoteric doctrinal dispute among a few intellectuals matter, aside from being the latest turn of the wheel of fashion within the walls of the academy?

Well, it matters because what they are really discussing—the real meaning of “intrinsic value”—is whether to allow ordinary people to have any say about the future of their lives.

Many liberals, for instance, have warned about the Republican assault on the right to vote in such matters as the Supreme Court’s 2013 ruling in Shelby County v. Holder, which essentially gutted the Voting Rights Act of 1965, or the passage of “voter ID laws” in many states—sold as “protections” but in reality a means of preventing voting. What’s far less often discussed, however, is that intellectuals of the supposed academic left have begun—quietly, to be sure—to question the very idea of voting.

Cambridge don Mary Beard, for example—a scholar of the ancient world and avowed feminist—recently wrote a column for the London Review of Books concerning the “Brexit” referendum, in which the people of Great Britain decided whether to stay in the European Union or not. Beard’s sort—educated, with “progressive” opinions—thought that Britain ought to remain in the Union; when the results came in, however, the nation had decided to leave, or “Brexit.” “Handing us a referendum,” Beard wrote in response, “is not a way to reach a responsible decision”—“for God’s sake,” one can almost hear Beard lecturing, “how can you let an important decision be up to the [insert condescending adjective here] voters?” But while that might sound like a one-time response to a very particular situation, in fact many smart people who share Beard’s general views also share her distrust of elections.

What is an election, anyway, but an event analogous to a battle, or a hurricane? To people inclined to dismiss the significance of real events, it’s easy enough to dismiss the notion of elections. “Importantly”—wrote Princeton University’s Laurance S. Rockefeller Professor of Politics, Stephen Macedo, recently—“majority rule is not a fundamental principle of either democracy or fairness, nor is it required by any basic principle of democracy or fairness.” According to Macedo, “the basic principle of democracy” isn’t elections, but instead “political equality,” or a “respect [for] minority rights and … fair and inclusive deliberation.” In other words, so long as “minority rights” are respected and there is “fair and inclusive deliberation,” it doesn’t matter if anyone votes or not—which is to say that, to very many smart and supposedly “liberal” or “leftist” people, the very notion that voting has any “intrinsic value” at all has become irrelevant.

That, more or less, is what the characters on Game of Thrones think too. After all, as Tywin says to Jaime at one point during the conversation I began this essay with, a “lion doesn’t concern himself with the opinion of a sheep.” Which, one supposes, is not a very surprising sentiment on a show that, while it sometimes depicts dragons and magic, mostly concerns the doings of a handful of aristocrats in a feudal age. What might be pretty surprising, however—depending on your level of distrust—is that, today, a great many of the people entrusted to be society’s shepherds appear to agree with them.

Double Vision

Ill deeds are doubled with an evil word.
The Comedy of Errors. III, ii

The century just past had been both one of the most violent ever recorded—and also perhaps the highest flowering of civilized achievement since Roman times. A great war had just ended, and the danger of starvation and death had receded for millions; new discoveries in agriculture meant that many more people were surviving into adulthood. Trade was becoming more than a local matter; a pioneering Westerner had just re-established a direct connection with China. As well, although most recent contact with Europe’s Islamic neighbors had been violent, there were also signs that new intellectual contacts were being made; new ideas were circulating from foreign sources, putting in question truths that had been long established. Under these circumstances a scholar from one of the world’s most respected universities made—or said something that allowed his enemies to make it appear he had made—a seemingly astonishing claim: that philosophy, reason, and science taught one kind of truth, and religion another, and that there was no need to reconcile the two. A real intellect, he implied, had no obligation to be correct: he or she had only to be interesting. To many among his audience that appeared to be the height of both sheer brainpower and politically efficacious intellectual work—but then, none of them were familiar with either the history of German auto-making, or the practical difficulties of the office of the United States Attorney for the Southern District of New York.

Some literary scholars of a previous generation, of course, will get the joke: it’s a reference to then-Johns Hopkins University Miltonist Stanley Fish’s assertion, in his 1976 essay “Interpreting ‘Interpreting the Variorum,’” that, as an interpreter, he has no “obligation to be right,” but “only that [he] be interesting.” At the time, the profession of literary study was undergoing a profound struggle to “open the canon” to a wide range of previously neglected writers, especially members of minority groups like African-Americans, women, and homosexuals. Fish’s remark, then, was meant to allow literary scholars to study those writers—many of whom would have been judged “wrong” according to previous notions of literary correctness. By suggesting that the proper frame of reference was not “correct/incorrect,” or “right/wrong,” Fish implied that the proper standard was instead something less rigid: a criterion that thus allowed new pieces of writing to be imported and new ideas to flourish. Fish’s method, in other words, might appear to be an elegant strategy that allowed for, and resulted in, an intellectual flowering in recent decades: the canon of approved books has been revamped, and a lot of people who probably would not have been studied—along with a lot of people who might not have done the studying—entered the curriculum who might not have, had the change of mind Fish’s remark signified not become standard in American classrooms.

I put things in the somewhat cumbersome way I do in the last sentence because of course Fish’s line did not arrive in a vacuum: the way had been prepared in American thought long before 1976. Forty years prior, for example, F. Scott Fitzgerald had claimed, in his essay “The Crack-Up” for Esquire, that “the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In 1949 Fitzgerald’s fellow novelist, James Baldwin, similarly asserted that “literature and sociology are not the same.” And thirty years after Fish’s essay, the notion had become so accepted that American philosopher Richard Rorty could casually say that the “difference between intellectuals and the masses is the difference between those who can remember and use different vocabularies at the same time, and those who can remember only one.” So when Fish wrote what he wrote, he was merely putting down something that a number of American intellectuals had been privately thinking for some time—a notion that has, sometime between then and now, become American conventional wisdom.

Even some scientists have come to accept some version of the idea: before his death, the biologist Stephen Jay Gould promulgated the notion of what he called “non-overlapping magisteria”: the idea that while science might hold to one version of truth, religion might hold another. “The net of science,” Gould wrote in 1997, “covers the empirical universe,” while the “net of religion extends over questions of moral meaning and value.” Or, as Gould put it more flippantly, “we [i.e., scientists] study how the heavens go, and they [i.e., theologians] determine how to go to heaven.” “Science,” as medical doctor (and book reviewer) John Carmody put the point in The Australian earlier this year, “is our attempt to understand the physical and biological worlds of which we are a part by careful observation and measurement, followed by rigorous analysis of our findings,” while religion “and, indeed, the arts are, by contrast, our attempts to find fulfilling and congenial ways of living in our world.” The notion, then, that there are two distinct “realms” of truth is a well-accepted one: nearly every thinking, educated person alive today subscribes to some version of it. Indeed, it’s a belief that appears necessary to the pluralistic, tolerant society that many believe the United States is—or should be.

Yet, the description with which I began this essay, although it does in some sense apply to Stanley Fish’s United States of the 1970s, also applies—as the learned knew, but did not say, at the time of Fish’s 1976 remark—to another historical era: Europe’s thirteenth century. At that time, just as during Fish’s, the learned of the world were engaged in trying to expand the curriculum: in this case, they were attempting to recoup the work of Aristotle, largely lost to the West since the fall of Rome. But the Arabs had preserved Aristotle’s work: “In 832,” as Arthur Little, of the Jesuits, wrote in 1947, “the Abbaside Caliph, Almamun,” had the Greek’s work translated “into Arabic, roughly but not inaccurately,” in which language Aristotle’s works “spread through the whole Moslem world, first to Persia in the hand of Avicenna, then to Spain where its greatest exponent was Averroes, the Cordovan Moor.” In order to read and teach Aristotle without interference from the authorities, Little tells us, Averroes (Ibn Rushd) decided that “Aristotle’s doctrine was the esoteric doctrine of the Koran in opposition to the vulgar doctrine of the Koran defended by the orthodox Moslem priests”—that is, the Arabic scholar decided that there was one “truth” for the masses and another, far more subtle, for the learned. Averroes’ conception was, in turn, imported to the West along with the works of Aristotle: if the ancient Greek was at times referred to as the Master, his Arabic disciple was referred to as the Commentator.

Eventually, Aristotle’s works reached Paris, and the university there, sometime towards the end of the twelfth century. Gerard of Cremona, for example, had translated the Physics into Latin from the Arabic of the Spanish Moors sometime before he died in 1187; others had translated various parts of Aristotle’s Greek corpus either just before or just afterwards. For some time, it seems, they circulated in samizdat fashion among the young students of Paris: not part of the regular curriculum, but read and argued over by the brightest, or at least most well-read. At some point, they encountered a young man who would become known to history as Siger of Brabant—or perhaps rather, he encountered them. And like many other young, studious people, Siger fell in love with these books.

It’s a love story, in other words—and one that, like a lot of other love stories, has a sad, if not tragic, ending. For what Siger was learning by reading Aristotle—and Averroes’ commentary on Aristotle—was nearly wholly incompatible with what he was learning in his other studies through the rest of the curriculum—an experience that he was not, as the experience of Averroes before him had demonstrated, alone in having. The difference, however, is that whereas most other readers and teachers of the learned Greek sought to reconcile him to Christian beliefs (despite the fact that Aristotle long predated Christianity), Siger—as Richard E. Rubenstein puts it in his Aristotle’s Children—presented “Aristotle’s ideas about nature and human nature without attempting to reconcile them with traditional Christian beliefs.” And even more: as Rubenstein remarks, “Siger seemed to relish the discontinuities between Aristotelian scientia and Christian faith.” At the same time, however, Siger also held—as he wrote—that people ought not “try to investigate by reason those things which are above reason or to refute arguments for the contrary position.” But assertions like this also left Siger vulnerable.

Vulnerable, that is, to the charge that what he and his friends were teaching was what Rubenstein calls “the scandalous doctrine of Double Truth.” Or, in other words, the belief that a proposition “could be true scientifically but false theologically, or the other way round.” Whether Siger and his colleagues did, or did not, hold to such a doctrine—there have been arguments about the point for centuries now—isn’t really material, however: as one commenter, Vincent P. Benitez, has put it, either way Siger’s work highlighted just how the “partitioning of Christian intellectual life in the thirteenth century … had become rather pronounced.” So pronounced, in fact, that it suggested that many supposed “intellectuals” of the day “accepted contradictories as simultaneously true.” And that—as it would not to F. Scott Fitzgerald later—posed a problem to the medievals, because it ran up against a rule of logic.

And not just any rule of logic: it’s the one that Aristotle himself said was the most essential to any rational thought whatever. That rule is known as the Law of Non-Contradiction, traditionally placed as the second of the three classical laws of logic in the ancient world. (The others being the Law of Identity—A is A—and the Law of the Excluded Middle—either A is A or it is not-A.) As Aristotle himself put it, the “most certain of all basic principles is that contradictory propositions are not true simultaneously.” Or—as another of Aristotle’s Arabic commentators, Avicenna (Ibn Sina), put it in one of its most famous formulations—the rule goes like this: “Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned.” In short, a thing cannot be both true and not true at the same time.
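For readers who like to see the rule written down formally, the law says that no proposition and its negation can hold together. Here is a minimal statement and machine-checked proof in the Lean theorem prover (my own illustration; nothing in Aristotle or Avicenna depends on it, of course):

```lean
-- The Law of Non-Contradiction: a proposition P and its negation ¬P cannot
-- both hold. Given a proof of both, applying the negation to the proposition
-- yields a contradiction (False).
theorem non_contradiction (P : Prop) : ¬ (P ∧ ¬ P) :=
  fun h => h.2 h.1
```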

Put in Avicenna’s way, of course, the Law of Non-Contradiction will sound distinctly horrible to most American undergraduates, perhaps particularly those who attend the most exclusive colleges: it sounds like—and, like a lot of things, has been—a justification for the worst kind of authoritarian, even totalitarian, rule, and even torture. In that sense, it might appear that attacking the law of non-contradiction could be the height of oppositional intellectual work: the kind of thing that nearly every American undergraduate attracted to the humanities aspires to do. Who is not, aside from members of the Bush Administration legal team (or, for that matter, nearly every regime known to history) and viewers of the television show 24, against torture? Who does not know that black-and-white morality is foolish, that the world is composed of various “shades of gray,” that “binary oppositions” can always be dismantled, and that it is the duty of the properly educated to instruct the lower orders in the world’s real complexity? Such views might appear obvious—especially if one is unfamiliar with the recent history of Volkswagen.

In mid-September of 2015, the Environmental Protection Agency of the United States issued a violation notice to the German automaker Volkswagen. The EPA had learned that, although the diesel engines Volkswagen built were passing U.S. emissions tests, they were doing it on the sly: each car’s software could detect when the car’s engine was being tested by government monitors, and if so could reduce the pollutants that engine was emitting. Just more than six months later, Volkswagen agreed to pay a settlement of 15.3 billion dollars in the largest auto-related class-action lawsuit in the history of the United States. That much, at least, is news; what interests me, however, about this story in relation to this talk about academics and monks was a curious article put out by The New Yorker in October of 2015. In that article, entitled “An Engineering Theory of the Volkswagen Scandal,” Paul Kedrosky—perhaps significantly, “a venture investor and a former equity analyst”—explains these events as perhaps not the result of “engineers … under orders from management to beat the tests by any means necessary.” Instead, the whole thing may simply have been the result of an “evolution” of technology that “subtly and stealthily, even organically, subverted the rules.” In other words, Kedrosky wishes us to entertain the possibility that the scandal ought to be understood in terms of the undergraduate’s idea of shades of gray.

Kedrosky takes his theory from a book by sociologist Diane Vaughan, about the Challenger space shuttle disaster of 1986. In her book, Vaughan describes how, over nine launches from 1983 onwards, the space shuttle organization had launched Challenger under colder and colder temperatures, until NASA’s engineers had “effectively declared the mildly abnormal normal,” Kedrosky says—and until, one very frigid January morning in Florida, the shuttle blew into thousands of pieces moments after liftoff. Kedrosky’s attempt at an analogy is that maybe the Volkswagen scandal developed similarly: “Perhaps it started with tweaks that optimized some aspect of diesel performance and then evolved over time.” If so, then “at no one step would it necessarily have felt like a vast, emissions-fixing conspiracy by Volkswagen engineers.” Instead—as this story goes—it would have felt like Tuesday.

The rest of Kedrosky’s thrust is relatively easy to play out, of course—because we have heard a similar story before. Take, for instance, another New Yorker story; this one, a profile of the United States Attorney for the Southern District of New York, Preet Bharara. Mr. Bharara, as the representative of the U.S. Justice Department in New York City, is in charge of prosecuting Wall Street types; because he took office in 2009, at the crest of the financial crisis that began in 2007, many thought he would end up arresting and charging a number of executives as a result of the widely acknowledged chicaneries involved in creating the mess. But as Jeffrey Toobin laconically observes in his piece, “No leading executive was prosecuted.” Even more notable, however, is the reasoning Bharara gives for his inaction.

“Without going into specifics,” Toobin reports, Bharara told him “that his team had looked at Wall Street executives and found no evidence of criminal behavior.” Sometimes, Bharara went on to explain, “‘when you see a bad thing happen, like you see a building go up in flames, you have to wonder if there’s arson’”—but “‘sometimes it’s not arson, it’s an accident.’” In other words, to Bharara, it’s entirely plausible to think of the entire financial meltdown of 2007-8, which ended three giant Wall Street firms (Bear Stearns, Merrill Lynch, and Lehman Brothers) and forced two government-sponsored mortgage giants (Fannie Mae and Freddie Mac) into federal conservatorship, and is usually thought to have been caused by predatory lending practices driven by Wall Street’s appetite for complex financial instruments, as essentially analogous to Diane Vaughan’s view of the Challenger disaster—or Kedrosky’s view of Volkswagen’s cavalier thoughts about environmental regulation. To put it another way, both Kedrosky and Bharara must possess, in Fitzgerald’s terms, “first-rate intelligences”: in Kedrosky’s version of Volkswagen’s actions or Bharara’s view of Wall Street, crimes were committed, but nobody committed them. They were both crimes and not-crimes at the same time.

These men can, in other words, hold opposed ideas in their heads simultaneously. To many, that makes these men modern—or even, to some minds, “post-modern.” Contemporary intellectuals like to cite examples—like the “rabbit-duck” illusion referred to by Wittgenstein, which can be seen as either a rabbit or a duck, or the “Schrödinger’s cat” thought experiment, in which the cat is neither dead nor alive until the box is opened, or the fact that light is both a wave and a particle—designed to show how out-of-date the Law of Noncontradiction is. In that sense, we might as easily blame contemporary physics as contemporary work in the humanities for Kedrosky’s or Bharara’s difficulties in saying whether an act was a crime or not—and for that matter, maybe the similarity between Stanley Fish and Siger of Brabant is merely a coincidence. Still, in the course of reading for this piece I did discover another apparent coincidence, in the same article by Arthur Little that I cited earlier. “Unlike Thomas Aquinas,” the Jesuit wrote in 1947, “whose sole aim was truth, Siger desired most of all to find the world interesting.” The similarity to Stanley Fish’s 1976 remarks about himself—that he has no obligation to be right, only to be interesting—is, I think, striking. Like Bharara, I cannot demonstrate whether Fish knew of this article of Little’s, written thirty years before his own remarks.

But then again, if I have no obligation to be right, what does it matter?

Human Events

Opposing the notion of minority rule, [Huger] argued that a majority was less likely to be wrong than a minority, and if this was not so “then republicanism must be a dangerous fallacy, and the sooner we return to the ‘divine rights’ of the kings the better.”
—Manisha Sinha. The Counterrevolution of Slavery. 2001.

Note that agreement [concordantia] is particularly required on matters of faith and the greater the agreement the more infallible the judgment.
—Nicholas of Cusa. Catholic Concordance. 1432.

 

It’s perhaps an irony, though a mild one, that on the weekend of the celebrations of American independence the most notable sporting events are the Tour de France, soccer’s European Championship, and Wimbledon—maybe all the more so now that Great Britain has voted to “Brexit,” i.e., to leave the European Union. A number of observers have explained that vote as at least somewhat analogous to the Donald Trump movement in the United States, in the first place because Donald himself called the “Brexit” decision a “great victory” at a press conference the day after the vote, and a few days later “praised the vote as a decision by British voters to ‘take back control of their economy, politics and borders,’” as The Guardian said Thursday. To the mainstream press, the similarity between the “Brexit” vote and Donald Trump’s candidacy is that—as Emmanuel Macron, France’s thirty-eight-year-old economy minister, said about “Brexit”—both are a conflict between those “content with globalization” and those “who cannot find” themselves within the new order. Both the Trump movement and the “Brexiters” are, in other words, depicted as returns of—as Andrew Solomon put it in The New Yorker on Tuesday—“the Luddite spirit that led to the presumed arson at Albion Mills, in 1791, when angry millers attacked the automation that might leave them unemployed.” “Trumpettes” and “Brexiters” are depicted as wholly out of touch and stuck in the past—yet, as a contrast between Wimbledon and the Tour de France may help illuminate, it could also be argued that it is precisely those who make sneering references both to Trump and to “Brexiters” who represent, not a smiling future, but instead the return of the ancien régime.

Before he outright won the Republican nomination through the primary process, after all, Trump repeatedly complained that the G.O.P.’s process was “rigged”: that is, hopelessly stacked against an outsider candidate. And while a great deal of what Trump has said over the past year has been, at best, ridiculously exaggerated when not simply an outright lie, for that contention Trump has a great deal of evidence: as Josh Barro put it in Business Insider (not exactly a lefty rag) back in April, “the Republican nominating rules are designed to ignore the will of the voters.” Barro cites the example of Colorado’s Republican Party, which decided in 2015 “not to hold any presidential preference vote”—a decision that, as Barro rightly says, “took power away from regular voters and handed it to the sort of activists who would be likely … [to] participat[e] in party conventions.” And Colorado’s G.O.P. was hardly alone in making, quite literally, anti-democratic decisions about the presidential nominating process over the past year: North Dakota also decided against holding a primary or even a caucus, while Pennsylvania did hold a vote—but voters could only choose uncommitted delegates, i.e., without knowing to whom those delegates owed allegiance.

Still, as Mother Jones—which is a lefty rag—observed, also back in April, this is an argument that can be worked as easily against Trump as for him: in New York’s primary, for instance, “Kasich and Cruz won 40 percent of the vote but only 4 percent of the delegates,” while on Super Tuesday Trump’s opponents “won 66 percent of the vote but only 57 percent of the delegates.” And so on. Other critics have similarly attacked the details of Trump’s arguments: many, as Mother Jones’ Kevin Drum says, have argued that the details of the Republican nominating process could just as easily be used as evidence for “the way the Republican establishment is so obviously in the bag for Trump.” Those critics do have a point: investigating the whole process is exceedingly difficult because the trees overwhelm any sense of the forest.

Yet such critics often use those details (about which they are right) to make an illicit turn. They have attacked, directly or indirectly, the premise of the point Trump tried to make in an op-ed piece in The Wall Street Journal this spring: that—as Nate Silver paraphrased it on FiveThirtyEight—“the candidate who gets the most votes should be the Republican nominee.” In other words, they swerve from the particulars of this year’s primary process toward attacking the very premises of democratic government itself: by disputing this or that particular they obscure the question of whether the will of the voters should be respected. Hence, even if Trump’s whole campaign is, at best, wholly misdirected, the point he is making—a point very similar to the one made by Bernie Sanders’ campaign—is not something to be treated lightly. But that, it seems, is something that elites, despite their protests, are skirting close to doing: which is to say that, despite the accusations that Trump is leading a fascistic movement, it is arguable that it is his supposedly “liberal” opponents who are far closer to authoritarianism than he is, because they have no respect for the sanctity of the ballot. Or, to put it another way, that it is Trump’s voters—and, by extension, those for “Brexit”—who have the cosmopolitan view, while it is his opponents who are, in fact, the provincialists.

The point, I think, can be seen by comparing the scoring rules of Wimbledon with those of the Tour de France. The Tour, as may or may not be widely known, is won by the rider who—as Patrick Redford at Deadspin put it the other day in “The Casual Observer’s Guide to the Tour de France”—has “the lowest time over all 21 stages.” Although the race takes place over nearly the whole nation of France, and several other countries besides, and covers over 2,000 miles from the cobblestone flats of Flanders to the heights of the Alps and down to the streets of Paris, the basic premise of the race is clear even to the youngest child: ride faster and win. Explaining Wimbledon, however—like explaining the rules of the G.O.P. nominating process (or, for that matter, the Democratic nominating process)—is not so simple.

As I have noted before in this space, the rules of tennis are not like those of cycling—or even of such familiar sports as baseball or football. In baseball and most other sports, including the Tour, the “score is cumulative throughout the contest … and whoever has the most points at the end wins,” as Allen Fox once described the difference between tennis and other games in Tennis magazine. But tennis is not like that: “The basic element of tennis scoring is the point,” as the mathematician G. Edgar Parker has noted, “but tennis matches are won by the player who wins two out of three (or three out of five) sets.” Sets are themselves accumulations of games, not points. During each game, points are won and lost until one player has not only won at least four points but also holds a two-point advantage over the other; games go back and forth until one player has that advantage. Then, at the set level, one player must win at least six games (and, depending on the tournament’s rules, may also need a two-game advantage) to take the set. Finally, a player needs to win at least two—and, as in some matches at Wimbledon, three—sets to take the match.
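Since the mechanics of that hierarchy carry the weight of the argument, here is a minimal sketch of it in Python (my own illustration, not anything drawn from Fox or Parker), simplified to a best-of-three match with no tiebreaks and no deuce conventions beyond the two-point margin just described. Fed a stream of individual points, it also shows the oddity at the heart of the comparison: the player who wins more points overall can still lose the match.

def other(player):
    return "B" if player == "A" else "A"

def tennis_match(points):
    """Score a best-of-three match from a sequence of point winners ("A" or "B").

    Simplified rules, as described above: a game goes to the first player with
    at least four points and a two-point margin; a set to the first with at
    least six games and a two-game margin (no tiebreak); the match to the
    first with two sets. Returns the winner and each player's total points.
    """
    totals = {"A": 0, "B": 0}
    sets = {"A": 0, "B": 0}
    games = {"A": 0, "B": 0}
    pts = {"A": 0, "B": 0}
    for p in points:
        totals[p] += 1
        pts[p] += 1
        if pts[p] >= 4 and pts[p] - pts[other(p)] >= 2:            # game won
            games[p] += 1
            pts = {"A": 0, "B": 0}
            if games[p] >= 6 and games[p] - games[other(p)] >= 2:  # set won
                sets[p] += 1
                games = {"A": 0, "B": 0}
                if sets[p] == 2:                                   # match won
                    return p, totals
    return None, totals

def game(winner, loser_points):
    """One game's worth of points: the loser takes a few, then the winner takes four."""
    return [other(winner)] * loser_points + [winner] * 4

points = []
for _ in range(6):                            # set 1: A sweeps six games without losing a point
    points += game("A", 0)
for _ in range(2):                            # sets 2 and 3: B edges each set seven games to five
    for _ in range(5):
        points += game("A", 0) + game("B", 2) # games alternate until five games all
    points += game("B", 2) + game("B", 2)     # B takes the last two games of the set

winner, totals = tennis_match(points)
print(winner, totals)                         # B wins the match; A won 92 points to B's 56

In this constructed example, Player A wins ninety-two points to Player B’s fifty-six, yet B takes the match two sets to one, because B’s points happen to fall where they decide games and sets.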

If the Tour de France were won the way Wimbledon is won, in other words, the winner would not be determined by whoever had the lowest overall time: the winner would be, at least on a first analysis, whoever won the most stages. But even that comparison would be too simple: if the Tour winner were determined by stages won, that would imply that every stage counted equally—and it is certainly not the case that all points, games, or sets in tennis are equal. “If you reach game point and win it,” as Fox writes in Tennis, “you get the entire game while your opponent gets nothing—all of the points he or she won in the game are eliminated.” The points in one game don’t carry over to the next game, and previous games don’t carry over to the next set. That means that some points, some games, and some sets are more important than others: “game point,” “set point,” and “match point” are common tennis terms that mean “the point whose winner may determine the winner of the larger category.” If tennis’ type of scoring system were applied to the Tour, in other words, the winner of the Tour would not be the overall fastest cyclist, nor even the cyclist who won the most stages, but the cyclist who won certain stages, say—or perhaps even certain moments within stages.
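To make the first half of that contrast concrete, here is a toy comparison in the same spirit (the stage times are invented, not real Tour results): three hypothetical stages scored two ways. Under the Tour’s actual rule of lowest cumulative time one rider wins; under a stages-won rule, borrowed from tennis’ winner-takes-the-game logic, the other does.

# Hypothetical stage times in minutes, purely for illustration.
stage_times = {
    "Rider A": [240, 241, 250],   # wins stages 1 and 2 by a minute, loses stage 3 by twelve
    "Rider B": [241, 242, 238],
}

# Rule 1, the Tour's: the lowest total time over all stages wins.
by_total_time = min(stage_times, key=lambda rider: sum(stage_times[rider]))

# Rule 2, tennis-like: the most stages won, each stage counting the same.
def stages_won(rider):
    return sum(
        1
        for stage in range(len(stage_times[rider]))
        if stage_times[rider][stage] == min(t[stage] for t in stage_times.values())
    )

by_stages_won = max(stage_times, key=stages_won)

print(by_total_time)    # Rider B, with 721 minutes to Rider A's 731
print(by_stages_won)    # Rider A, with two stage wins to Rider B's one

Identical performances, two different champions: the same structural fact this piece finds in the primary calendar and the Electoral College.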

Despite all the Sturm und Drang surrounding Donald Trump’s candidacy, then—the outright racism and sexism, the various moronic-seeming remarks concerning American foreign policy, not to mention the insistence that walls are more necessary to the American future than they even are to squash—there is one point on which he, like Bernie Sanders in the Democratic camp, is making cogent sense: the current process for selecting an American president is much more like a tennis match than it is like a bicycle race. After all, as Hendrik Hertzberg of The New Yorker once pointed out, Americans don’t elect their presidents “the same way we elect everybody else—by adding up all the voters’ votes and giving the job to the candidate who gets the most.” Instead, Americans have (as Ed Grabianowski puts it on the HowStuffWorks website) “a whole bunch of separate state elections.” And while both of these comments were directed at the presidential general election, which depends on the Electoral College, they apply equally, if not more, to the primary process: at least in the general election in November, each state’s rules are more or less the same.

The truth, and hence the power, of Trump’s critique of this process can be measured by the vitriol of the response to it. A number of people, on both sides of the political aisle, have attacked Trump (and Sanders) for drawing attention to the fashion in which the American political process works: when Trump pointed out that Colorado had refused to hold a primary, for instance, Reince Priebus, chairman of the Republican National Committee, tweeted (i.e., posted on Twitter, for those of you unfamiliar with, you know, the future), “Nomination process known for a year + beyond. It’s the responsibility of the campaigns to understand it. Complaints now? Give us all a break.” In other words, Priebus was implying that the rules were the same for all candidates, and widely known beforehand—so why the whining? Many on the Democratic side said the same about Sanders: as Albert Hunt put it in the Chicago Tribune back in April, both Trump and Sanders ought to shut up about the process: “Both [campaigns’] charges [about the process] are specious,” because “nobody’s rules have changed since the candidates entered the fray.” But as both the Trump and Sanders campaigns have rightly pointed out, the rules of a contest do matter beyond the bare fact that they are the same for every candidate: if the Tour de France were conducted under rules similar to tennis’, it seems likely that the race would be won by very different kinds of winners—sprinters, perhaps, who could husband their stamina until just the right moment. It is very difficult not to think that the criticism of Trump and Sanders as “whiners” is disingenuous—an obvious attempt to protect a process that transparently benefits insiders.

Trump’s supporters, like Sanders’ and those who voted “Leave” in the “Brexit” referendum, have been labeled “losers”—and while, to those who consider themselves “winners,” the thoughts of losers are (as the obnoxious phrase has it) like the thoughts of sheep to wolves, it seems indisputably true that the voters behind all three campaigns represent those for whom the global capitalism of the last several decades hasn’t worked so well. As Matt O’Brien noted in The Washington Post a few days ago, “the working class in rich countries have seen their real, or inflation-adjusted, incomes flatline or even fall since the Berlin Wall came down and they were forced to compete with all the Chinese, Indian, and Indonesian workers entering the global economy.” (Real economists would dispute O’Brien’s chronology here: at least in the United States, real wages have been essentially stagnant since the early 1970s, which far predates free-trade agreements like the North American Free Trade Agreement signed by Bill Clinton in the 1990s. But O’Brien’s larger argument, wrongheaded as it is in detail, instructively illustrates the muddleheadedness of the conventional wisdom.) In this fashion, O’Brien writes, “the West’s triumphant globalism” has “fuel[ed] a nationalist backlash”: “In the United States it’s Trump, in France it’s the National Front, in Germany it’s the Alternative for Germany, and, yes, in Britain it’s the Brexiters.” What’s astonishing about this, however, is that—despite not having, as so, so many articles decrying their horribleness have said, a middle-class sense of decorum—all of these movements stand for a principle that, you would think, the “intellectuals” of the world would applaud: the right of the people themselves to determine their own destiny.

It is they, in other words, who literally embody the principle enunciated by the opening words of the United States Constitution, “We the People,” or by the founding document of the French Revolution (which, by the by, began on a tennis court), The Declaration of the Rights of Man and the Citizen, whose first article holds that “Men are born and remain free and equal in rights.” In the world of this Declaration, in short, each person has—like every second ridden in the Tour de France, and unlike each point played during Wimbledon—precisely the same value. It’s a principle that Americans, especially, ought to remember this weekend of all weekends—a weekend that celebrates another Declaration, one whose most famous line reads “We hold these truths to be self-evident, that all men are created equal.” Americans, in other words, despite the success of individual Americans like John McEnroe or Pete Sampras or Chris Evert, are not tennis players, as Donald Trump (and Bernie Sanders) have rightly pointed out over the past year—tennis being a sport, as one history of the game has put it, “so clearly aligned with both The Church and Aristocracy.” Americans, citizens of the first modern nation in the world, ought instead to be associated with a sport unknown to the ancients and unthinkable without modern technology.

We are bicycle riders.