Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment and returning the right to elect U.S. Senators to state legislatures: the status quo of 1913. Yet why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment, puts the point—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it”? The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote at the launch of Vox in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. On that view, the Nazis must have succeeded because the German people were too poorly educated to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries from Germany, France, England, and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much higher percentage of the Germans profiled in such books (82%) had been exposed to university classes than their counterparts in other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like philosopher Martin Heidegger and legal theorist Carl Schmitt, who not only joined the party, but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s. He isn’t interested in the culpability of academics who directly supported the Nazis; by that standard, the culpability of elevator repairmen could just as well be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work, after all, is about just how terrible intellectuals have it: how many first novels are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere else in the world. Indeed, for much of its history before the Holocaust, Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required of the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to get off the ground, such beliefs may also have costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Galt Harpham, longtime director of the National Humanities Center—sometimes go even further and make the claim—as Harpham did in his 2011 book, The Humanities and the Dream of America—that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make nonsense of history: traditionally, after all, it’s been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by the mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
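
Bernoulli’s claim is easy to check directly. What follows is a minimal simulation sketch in Python (the trial counts are arbitrary, chosen purely for illustration): it flips a fair coin 10, 100, and 1,000 times and reports how far the observed share of heads tends to stray from one-half.

import random

def average_deviation(flips, trials=10_000):
    """Average distance of the heads-ratio from 0.5, over many repeated experiments."""
    total = 0.0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        total += abs(heads / flips - 0.5)
    return total / trials

# The deviation shrinks as the number of flips grows: Bernoulli's point, and Lewis's.
for n in (10, 100, 1_000):
    print(n, round(average_deviation(n), 4))

Run it and the ten-flip experiments wander far more than the thousand-flip ones, which is all the Law of Large Numbers asserts.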

For example, the Law of Large Numbers is why, as the statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when Vanguard founder John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because unions “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as in more “properly” scientific ones, it’s better to amass large quantities of results than to depend on small numbers of them.

What is voting, after all, but an act of sampling the opinion of the voters, and thereby an act necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—who called the result “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes the way Condorcet’s process actually works, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” marks the boundary of opinion between gamblers, Condorcet provides the justification for voting: his theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet, thereby, proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.
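
Richey’s twelve-juror illustration scales up in exactly the way Condorcet claimed, and a few lines of simulation make that visible. This is only a sketch: the 60 percent individual competence figure below is an assumption chosen for illustration, not a number taken from Condorcet or from Richey.

import random

def majority_correct(voters, p_correct, trials=5_000):
    """Estimate how often a simple majority of voters picks the better alternative."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p_correct for _ in range(voters))
        if correct > voters / 2:
            wins += 1
    return wins / trials

# Each voter is only modestly better than a coin flip, yet the majority
# becomes all but certain to be right as the electorate grows.
for n in (12, 101, 1_001):
    print(n, round(majority_correct(n, 0.6), 3))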

Key to the functioning of Condorcet’s machine, in turn, is large numbers of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than the (extremely small) pool of voters consisting of the state legislature alone.
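
Stated formally (this is the textbook form of Condorcet’s jury theorem, not a quotation from Austen-Smith and Banks): if each of an odd number n of voters independently chooses the better alternative with probability p > 1/2, then the probability that the majority does so is

P_n = \sum_{k=(n+1)/2}^{n} \binom{n}{k} \, p^{k} (1-p)^{n-k}, \qquad \lim_{n \to \infty} P_n = 1,

which is precisely the “approaches 1 as n goes to infinity” of the passage above.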

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science” (a people so trusting, James suggests, that they had “an unhealthy faith in the validity of statistical evidence”), to a people who have adopted “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically-skeptical 1970s (and beyond) needs to reckon with the increasingly large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and, given the well-documented decline in American social mobility, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlie them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth and early twentieth centuries constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by such “academic superstar” professors of humanities as Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.


Lex Majoris

The first principle of republicanism is that the lex majoris partis is the fundamental law of every society of individuals of equal rights; to consider the will of the society enounced by the majority of a single vote, as sacred as if unanimous, is the first of all lessons in importance, yet the last which is thoroughly learnt. This law once disregarded, there is no other but that of force, which ends necessarily in military despotism.
—Thomas Jefferson. Letter to Baron von Humboldt. 13 June 1817.

Since Hillary Clinton lost the 2016 American presidential election, many of her supporters have been quick to cry “racism” on the part of voters for her opponent, Donald Trump. According to Vox’s Jenée Desmond-Harris, for instance, Trump won the election “not despite but because he expressed unfiltered disdain toward racial and religious minorities in the country.” Aside from being the easier interpretation, because it allows Clinton voters to ignore the role their own economic choices may have played in the broad support Trump received throughout the country, such accusations are counterproductive even on their own terms because—only seemingly paradoxically—they reinforce many of the supports racism still receives in the United States: above all, because they weaken the intellectual argument for a national direct election for the presidency. By shouting “racism,” in other words, Hillary Clinton’s supporters may end up helping to continue racism’s institutional support.

That institutional support begins with the method by which Americans elect their president: the Electoral College—a method that, as many have noted, is not used in any other industrialized democracy. Although many scholars and others have advanced arguments for the existence of the college through the centuries, most of these “explanations” are, in fact, intellectually incoherent: while the most common of the traditional “explanations” concerns the differences between the “large states” and the “small,” for instance, in the actual United States—as James Madison, known as the “Father of the Constitution,” noted at the time—there was not then, and has not been since, a situation in American history that involved a conflict between larger-population and smaller-population states. Meanwhile, the other “explanations” for the Electoral College do not rise even to this level of coherence.

In reality there is only one explanation for the existence of the college, and that explanation has been most forcefully and clearly made by law professor Paul Finkelman, now serving as a Senior Fellow at the University of Pennsylvania after spending much of his career at obscure law schools like the University of Tulsa College of Law, the Cleveland-Marshall College of Law, and the Albany Law School. As Finkelman has been arguing for decades (his first papers on the subject were written in the 1980s), the Electoral College was originally invented by the delegates to the Constitutional Convention of 1787 in order to protect slavery. That such was the purpose of the College can be known, most obviously, because the delegates to the convention said so.

It’s important to remember that, by the time the means of electing a president were first debated, the convention had already decided, for the purposes of representation in the newly-created House of Representatives, to count black slaves by means of the infamous three-fifths ratio. That ratio, in turn, had its effect when it came to discussing the means of electing a president: delegates like James Madison argued, as Finkelman notes, that the existence of such a college—whose composition would be based on each state’s representation in the House of Representatives—would “guarantee that the nonvoting slaves could nevertheless influence the presidential election.” Or as Hugh Williamson, a delegate from North Carolina, observed during the convention, if American presidents were elected by direct national vote the South would be shut out of electing a national executive because “her slaves will have no suffrage”—that is, because in a direct vote all that would matter is the number of voters, the Southern states would lose the advantage the three-fifths ratio gave them in the House. The existence of the Electoral College is thus directly tied to the prior decision to grant Southern slave states an advantage in Congress; it is another in a string of institutional decisions made by convention delegates to protect domestic slavery.

Yet, assuming that Finkelman’s case for the racism of the Electoral College is true, how can decrying the racism of the American voter somehow inflict harm on the case for abolishing the Electoral College? The answer goes back to the very justifications of, not only presidential elections, but elections in general—the gradual discovery, during the eighteenth century Enlightenment, of what is today known as the Law of Large Numbers.

Putting the law in capital letters, I admit, tends to mystify it, but anyone who buys insurance already understands the substance of the concept. As New Yorker writer Malcolm Gladwell once explained insurance, “the safest and most efficient way to provide insurance” is “to spread the costs and risks of benefits over the biggest and most diverse group possible.” In other words, the more people participating in an insurance plan, the greater the possibility that the plan’s members will be protected. The Law of Large Numbers explains why that is.

That reason is the same as the reason that, as Peter Bernstein remarks in Against the Gods: The Remarkable Story of Risk, tossing a coin more and more times correspondingly increases the probability that the ratio of heads thrown to total throws will stay close to one-half. Or, the reason that—as physicist Leonard Mlodinow has pointed out—in order really to tell which baseball team is better than another a World Series would have to be at least 23 games long (if one team were much better than the other), and possibly as long as 269 games (between two closely-matched opponents). Only by playing so many games can random chance be confidently excluded: as Carl Bialik of FiveThirtyEight once pointed out, usually “in sports, the longer the contest, the greater the chance that the favorite prevails.” Or, as Israeli psychologists Daniel Kahneman and Amos Tversky put the point in 1971, “the law of large numbers guarantees that very large samples will indeed be representative”: it’s what scientists rely upon to know that, if they have performed enough experiments or pored over enough data, they know enough to exclude idiosyncratic results. The Law of Large Numbers asserts, in short, that the more times we repeat something, the closer we will approach its true value.
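
Mlodinow’s series-length figures can be reproduced with a short calculation. The sketch below is in Python; the per-game win probabilities of two-thirds and 55 percent are assumptions meant to stand in for his “much better” and “closely matched” cases, and the routine simply finds the shortest series in which the better team wins at least 95 percent of the time.

import math

def series_win_prob(p_game, games):
    """Probability that the better team takes a majority of an odd-length series."""
    need = games // 2 + 1
    return sum(math.comb(games, k) * p_game**k * (1 - p_game)**(games - k)
               for k in range(need, games + 1))

def games_needed(p_game, confidence=0.95):
    """Smallest odd series length at which the better team wins with the given confidence."""
    n = 1
    while series_win_prob(p_game, n) < confidence:
        n += 2  # keep the length odd so a majority always exists
    return n

print(games_needed(2 / 3))   # a clearly better team: roughly the 23-game figure
print(games_needed(0.55))    # two closely matched teams: roughly the 269-game figure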

It’s for just that reason that many have noted the connection between science and democratic government: “Science and democracy are powerful partners,” as the website for the Union of Concerned Scientists has put it. What makes the two such “powerful” partners is that the Law of Large Numbers is what underlies the act of holding elections: as James Surowiecki put the point in his book The Wisdom of Crowds, the theory of democracy is that “the larger the group, the more reliable its judgment will be.” Just as scientists think that, by replicating an experiment, they can more readily trust in its results, so too does a democratic government implicitly assume that, by including more people in the decision-making process, it can more readily arrive at the “correct” solution: as James Madison put it in The Federalist No. 10, if you “take in a greater variety of parties and interests,” then “you make it less probable that a majority of the whole will have a common motive for invading the rights of other citizens.” Without such a belief, after all, there would be no reason not to trust, say, a ruling caste to make decisions for society—or even a single, perhaps orange-toned, individual. Without some concept of the Law of Large Numbers—some belief that increasing the number of trials, or increasing the number of inputs, will make for better results—there is no reason for democratic government at all.

That’s why, when people criticize the Electoral College, they are implicitly invoking the Law of Large Numbers. The Electoral College divides the pool of American voters into fifty smaller pools, whereas a national popular vote would collect all Americans into a single lump—a point that some defenders of the College seek to make into a virtue, instead of the vice it is. In the wake of the 2000 election, for example, Senator Mitch McConnell wrote that the “Electoral College served to center the post-election battles in Florida,” preventing the “vote recounts and court battles in nearly every state of the Union” that, McConnell assures us, would have occurred in the college’s absence. But as Timothy Noah pointed out in The New Republic in 2012, what McConnell’s argument “fails to realize is that when you’re assembling one big count rather than a lot of little ones it’s a lot less clear what’s to be gained from rigging any of the little ones.” If what matters is the popular vote, what happens in any one location doesn’t matter so much; hence, stealing votes in downstate Illinois won’t allow you to steal the entire state—just as, once enough samples are taken or enough experiments are run, the fact that the lab assistant was drowsy when she recorded one set of results won’t matter so much. It’s also why deliberately losing a single game in July hardly matters as much as tanking a game of the World Series.

Put in such a way, it’s hard to see how anyone without a vested stake in the construction of the present system could defend the Electoral College—yet, as I suspect we are about to see, the very people now ascribing Donald Trump’s victory to the racism of the American voter will soon be doing just that. The reason will be precisely the same reason that such advocates want to blame racism, rather than the ongoing thievery of economic elites, for the rejection of Clinton: because racism is a “cultural” phenomenon, and most left-wing critics of the United States now obtain credentials in “cultural,” rather than scientific, disciplines.

If, in other words, Donald Trump’s victory was due to a complex series of renegotiations of the global contract between capital and labor, then explaining it would require experts in economics and similar disciplines; if his victory was due to racism, however—racism being considered a cultural phenomenon—then it will call forth experts in “cultural” fields. Because those with “liberal” or “leftist” political leanings now tend to gather in “cultural” fields, those with such leanings will (indeed, must) now attempt to shift the battleground toward their areas of expertise. That shift, I would wager, will in turn set those who argue for “cultural” explanations of the rise of Trump against arguments for the elimination of the Electoral College.

The reason is not difficult to understand: it isn’t too much to say, in fact, that one way to define the study of the humanities is to say it comprises the disciplines that largely ignore, or even oppose, the Law of Large Numbers both as a practical matter and as a philosophic one. As literary scholar Franco Moretti, now of Stanford, observed in his Atlas of the European Novel, 1800-1900, just as “silver fork novels”—a genre published in England between the 1820s and the 1840s—do not “show ‘London,’ but only a small, monochrome portion of it,” so too does the average student of literature not really study her ostensible subject matter. “I work on west European narrative between 1790 and 1930, and already feel like a charlatan outside of Britain and France,” Moretti confesses in an essay entitled “Distant Reading”—and even then, he only works “on its canonical fraction, which is not even 1 percent of published literature.” As Joshua Rothman put the point in a New Yorker profile of Moretti a few years ago, Moretti instead insists that “if you really want to understand literature, you can’t just read a few books or poems over and over,” but instead “you have to work with hundreds or even thousands of texts at a time”—that is, he insists on the significance of the Law of Large Numbers in his field, an insistence whose very novelty demonstrates how literary study is a field that has historically resisted precisely that recognition.

In order to proceed, in other words, disciplines like literary study or art history—or even history itself—must argue for the representativeness of a given body of work: usually termed, at least in literary study, “the Canon.” Such disciplines are already, simply by their very nature, committed to the idea that it is not necessary to read all of what Moretti says are the “thirty thousand nineteenth-century British novels out there” in order to arrive at conclusions about the nineteenth-century British novel: in the first place, “no one really knows” how many there really are (there could easily be twice as many), and in the second, “no one has read them [all], [and] no one ever will.” In order to get off the ground, such disciplines must necessarily deny the Law of Large Numbers: as Moretti says, “you invest so much in individual texts only if you think that very few of them really matter”—a belief with an obvious political corollary. Rejection of the Law of Large Numbers is thus, as Moretti also observes, “an unconscious and invisible premiss” for most who study such fields—which is to say that although students of the humanities often make claims for the political utility of their work, they sometimes forget that the enabling presuppositions of their fields are inherently those of the pre-Enlightenment ancien régime.

Perhaps that’s why—as Joe Pinsker observed in a fascinating, but short, article for The Atlantic several years ago—studies of college students find that those “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” while students “whose parents make more money flock to history, English, and the performing arts”: the baseline assumptions of those disciplines are, no matter the particular predilections of a given instructor, essentially aristocratic, not democratic. To put it most baldly, the disciplines of the humanities must reject the premise of the Law of Large Numbers, which says that as more examples are added, the closer we approach the truth—a point that can be directly witnessed when, for instance, English professor Michael Bérubé of Pennsylvania State University observes that the “humanists at [his] end of the [academic] hallway roundly dismissed” Harvard biologist E.O. Wilson’s book Consilience: The Unity of Knowledge for arguing that “all human knowledge can and eventually will be unified under the rubric of the natural sciences.” Rejecting the Law of Large Numbers is foundational to the very operation of the humanities: without making that rejection, they cannot exist.

In recent decades, of course, Franco Moretti has presumably not been the only professor of the humanities to realize that his discipline stood on a collision course with the Law of Large Numbers—a realization that may perhaps explain why disciplines like literature have, for years, been actively recruiting among members of minority groups. The institutional motivations of such hiring, in other words, ought to be readily apparent: by making such hires, departments of the humanities could insulate themselves from charges from the political left—while at the same time continuing the practices that, without such cover, might have appeared increasingly anachronistic in a democratic age. Minority hiring, that is, may not be so politically “progressive” as its defenders sometimes argue: it may, in fact, have prevented the intellectual reforms within the humanities urged by people like Franco Moretti for a generation or more. Of course, by joining such departments, members of minority groups also may have, consciously or not, tied their own fortunes to a philosophic rejection of concepts like the Law of Large Numbers—as African-American sportswriter Michael Wilbon, of ESPN fame, wrote this past May, black people supposedly have some kind of allergy to statistical analysis: “in ‘BlackWorld,’” Wilbon solemnly intoned, “never is heard an advanced analytical word.” I suspect, then, that many who claim to be on the political left will soon come out to defend the Electoral College. If that happens, then in one last cruel historical irony the final defenders of American slavery may end up being precisely those whom slavery was meant to oppress.

This Doubtful Strife

Let me be umpire in this doubtful strife.
—Henry VI, Part 1. Act IV, Scene 1.

 

“Mike Carey is out as CBS’s NFL rules analyst,” wrote Claire McNear recently for (former ESPN writer and Grantland founder) Bill Simmons’ new website, The Ringer, “and we are one step closer to having robot referees.” McNear is referring to Carey and CBS’s “mutual agreement” to part last week: the former NFL referee, with 24 years of on-field experience, was not able to translate those years into an ability to convey rules decisions to CBS’s audience. McNear goes on to argue that Carey’s firing/resignation is simply another milestone on the path to computerized refereeing—a march that, she says, advanced again just days earlier, when the NBA released “Last Two Minute reports, which detail the officiating crew’s internal review of game calls.” About that release, it seems, the National Basketball Referees Association said it encourages “the idea that perfection in officiating is possible,” a standard that the association went on to say “is neither possible nor desirable” because “if every possible infraction were to be called, the game would be unwatchable.” It’s an argument that will sound familiar to many with experience in the humanities: at least since William Blake’s “dark satanic mills,” writers and artists have opposed the impact of science and technology—usually for reasons advertised as “political.” Yet, at least with regard to the recent history of the United States, that’s a pretty contestable proposition: it’s more than questionable, in other words, whether the humanities’ opposition to the sciences has had beneficial rather than pernicious effects. The work of the humanities, that is, by undermining the role of science, may not be helping to create the better society its proponents often say will result. Instead, the humanities may actually be helping to create a more unequal society.

That the humanities, that supposed bastion of “political correctness” and radical leftism, could in reality function as the chief support of the status quo might sound surprising at first, of course—according to any number of right-wing publications, departments of the humanities are strongholds of radicalism. But anyone who takes a real look around campus shouldn’t find it that confounding to think of the humanities as, in reality, something else: as Joe Pinsker reported for The Atlantic last year, data from the National Center for Education Statistics demonstrates that “the amount of money a college student’s parents make does correlate with what that person studies.” That is, while kids “from lower-income families tend toward ‘useful’ majors, such as computer science, math, and physics,” those “whose parents make more money flock to history, English, and the performing arts.” It’s a result that should not be that astonishing: as Pinsker observes, not only is it true that “the priciest, top-tier schools don’t offer Law Enforcement as a major,” but the point cuts across national boundaries; Pinsker also reports that Greg Clark of the University of California found recently that students with “rare, elite surnames” at Great Britain’s Cambridge University “were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Far from being the hotbeds of far-left thought they are often portrayed as, in other words, departments of the humanities are much more likely to house the most elite, most privileged student body on campus.

It’s in those terms that the success of many of the more fashionable doctrines on American college campuses over the past several decades might best be examined: although deconstruction and many more recent schools of thought have long been thought of as radical political movements, they could also be thought of as intellectual weapons designed in the first place—long before they are put to any wider use—to keep the sciences at bay. That might explain just why, far from being the potent tools for social justice they are often said to be, these anti-scientific doctrines often produce among their students—as philosopher Martha Nussbaum of the University of Chicago remarked some two decades ago—a “virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Instead of an engagement with the realities of American political life, in other words, many (if not all) students in the humanities prefer to practice politics by using “words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” In this way, “one need not engage with messy things such as legislatures and movements in order to act daringly.” Even better, it is only in this fashion, it is said, that the conceptual traps of the past can be escaped.

One of the justifications for this entire practice, as it happens, was once laid out by the literary critic Stanley Fish. The story goes that Bill Klem, a legendary umpire, was once behind the plate plying his trade:

The pitcher winds up, throws the ball. The pitch comes. The batter doesn’t swing. Klem for an instant says nothing. The batter turns around and says “O.K., so what was it, a ball or a strike?” And Klem says, “Sonny, it ain’t nothing ’till I call it.”

The story, Fish says, is illustrative of the notion that “of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc.” It’s by these means, in other words, that academic pursuits like “cultural studies” and the like have come into being: means by which sociologists of science, for example, show how the productions of science may be the result not merely of objects in the world, but also the predilections of scientists to look in one direction and not another. Cancer or the planet Saturn, in other words, are not merely objects, but also exist—perhaps chiefly—by their place within the languages with which people describe them: an argument that has the great advantage of preserving the humanities against the tide of the sciences.

But, isn’t that for the best? Aren’t the humanities preserving an aspect of ourselves incapable of being captured by the net of the sciences? Or, as the union of professional basketball referees put it in their statement, don’t they protect, at the very least, that which “would cease to exist as a form of entertainment in this country” by their ministrations? Perhaps. Yet, as ought to be apparent, if the critics of science can demonstrate that scientists have their blind spots, then so too do the humanists—for one thing, an education devoted entirely to reading leaves out a rather simple lesson in economics.

Correlation is not causation, of course, but it is true that as the theories of academic humanists became politically wilder, the gulf between haves and have-nots in America became greater. As Nobel Prize-winning economist Joseph Stiglitz observed a few years ago, “inequality in America has been widening for decades”; to take one of Stiglitz’s examples, “the six heirs to the Walmart empire”—an empire that only began in the early 1960s—now “possess a combined wealth of some $90 billion, which is equivalent to the wealth of the entire bottom 30 percent of U.S. society.” To put the facts another way—as Christopher Ingraham pointed out in the Washington Post last year—“the wealthiest 10 percent of U.S. households have captured a whopping 76 percent of all the wealth in America.” At the same time, as University of Illinois at Chicago literary critic Walter Benn Michaels has noted, “social mobility” in the United States is now “lower than in both France and Germany”—so much so, in fact, that “[a]nyone born poor in Chicago has a better chance of achieving the American Dream by learning German and moving to Berlin.” (A point perhaps highlighted by the fact that Germany has made its universities free to any who wish to attend them.) In any case, it’s a development made all the more infuriating by the fact that diagnosing the harm of it involves merely the most remedial forms of mathematics.

“When too much money is concentrated at the top of society,” Stiglitz continued not long ago, “spending by the average American is necessarily reduced.” Although—in the sense that it is a creation of human society—what Stiglitz is referring to is “socially constructed,” it is also simply a fact of nature that would exist whether the economy in question involved Aztecs or ants. Whatever the underlying substrate, it is simply the case that those at the top of a pyramid will spend a smaller share of what they take in than those near the bottom. “Consider someone like Mitt Romney”—Stiglitz asks—“whose income in 2010 was $21.7 million.” Even were Romney to become more flamboyant than Donald Trump, “he would spend only a fraction of that sum in a typical year to support himself and his wife in their several homes.” “But,” Stiglitz continues, “take the same amount of money and divide it among 500 people—say, in the form of jobs paying $43,400 apiece—and you’ll find that almost all of the money gets spent.” In other words, dividing the money more equally generates more economic activity—and hence the more equal society is also the more prosperous society.
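
The arithmetic in Stiglitz’s example is simple enough to check in a few lines. In the sketch below, the income figure and the 500-job division come from the passage above; the two spending shares are illustrative assumptions of mine, not Stiglitz’s numbers.

# Income figure and the 500-job split come from the passage above;
# the two spending shares are assumed purely for illustration.
income = 21_700_000                  # Romney's 2010 income
jobs = 500
salary = income / jobs               # 43,400 dollars apiece, as Stiglitz says

share_spent_by_rich = 0.10           # assumption: a very wealthy household spends a small fraction
share_spent_by_worker = 0.95         # assumption: a $43,400 earner spends nearly all of it

print(salary)                                   # 43400.0
print(income * share_spent_by_rich)             # spending when the money stays at the top
print(salary * jobs * share_spent_by_worker)    # spending when it is divided into 500 jobs

Under any such assumptions the divided money produces several times the spending, which is all Stiglitz’s point requires.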

Still, to understand Stiglitz’s point requires following a sequence of connected ideas—among them a basic understanding of mathematics, a form of thinking that does not care who thinks it. In that sense, then, the humanities’ opposition to scientific, mathematical thought takes on a rather different cast than the one it is usually given. By training their students to ignore the evidence—and, more significantly, the manner of argument—of mathematics and the sciences, the humanities are raising up a generation (or several) to ignore the evidence of impoverishment that is all around us here in 21st-century America. Even worse, they fail to give students a means of combating that impoverishment: an education without an understanding of mathematics cannot cope with, for instance, the difference between $10,000 and $10 billion—and why that difference might have a greater significance than simply being “unfair.” Hence, to ignore the failures of today’s humanities is also to ignore just how close the United States is … to striking out.