The Commanding Heights

The enemy increaseth every day; 
We, at the height, are ready to decline.
Julius Caesar. Act IV, Scene 3.

 

“It’s Toasted”: the two words that began the television series Mad Men. The show’s protagonist, Don Draper, comes up with them in a flash of inspiration during a meeting with the head of his advertising firm’s chief client, the cigarette brand Lucky Strike: like every cigarette company, Luckies need a new campaign in the wake of a warning from the Surgeon General regarding the health risks of smoking. Don’s solution is elegant: by simply describing the manufacturing process of making Luckies—a process essentially the same as that of every other cigarette—the brand does not have to make any claim about smokers’ health at all, and thus can bypass any consideration of scientific evidence. It’s a great way to introduce a show about the advertising business, as well as one of the great conflicts of that business: the opposition between reality, as represented by the Surgeon General’s report, and rhetoric, as represented by Draper’s inspirational flash. It’s also what makes Mad Men a work of historical fiction: in the first place, as documented by Thomas Frank’s The Conquest of Cool: Business Culture, Counterculture, and the Rise of Hip Consumerism, there really was, during the 1950s and 60s, a conflict in the advertising industry between those who trusted in a “scientific” approach to advertising and those who, in Frank’s words, “deplored conformity, distrusted routine, and encouraged resistance to established power.” But that conflict enveloped more than the advertising field: in those years many rebelled against a “scientism” they thought confining—a rebellion that in many ways is with us still. Yet, though that rebellion may have been liberating in some senses, it may also have had certain measurable costs to the United States. Among those costs, it seems, might be height.

Height, or a person’s stature, is of course something most people regard as akin to the color of the sky or the fact of gravity: a baseline feature of the world incapable of change. In the past, the fact that one person towers over others—or looks up to them in turn—might have been ascribed to God; today some might view height as the inescapable result of genetics. In one sense, this is true: as Burkhard Bilger says in the New Yorker story that inspired my writing here, the work of historians, demographers and dietitians has shown that with regard to height, “variations within a population are largely genetic.” But while height differences within a population are, in effect, a matter of genetic chance, that is not so when it comes to comparing different populations to each other.

“Height,” says Bilger, “is a kind of biological shorthand: a composite code for all the factors that make up a society’s well-being.” In other words, while you might be a certain height, and your neighbor down the street might be taller or shorter, both of you will tend to be taller or shorter than people from a different country—and the degree of shortness or tallness can be predicted by what sort of country you live in. That doesn’t mean that height is independent of genetics, to be sure: all human bodies are genetically fixed to grow at only three different stages in our lives—infancy, between the ages of six and eight, and as adolescents. But as Bilger notes, “take away any one of forty-five or fifty essential nutrients”—at any of these stages—“and the body stops growing.” (Like iodine, which can also have an effect on mental development.) What that means is that when large enough populations are examined, it can be seen whether a population as a whole is getting access to those nutrients—which in turn means it’s possible to get a sense of whether a given society is distributing resources widely … or not.

One story Bilger tells, about Guatemala’s two main ethnic groups, illustrates the point: one of them, the Ladinos, who claim descent from the Spanish colonizers of Central America, were of average height. But the other group, the Maya, who are descended from indigenous people, “were so short that some scholars called them the pygmies of Central America: the men averaged only five feet two, the women four feet eight.” Since the two groups shared the same (small) country, with essentially the same climate and natural resources, researchers initially assumed that the difference between them was genetic. But that assumption turned out to be false: when anthropologist Barry Bogin measured Mayans who had emigrated to the United States, he found that they were “about as tall as Guatemalan Ladinos.” The difference between the two ethnicities was not genetic: “The Ladinos,” Bilger writes, “who controlled the government, had systematically forced the Maya into poverty”—and poverty, because it can limit access to the nutrients essential during growth spurts, is systemically related to height.

It’s in that sense that height can literally be a measurement of the degree of freedom a given society enjoys: historically, Guatemala has been a hugely stratified country, with a small number of landowners presiding over a great number of peasants. (Throughout the twentieth century, in fact, the political class was engaged in a symbiotic relationship with the United Fruit Company, an American company that possessed large-scale banana plantations in the country—hence the term “banana republic.”) Short people are, for the most part, oppressed people; tall people, conversely, are mostly free people: it’s no accident that the Dutch, citizens of one of the freest countries in the world, are also the tallest.

Americans, at one time, were the tallest people in the world: in the eighteenth century, Bilger reports, Americans were “a full three inches taller than the average European.” Even as late as the First World War, he also says, “the average American soldier was still two inches taller than the average German.” Yet, a little more than a generation later, that relation began to change: “sometime around 1955 the situation began to reverse.” Since then all Europeans have been growing, as have Asians: today “even the Japanese—once the shortest industrialized people on earth—have nearly caught up with us, and Northern Europeans are three inches taller and rising.” Meanwhile, American men are “less than an inch taller than the average soldier during the Revolutionary War.” And that difference, it seems, is not due to the obvious source: immigration.

The people who work in this area are obviously aware that, because the United States is a nation of immigrants, immigration might skew the height data: clearly, if someone grows up in, say, Guatemala and then moves to the United States, that could conceivably warp the results. But the researchers Bilger consulted have considered the point: one only includes native-born, English-speaking Americans in his studies, for example, while another says that, because of the changes to immigration law during the twentieth century, the United States now takes in far too few immigrants to bias the figures. But if not immigration, then what?

For my own part, I find the coincidence of 1955 too much to ignore: it’s around the mid-1950s that Americans began to question a view of the sciences that had grown up a few generations previously. In 1898, for example, the American philosopher John Dewey could reject “the idea of a dualism between the cosmic and the ethical,” and suggest that “the spiritual life … [gets] its surest and most ample guarantees when it is learned that the laws and conditions of righteousness are implicated in the working processes of the universe.” Even as late as 1941, the intellectual magazine The New Republic could publish an obituary of the famed novelist James Joyce—author of what many people feel is the finest novel in the history of the English language, Ulysses—that proclaimed Joyce “the great research scientist of letters, handling words with the same freedom and originality that Einstein handles mathematical symbols.” “Literature as pure art,” the magazine then said, “approaches the nature of pure science”—suggesting, as Dewey had, that reality and its study did not need to be opposed to some other force, whether religion and morality or art and beauty. But just a few years later, elite opinion began to change.

In 1949, for instance, the novelist James Baldwin would insist, against the idea of The New Republic’s obituary, that “literature and sociology are not the same,” while a few years later, in 1958, the philosopher and political scientist Leo Strauss would urge that the “indispensable condition of ‘scientific’ analysis is then moral obtuseness”—an obtuseness that, Strauss would go on to say, “is not identical with depravity, but […] is bound to strengthen the forces of depravity.” “By the middle of the 1950s,” as Thomas Frank says, “talk of conformity, of consumerism, and of the banality of mass-produced culture were routine elements of middle-class American life”—so that “the failings of capitalism were not so much exploitation and deprivation as they were materialism, wastefulness, and soul-deadening conformity”: a sense that Frank argues provided fuel for the cultural fires of the 1960s that were to come, and that the television show Mad Men documents. In other words, during the 1950s and afterwards Americans abandoned a scientific outlook—and since then Americans have also grown shorter, at least relative to the rest of the world. Correlation, as any scientist will tell you, does not imply causation, but it does imply that Lucky Strike might not be unique any more—though as any ad man would tell you, “America: It’s Toast!” is not a winning slogan.


Miracles Alone

They say miracles are past; and we have our philosophical persons,
to make modern and familiar, things supernatural and causeless.
All’s Well That Ends Well, Act II, Scene 3

“If academic writing is to become expansive again,” wrote Joshua Rothman in The New Yorker a year ago, in one of the more Marxist sentences to appear in a mainstream publication lately, “academia will probably have to expand first.” What Rothman was referring to was the minor controversy set off by a piece by Nicholas Kristof in the New York Times entitled “Professors, We Need You!”—a rant attacking the “unintelligibility” of contemporary academic writing, blah blah blah. Rothman’s take on the business—as a former graduate student himself—is that the increasing obscurity of the superstructure of academic writing is the result of an ever-smaller base: “the audience for academic work has been shrinking,” he says, and so building “a successful academic career” requires “serially impress[ing] very small groups of people,” like journal editors, hiring committees, etc. So, to Rothman, turning academic writing around would mean an expanding university system: that is, one in which it wasn’t terribly difficult to get a job. To put it another way: in order to make academics visible to the people, it would probably help to allow the people to become academics.

To very many current academics, however, that’s precisely what is off the table, because their work involves questioning the very assumption that powers Rothman’s proposal: that writing for large numbers of people requires writing that does not demand an enormous amount of training in order to be read. A lot of academics in today’s humanities departments would “historicize” that assumption by saying that it only came into being with the Protestant Reformation at the beginning of the modern era, which held that the Bible could be read, and understood, by anyone—not just a carefully chosen set of acolytes capable of translating the holy mysteries to the laity, as in Roman Catholic practice. Academics of this sort might then make reference, as Benedict Anderson did in his Imagined Communities, to “print capitalism”—to how the growth of newspapers and other printed materials demonstrated that writing untethered from a clerical caste could generate huge profits. And so on.

The defenses of obscure and difficult writing offered by such academics as Judith Butler, however, do not always take that turn: very often, difficult writing is defended on the grounds that such esoteric efforts “can help point the way to a more socially just world,” because “language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.” That, one supposes, might be true—and it’s certainly true that what’s known as the “cultural left” has, as the philosopher Richard Rorty once remarked, made all of us more sensitive to the peculiar ways in which language can influence how people perceive other people. But it’s also true that such thinking fails to follow through on the full meaning of standing against intelligibility.

Most obviously, though this point is often obscured, it means standing against the doctrine known as “naturalism,” which the Stanford Encyclopedia of Philosophy defines as “asserting that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity.” At least since Mark Twain adapted naturalism to literature by saying that “the personages of a tale shall confine themselves to possibilities and let miracles alone,” a baseline belief in naturalism has been what created the kind of widely literate public Kristof’s piece requires. Mysteries, that is, can only be understood by someone initiated into them: hence, to proceed without initiates requires outlawing mystery.

As should be obvious but apparently isn’t, it’s only absent a belief in mystery that anyone could, in Richard Rorty’s words, “think of American citizenship as an opportunity for action”—rather than being possessed, as Rorty laments so much of this so-called “cultural left” has become, by the “spirit of detached spectatorship.” Difficult writing, in other words, might be able to do something for small groups, but it cannot, by definition, help larger ones—which is to say that it is probably no accident that Judith Butler should have left just what she meant by “socially just” undefined, because by the logic of her argument it almost certainly does not include the vast majority of America’s, or the world’s, people.

“In the early decades of” the twentieth century, Richard Rorty once wrote, “when an intellectual stepped back from his or her country’s history and looked at it through skeptical eyes, the chances were that he or she was about to propose a new political initiative.” That tradition is, it seems, nearly lost: today’s “academic Left,” Rorty wrote then, “has no projects to propose to America, no vision of a country to be achieved by building a consensus on the need for specific reforms.” For Rorty, however, that seems blamable on the intellectuals themselves—a kind of “blaming the victim,” or trahison des clercs, that is itself a betrayal of the insights of naturalism: according to those notions, it’s no more possible that large numbers of smart people should have inexplicably given up on their political efforts completely than that a flaming shrubbery could talk.

It’s that possibility that the British literary critic Terry Eagleton appears to have considered when, in his The Illusions of Postmodernism, he suggests that the gesture of denying that “there is any significant distinction between discourse and reality”—a denial specifically aimed at naturalism’s attempt to rule out the mysterious—may owe more to “the deadlocked political situation of a highly specific corner of the globe” than it does to the failures of the intellectuals. What I presume Eagleton is talking about is what Eric Alterman, writing in The Atlantic, called “the conundrum of a system that, as currently constructed, gives the minority party no strategic stake in sensible governance.” Very many of the features of today’s American government, that is, are designed not to produce good government, but rather to enable a minority to obstruct the doings of the majority—the famous “checks and balances.”

While American civic discourse often celebrates those supposed features, as I’ve written before, the work of historians like Manisha Sinha and Leonard Richards shows that they are due not to the foresight of the Founding Fathers but to the need to protect the richest minority of the then-newborn republic: the slaveowners. It isn’t any accident that, as Alterman says, it “has become easier and easier for a determined minority to throw sand in the gears of the legislative process”: the very structure of the Senate, for example, allows “the forty Republican senators … [who] represent barely a third of the US population” to block any legislation, even excluding the more obscure senatorial tools, like the filibuster and the hold. These devices, as the work of historians shows, were originally developed in order to protect slavery; as Lawrence Goldstone put the point in the New Republic recently, during the Constitutional Convention of 1787, “slaveholders won a series of concessions,” among them “the makeup of the Senate” and the method of electing a president. These hangovers linger on, defending interests perhaps less obviously evil than the owners of slaves, but interests by and large not identical with those of the average citizen: today, those features are all check and no balance.

Such an explanation, I think, is more likely than Rorty’s stance of casting blame on people like Judith Butler, as odious as her beliefs really are. It might better explain how, for instance, as the writer Seymour Krim described in his essay “The American Novel Made Me,” intellectuals began “in the mid 50s [1950s] to regard the novel as a used-up medium,” so that the “same apocalyptic sense of possibility that we once felt in the U.S. novel now went into its examination”: what Krim calls “the game” of “literary criticism.” In that game, what matters isn’t the description of reality itself, but rather the methods of description by which “reality” is recorded: in line with Rorty’s idea of the intellectual turn against reality, not so much the photograph as the inner workings of the camera. Yet while that pursuit might appear to some ridiculous and even objectively harmful, blaming people, even smart people, for having become involved in such efforts because you have blocked their real path to advancement is like blaming butter for melting in the sun.

What all of this may show, in other words, is that for academic writing to become expansive again, as Joshua Rothman wishes, it may require far more than just an expansion of academia, though that is almost certainly part of it. What it will also require is a new band of writers and politicians, recommitted to the tenets of naturalism and determined, as Krim said about “the American realistic novel of the mid to late 1930s,” to be “‘truthful’ in recreating American life.” To Kristof or Rothman, that’s a task unlikely even to be undertaken in our lifetimes, much less accomplished. Yet it ought to be acknowledged that Kristof and Rothman’s own efforts imply that a hunger exists that may not know its name—that a wanderer is abroad, holding aloft a lantern flickering not because of a rising darkness but because of an onrushing dawn.

 

The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of a sport at all, it is either one that is extremely obscure, obscenely technical, and does not involve a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has just such a picture of such a person in mind—probably you know at least a few, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both of those men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by the American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents), the “Pythagorean expectation.” Yet the work on these matters has, mainly, been undertaken as a purely amateur endeavor—Bill James spent decades putting out his baseball work without any remuneration, until finally being hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by Charles Lyell, the Englishman whose Principles of Geology largely founded that field. Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things are likely to be like those already around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which was said to signify the player’s own confidence in his ability.) Sabermetricians disregard such “methods” of analysis in favor of examination of the acts performed by the player as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or for that matter, why would it lead in a political direction at all? The answer to the latter question is, I suspect, inevitable: sabermetrics, after all, is a discipline well-suited for the purpose of discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems inevitable then that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations towards some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led towards positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective, certain key features of the American system will nearly instantly jump out.

The first of those features will be that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—the ratio of a team’s runs scored and runs allowed versus the team’s actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would suggest, then that suggests that that team is either tremendously unlucky or there is some hidden factor preventing success. Maybe, for instance, that team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters the team has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
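
For concreteness, here is a minimal sketch of that calculation in Python, using the classic James formula with an exponent of 2 (later refinements use an exponent closer to 1.83) and invented numbers rather than any real team’s season:

```python
def pythagorean_expectation(runs_scored, runs_allowed, exponent=2):
    """Bill James's Pythagorean expectation: an estimate of winning
    percentage from runs scored and runs allowed."""
    return runs_scored ** exponent / (
        runs_scored ** exponent + runs_allowed ** exponent
    )

# Invented numbers, purely for illustration.
runs_scored, runs_allowed = 750, 680
games_played, games_won = 162, 74

expected = pythagorean_expectation(runs_scored, runs_allowed)
actual = games_won / games_played
print(f"expected win%: {expected:.3f}  actual win%: {actual:.3f}")

# A wide gap between the two numbers is the sabermetrician's cue that
# something fluky, or hidden, is going on.
if abs(expected - actual) > 0.050:
    print("large disparity: investigate further")
```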

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, that would set off alerts for a sabermetrician, no such alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many of the features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren said was “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss if it were to ignore a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored, but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of each state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.

Some will defend that disconnect, as Justice Harlan did over fifty years ago, in terms familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet, attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as the Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that if you see an American revolutionary around you right now, he or she is likely wearing, not a poncho or a black turtleneck, but an Oakland A’s hat.

Joe Maddon and the Fateful Lightning 

All things are an interchange for fire, and fire for all things,
just like goods for gold and gold for goods.
—Heraclitus


Last month, one of the big stories about presidential candidate and Wisconsin governor Scott Walker was his plan not only to cut the state’s education budget, but also to change state law in order to allow, according to The New Republic, “tenured faculty to be laid off at the discretion of the chancellors and Board of Regents.” Given that Wisconsin was the scene of the Ely case of 1894—which ended with the board of trustees of the University of Wisconsin issuing the ringing declaration: “Whatever may be the limitations which trammel inquiry elsewhere we believe the great state University of Wisconsin should ever encourage that continual and fearless sifting and winnowing by which alone truth can be found”—Walker’s attempt is a threat to the entire system of tenure. Yet it may be that American academia in general, if not Wisconsin academics in particular, is not entirely blameless—not because, as American academics might smugly like to think, they are so totally radical, dude, but on the contrary because they have not been radical enough: to the point that, as I will show, probably the most dangerous, subversive and radical thinker on the North American continent at present is not an academic, nor even a writer, at all. His name is Joe Maddon, and he is the manager of the Chicago Cubs.

First though, what is Scott Walker attempting to do, and why is it a big deal? Specifically, Walker wants to change Section 39 of the relevant Wisconsin statute so that Wisconsin’s Board of Regents could, “with appropriate notice, terminate any faculty or academic staff appointment when such an action is deemed necessary … instead of when a financial emergency exists as under current law.” In other words, Walker’s proposal would more or less allow Wisconsin’s Board of Regents to fire anyone virtually at will, which is why the American Association of University Professors “has already declared that the proposed law would represent the loss of a viable tenure system,” as reported by TNR.

The rationale given for the change is the usual one of allowing for more “flexibility” on the part of campus leaders: by doing so, supposedly, Wisconsin’s university system can better react to the fast-paced changes of the global economy … feel free to insert your own clichés of corporate speak here. The seriousness with which Walker takes the university’s mission as a searcher for truth might perhaps be discerned by the fact that he appointed the son of his campaign chairman to the Board of Regents—nepotism apparently being, in Walker’s view, a sure sign of intellectual probity.

The tenure system was established, of course, exactly to prevent political appointee yahoos from having anything to say about the production of truth—a principle that, one might think, ought to be sacrosanct, especially in the United States, where every American essentially exists right now, today, on the back of intellectual production usually conducted in a university lab. (For starters, it was the University of Chicago that gave us what conservatives seem to like to think of as the holy shield of the atomic bomb.) But it’s difficult to blame “conservatives” for doing what’s in, as the scorpion said to the frog, their nature: what’s more significant is that academics ever allowed this to happen in the first place—and while it is surely the case that all victims everywhere wish to hold themselves entirely blameless for whatever happens to them, it’s also true that no one is surprised when a car driving the wrong way gets hit.

A clue toward how American academia has been driving the wrong way can be found in a New Yorker story from last October, in which Maria Konnikova described a talk the moral psychologist Jonathan Haidt gave to the Society for Personality and Social Psychology. The thesis of the talk? That psychology, as a field, had “a lack of political diversity that was every bit as dangerous as a lack of, say, racial or religious or gender diversity.” In other words, the whole field was inhabited by people who were at least liberal on the ideological spectrum—many of them radicals—and contained very few conservatives.

To Haidt, this was a problem because it “introduced bias into research questions [and] methodology,” particularly concerning “politicized notions, like race, gender, stereotyping, and power and inequality.” Yet a follow-up study surveying 800 social psychologists found something interesting: these psychologists were markedly left-of-center compared to the general population only when it came to something called “the social-issues scale.” Whereas in economic matters or foreign affairs these professors tilted left at about a sixty to seventy percent clip, when it came to what are sometimes called “culture war” issues the tilt was in the ninety percent range. It’s the gap between those measures, I think, that Scott Walker is able to exploit.

In other words, while it ought to be borne in mind that this is merely one study of a narrow range of professors, the study doesn’t disprove Professor Walter Benn Michaels’ generalized assertion that American academia has largely become the “human resources department of the right”: that is, the figures seem to say that, sure, economic inequality sorta bothers some of these smart guys and gals—but really to wind them up you’d best start talking about racism or abortion, buster. And what that might mean is that the rise of so-called “tenured radicals” since the 1960s hasn’t really been the fearsome beast the conservative press likes to make it out to be: in fact, it might be that—like some predator/prey model from the study of ecology—the more left the professoriate turns, the more conservative the nation becomes.

That’s why it’s Joe Maddon of the Chicago Cubs, rather than any American academic, who is the most radical man in America right now. Why? Because Joe Maddon is doing something interesting in these days of American indifference to reality: he is paying attention to what the world is telling him, and doing something about it in a manner that many, if not most, academics could profit by examining.

What Joe Maddon is doing is batting the pitcher eighth.

That might, obviously, sound like small beer when the most transgressive of American academics are plumbing the atomic secrets of the universe, or questioning the existence of the biological sexes, or investigating any of the other surely fascinating topics currently on the American academy’s agenda. In fact, however, there is at present no more important philosophical topic of debate anywhere in America, from the literary salons of New York City to the programming pits of Northern California, than the one that has been ongoing throughout this mildest of summers on the North Side of the city of Chicago.

Batting the pitcher eighth is a strategy that has been tried before in the history of American baseball: in 861 games since 1914. But twenty percent of those games, reports Grantland, “have come in 2015,” this season, and of those games 112 and counting have been played by the Chicago Cubs—because in every single game the Cubs have played this year, the pitcher has batted in the eighth spot. That’s something that no major league baseball team has ever done—and the reasoning behind Joe Maddon’s tossing aside of baseball orthodoxy like so many spit cups of tobacco juice is the reason why, eggheads and corporate lackeys aside, Joe Maddon is at present the most screamingly dangerous man in America.

Joe Maddon is dangerous because he saw something in a peculiarity of the rules of baseball, something most fans are so inured to that they have become unconscious of its meaning. That peculiarity is this: baseball has history. It’s a phrase that might sound vague and sentimental, but that’s not the point at all: what it refers to is that, with every new inning, a baseball lineup does not begin again at the beginning, but instead picks up with the next player after the last batter of the previous inning. This matters because pitchers traditionally bat in the ninth spot of a lineup: they are usually the weakest batters on any team by a wide margin, and by batting them last a manager usually ensures that they do not come to the plate until at least the second, or even third, inning. Batting the pitcher ninth enables a manager to hide his weaknesses and emphasize his strengths.

That has been orthodox doctrine since the beginnings of the sport: the tradition is so strong that when Babe Ruth, who first played in the major leagues as a pitcher, came to Boston he initially batted in the ninth spot. But what Maddon saw was that while the orthodox theory does minimize the numbers of plate appearances on the part of the pitcher, that does not in itself necessarily maximize the overall efficiency of the offense—because, as Russell Carleton put it for FoxSports, “in baseball, a lot of scoring depends on stringing a couple of hits together consecutively before the out clock runs out.” In other words, while batting the pitcher ninth does hide that weakness as much as possible, that strategy also involves giving up an opportunity: in the words of Ben Lindbergh of Grantland, by “hitting a position player in the 9-hole as a sort of second leadoff man,” a manager could “increase the chances of his best hitter(s) batting with as many runners on base as possible.” Because baseball lineups do not start at the beginning with every new inning, batting the weakest hitter last means that a lineup’s best players—usually the one through three spots—do not have as many runners on base as they might otherwise.
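
To make the wrap-around effect concrete, here is a toy Monte Carlo sketch in Python. All of the on-base numbers are invented, and the run-scoring model is deliberately crude (every plate appearance is either an out or a single); it is not a claim about what any actual lineup would do, only a way of seeing how the carry-over of the batting order from inning to inning is what makes the choice of where to bat the pitcher matter at all.

```python
import random

# Invented on-base probabilities for the eight position players;
# the pitcher is assumed to be a far weaker hitter.
HITTER_OBP = [0.36, 0.35, 0.38, 0.37, 0.34, 0.33, 0.32, 0.31]
PITCHER_OBP = 0.15

def make_lineup(pitcher_slot):
    """Nine on-base probabilities with the pitcher inserted at
    pitcher_slot (0-indexed: 8 = batting ninth, 7 = batting eighth)."""
    lineup = HITTER_OBP[:]
    lineup.insert(pitcher_slot, PITCHER_OBP)
    return lineup

def simulate_game(lineup, innings=9):
    """Crude model: each plate appearance is an out or a single; on a
    single every runner advances one base and a runner on third scores.
    Crucially, the batter index carries over between innings."""
    runs, batter = 0, 0
    for _ in range(innings):
        outs, bases = 0, [False, False, False]   # first, second, third
        while outs < 3:
            if random.random() < lineup[batter]:
                if bases[2]:
                    runs += 1
                bases = [True, bases[0], bases[1]]
            else:
                outs += 1
            batter = (batter + 1) % 9
    return runs

def average_runs(pitcher_slot, games=100_000):
    lineup = make_lineup(pitcher_slot)
    return sum(simulate_game(lineup) for _ in range(games)) / games

print("pitcher ninth: ", round(average_runs(8), 3))
print("pitcher eighth:", round(average_runs(7), 3))
```

The thing to watch is how small, and how sensitive to the assumed probabilities, the gap between the two arrangements turns out to be; that is roughly the point of the studies discussed below.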

Now, the value of this move of putting the pitcher eighth is debated by baseball statisticians: “Study after study,” says Ben Lindbergh of Grantland, “has shown that the tactic offers at best an infinitesimal edge: two or three runs per season in the right lineup, or none in the wrong one.” In other words, Maddon may very well be chasing a will-o’-the-wisp, a perhaps-illusory advantage: as Lindbergh says, “it almost certainly isn’t going to make or break the season.” Yet, in an age in which runs are much scarcer than they were in the juiced-up steroid era of the 1990s, and in which the best teams in the National League (the American League, which uses a designated hitter to bat in place of the pitcher, is immune to the problem) are separated in the standings by only a few games, a couple of runs over the course of a season may be exactly what allows one team to make the playoffs and, conversely, prevents another from doing the same: “when there’s so little daylight separating the top teams in the standings,” as Lindbergh also remarked, “it’s more likely that a few runs—which, once in a while, will add an extra win—could actually account for the difference between making and missing the playoffs.” Joe Maddon, in other words, is attempting to squeeze every last run he can from his players with every means at his disposal—even if it means taking on a doctrine that has been part of baseball nearly since its beginnings.

Yet, why should that matter at all, much less make Joe Maddon perhaps the greatest threat to the tranquility of the Republic since John Brown? The answer is that Joe Maddon is relentlessly focused on the central meaningful event of his business: the act of scoring. Joe Maddon’s job is to make sure that his team scores as many runs as possible, and he is willing to do what it takes in order to make that happen. The reason that he is so dangerous—and why the academics of America may just deserve the thrashing the Scott Walkers of the nation appear so willing to give them—is that American democracy is not so singlemindedly devoted to getting the maximum value out of its central meaningful event: the act of voting.

Like the baseball insiders who scoff at Joe Maddon for scuttling after a spare run or two over the course of 162 games—like the major league assistant general manager quoted by Lindbergh who dismissed the concept by saying “the benefit of batting the pitcher eighth is tiny if it exists at all”—American political insiders believe that a system that profligately disregards the value of votes doesn’t really matter over the course of a political season—or century. And it is indisputable that the American political system is profligate with the value of American votes. The value of a single elector in the Electoral College, for example, can differ by hundreds of thousands of votes cast each Election Day, depending on the state; while through “the device of geographic—rather than population-based—representation in the Senate, [the system] substantially dilutes the voice and voting power of the majority of Americans who live in urban and metropolitan areas in favor of those living in rural areas,” as one Princeton political scientist has put the point. Or to put it more directly, as Dylan Matthews did for the Washington Post two years ago: if “senators representing 17.82 percent of the population agree, they can get a majority”—while on the other hand “11.27 percent of the U.S. population,” as represented by the smallest 20 states, “can successfully filibuster legislation.” Perhaps most significantly, as Frances Lee and Bruce Oppenheimer have shown in their Sizing Up the Senate: The Unequal Consequences of Equal Representation, “less populous states consistently receive more federal funding than states with more people.” As presently constructed, in other words, the American political system is designed to waste votes, not to seek out their full potential value.
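
The arithmetic behind figures like Matthews’s is simple enough to sketch. The Python function below is a hypothetical helper, not anything from Matthews’s piece: it computes the smallest share of a total population whose senators could supply a voting bloc of a given size, demonstrated on an invented five-state federation rather than real census data.

```python
import math

def minimum_share_for_bloc(populations, senators_needed, senators_per_state=2):
    """Smallest share of the total population whose senators could, on
    their own, supply a bloc of at least `senators_needed` votes,
    assuming each state elects `senators_per_state` senators and every
    state in the bloc delivers all of them."""
    states_needed = math.ceil(senators_needed / senators_per_state)
    smallest_states = sorted(populations)[:states_needed]
    return sum(smallest_states) / sum(populations)

# Invented five-state federation (populations in millions), for
# illustration only; the same calculation run on real census figures
# for the fifty states lands in the neighborhood of the percentages
# Matthews reports.
toy_populations = [1, 2, 3, 10, 40]
share = minimum_share_for_bloc(toy_populations, senators_needed=6)  # majority of a 10-seat senate
print(f"smallest population share that can control a majority: {share:.1%}")
```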

American academia, however, does not discuss such matters. Indeed, the disciplines usually thought of as the most politically “radical”—usually those in the humanities—are more or less expressly designed to rule out the style of thought (naturalistic, realistic) taken on here: one reason, perhaps, for the split Maria Konnikova observed between psychology professors’ opinions on economic matters and their opinions on “cultural” ones. Yet just because an opinion is not registered in academia does not mean it does not exist: imbalances are inevitably corrected, and that will undoubtedly occur in this matter of the relative value of an American vote. The problem, of course, is that such “price corrections,” when it comes to issues like this, are not particularly known for being calm or smooth. Perhaps there is one possible upside, however: when that correction happens—and there is no doubt that the day of what the song calls “the fateful lightning” will arrive, be it tomorrow or in the coming generations—Joe Maddon may receive his due as not just a battler on the front lines of sport, but a warrior for justice. That, at least, might not be entirely surprising to his fellow Chicagoans—who remember that it was not the flamboyant tactics of busting up liquor stills that ultimately got Capone, but instead the slow and patient work of tax accountants and auditors.

You know, the people who counted.