Nunc Dimittis

Nunc dimittis servum tuum, Domine, secundum verbum tuum in pace:
Quia viderunt oculi mei salutare tuum
Quod parasti ante faciem omnium populorum:
Lumen ad revelationem gentium, et gloriam plebis tuae Israel.
—“The Canticle of Simeon.”
What appeared obvious was therefore rendered problematical and the question remains: why do most … species contain approximately equal numbers of males and females?
—Stephen Jay Gould. “Death Before Birth, or a Mite’s Nunc dimittis.”
    The Panda’s Thumb: More Reflections in Natural History. 1980.

Since last year the attention of most American liberals has been focused on the shenanigans of President Trump—but the Trump Show has hardly been the focus of the American right. Just a few days ago, John Nichols of The Nation observed that ALEC—the business-funded American Legislative Exchange Council that has functioned as a clearinghouse for conservative proposals for state laws—“is considering whether to adopt a new piece of ‘model legislation’ that proposes to do away with an elected Senate.” In other words, ALEC is thinking of throwing its weight behind the (heretofore) fringe idea of overturning the Seventeenth Amendment, and returning the right to elect U.S. Senators to state legislatures: the status quo before 1913. Yet why would Americans wish to return to a period widely known to be—as the most recent reputable academic history, Wendy Schiller and Charles Stewart’s Electing the Senate: Indirect Democracy Before the Seventeenth Amendment, puts it—“plagued by significant corruption to a point that undermined the very legitimacy of the election process and the U.S. Senators who were elected by it?” The answer, I suggest, might be found in a history of the German higher educational system prior to the year 1933.

“To what extent”—asked Fritz K. Ringer in 1969’s The Decline of the German Mandarins: The German Academic Community, 1890-1933—“were the German mandarins to blame for the terrible form of their own demise, for the catastrophe of National Socialism?” Such a question might sound ridiculous to American ears, to be sure: as Ezra Klein wrote in the inaugural issue of Vox, in 2014, there’s “a simple theory underlying much of American politics,” which is “that many of our most bitter political battles are mere misunderstandings” that can be solved with more information, or education. To blame German professors, then, for the triumph of the Nazi Party sounds paradoxical to such ears: it sounds like blaming an increase in rats on a radio station. From that view, then, the Nazis must have succeeded because the German people were too poorly-educated to be able to resist Hitler’s siren song.

As one appraisal of Ringer’s work in the decades since Decline has pointed out, however, the pioneering researcher went on to compare biographical dictionaries between Germany, France, England and the United States—and found “that 44 percent of German entries were academics, compared to 20 percent or less elsewhere”; another comparison of such dictionaries found that a much-higher percentage of Germans (82%) profiled in such books had exposure to university classes than those of other nations. Meanwhile, Ringer also found that “the real surprise” of delving into the records of “late nineteenth-century German secondary education” is that it “was really rather progressive for its time”: a higher percentage of Germans found their way to a high school education than did their peers in France or England during the same period. It wasn’t, in other words, for lack of education that Germany fell under the sway of the Nazis.

All that research, however, came after Decline, which dared to ask the question, “Did the work of German academics help the Nazis?” To be sure, there were a number of German academics, like philosopher Martin Heidegger and legal theorist Carl Schmitt, who not only joined the party, but actively cheered the Nazis on in public. (Heidegger’s connections to Hitler have been explored by Victor Farias and Emmanuel Faye; Schmitt has been called “the crown jurist of the Third Reich.”) But that question, as interesting as it is, is not Ringer’s; he isn’t interested in the culpability of academics who directly supported the Nazis, since by that standard the culpability of elevator repairmen could just as well be interrogated. Instead, what makes Ringer’s argument compelling is that he connects particular intellectual beliefs to a particular historical outcome.

While most examinations of intellectuals, in other words, bewail a general lack of sympathy and understanding on the part of the public regarding the significance of intellectual labor, Ringer’s book is refreshing insofar as it takes the opposite tack: instead of upbraiding the public for not paying attention to the intellectuals, it upbraids the intellectuals for not understanding just how much attention they were actually getting. The usual story about intellectual work, after all, is about just how terrible intellectuals have it—how many first novels, after all, are about young writers and their struggles? But Ringer’s research suggests, as mentioned, the opposite: an investigation of Germany prior to 1933 shows that intellectuals were more highly thought of there than virtually anywhere in the world. Indeed, for much of its history before the Holocaust Germany was thought of as a land of poets and thinkers, not the grim nation portrayed in World War II movies. In that sense, Ringer has documented just how good intellectuals can have it—and how dangerous that can be.

All of that said, what are the particular beliefs that, Ringer thinks, may have led to the installation of the Führer in 1933? The “characteristic mental habits and semantic preferences” Ringer documents in his book include such items as “the underlying vision of learning as an empathetic and unique interaction with venerated texts,” as well as a “consistent repudiation of instrumental or ‘utilitarian’ knowledge.” Such beliefs are, to be sure, seemingly required of the departments of what are now—but weren’t then—thought of, at least in the United States, as “the humanities”: without something like such foundational assumptions, subjects like philosophy or literature could not remain part of the curriculum. But, while perhaps necessary for intellectual projects to leave the ground, they may also have some costs—costs like, say, forgetting why the Seventeenth Amendment was passed.

That might sound surprising to some—after all, aren’t humanities departments hotbeds of leftism? Defenders of “the humanities”—like Geoffrey Galt Harpham, once Director of the National Endowment for the Humanities—sometimes go even further and make the claim—as Harpham did in his 2011 book, The Humanities and the Dream of America—that “the capacity to sympathize, empathize, or otherwise inhabit the experience of others … is clearly essential to democratic society,” and that this “kind of capacity … is developed by an education that includes the humanities.” Such views, however, make a nonsense of history: traditionally, after all, it’s been the sciences that have been “clearly essential to democratic society,” not “the humanities.” And, if anyone thinks about it closely, the very notion of democracy itself depends on an idea that, at base, is “scientific” in nature—and one that is opposed to the notion of “the humanities.”

That idea is called, in scientific circles, “the Law of Large Numbers”—a concept first written down formally three centuries ago by mathematician Jacob Bernoulli, but easily illustrated in the words of journalist Michael Lewis’ most recent book. “If you flipped a coin a thousand times,” Lewis writes in The Undoing Project, “you were more likely to end up with heads or tails roughly half the time than if you flipped it ten times.” Or as Bernoulli put it in 1713’s Ars Conjectandi, “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” It is a restatement of the commonsensical notion that the more times a result is repeated, the more trustworthy it is—an idea hugely applicable to human life.
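Lewis’ coin-flip illustration is easy to verify by simulation. The sketch below (a minimal Python illustration; the seed and batch sizes are arbitrary choices, and the printed numbers will vary with them) flips a fair coin in batches of 10 and of 1,000 and measures how far the observed proportion of heads tends to stray from one half:

```python
import random

def proportion_heads(n_flips, rng):
    """Flip a fair coin n_flips times; return the fraction that come up heads."""
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

# Repeat each experiment many times and measure how far the observed
# proportion of heads strays, on average, from the true probability of 0.5.
rng = random.Random(42)
for n in (10, 1000):
    deviations = [abs(proportion_heads(n, rng) - 0.5) for _ in range(2000)]
    print(n, round(sum(deviations) / len(deviations), 4))
```

The average deviation for the thousand-flip batches comes out roughly an order of magnitude smaller than for the ten-flip batches—which is Bernoulli’s point in miniature.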

For example, the Law of Large Numbers is why, as statistician Nate Silver recently put it, if “you want to predict a pitcher’s win-loss record, looking at the number of strikeouts he recorded and the number of walks he yielded is more informative than looking at his W’s and L’s from the previous season.” It’s why, when Vanguard founder John Bogle examined the stock market, he decided that, instead of trying to chase the latest-and-greatest stock, “people would be better off just investing their money in the entire stock market for a very cheap price”—and thereby invented the index fund. It’s why, Malcolm Gladwell has noted, the labor movement has always endorsed a national health care system: because they “believed that the safest and most efficient way to provide insurance against ill health or old age was to spread the costs and risks of benefits over the biggest and most diverse group possible.” It’s why casinos have limits on the amounts bettors can wager. In all these fields, as well as more “properly” scientific ones, it’s better to amass large quantities of results, rather than depend on small numbers of them.

What is voting, after all, but an act of sampling of the opinion of the voters, an act thereby necessarily engaged with the Law of Large Numbers? So, at least, thought the eighteenth-century mathematician and political theorist the Marquis de Condorcet—who called the result “the miracle of aggregation.” Summarizing a great deal of contemporary research, Sean Richey of Georgia State University has noted that Condorcet’s idea was that (as one of Richey’s sources puts the point) “[m]ajorities are more likely to select the ‘correct’ alternative than any single individual when there is uncertainty about which alternative is in fact the best.” Or, as Richey more concretely describes the process at work, the notion is that “if ten out of twelve jurors make random errors, they should split five and five, and the outcome will be decided by the two who vote correctly.” Just as, in sum, a “betting line” demarcates the boundary of opinion between gamblers, Condorcet provides the justification for voting: Condorcet’s theory was that “the law of large numbers shows that this as-if rational outcome will be almost certain in any large election if the errors are randomly distributed.” Condorcet, thereby, proposed elections as a machine for producing truth—and, arguably, democratic governments have demonstrated that fact ever since.

Key to the functioning of Condorcet’s machine, in turn, is large numbers of voters: the marquis’ whole idea, in fact, is that—as David Austen-Smith and Jeffrey S. Banks put the French mathematician’s point in 1996—“the probability that a majority votes for the better alternative … approaches 1 [100%] as n [the number of voters] goes to infinity.” In other words, the point is that the more voters, the more likely an election is to reach the correct decision. The Seventeenth Amendment is, then, just such a machine: its entire rationale is that the (extremely large) pool of voters of a state is more likely to reach a correct decision than an (extremely small) pool of voters consisting of the state legislature alone.
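Condorcet’s “miracle of aggregation” can likewise be sketched in a few lines. Assuming—purely for illustration—that each voter independently picks the correct alternative 60 percent of the time, a simulation shows the majority’s accuracy climbing toward certainty as the electorate grows, exactly as Austen-Smith and Banks state the theorem:

```python
import random

def majority_correct_rate(n_voters, p_correct, trials, rng):
    """Estimate how often a simple majority of independent voters, each
    correct with probability p_correct, picks the right alternative."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# A single voter is right 60% of the time; a thousand-voter majority,
# composed of the same fallible individuals, is right essentially always.
rng = random.Random(0)
for n in (1, 11, 101, 1001):
    print(n, majority_correct_rate(n, 0.6, 2000, rng))
```

The 60-percent figure is an assumption for the sketch; the theorem requires only that each voter be better than a coin flip and that errors be independent.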

Yet the very thought that anyone could even know what truth is, of course—much less build a machine for producing it—is anathema to people in humanities departments: as I’ve mentioned before, Bruce Robbins of Columbia University has reminded everyone that such departments were “founded on … the critique of Enlightenment rationality.” Such departments have, perhaps, been at the forefront of the gradual change in Americans from what the baseball writer Bill James has called “an honest, trusting people with a heavy streak of rationalism and an instinctive trust of science,” with the consequence that they had “an unhealthy faith in the validity of statistical evidence,” to adopting “the position that so long as something was stated as a statistic it was probably false and they were entitled to ignore it and believe whatever they wanted to [believe].” At any rate, any comparison of the “trusting” 1950s America described by James with what he thought of as the statistically-skeptical 1970s (and beyond) needs to reckon with the increasingly-large bulge of people educated in such departments: as a report by the Association of American Colleges and Universities has pointed out, “the percentage of college-age Americans holding degrees in the humanities has increased fairly steadily over the last half-century, from little over 1 percent in 1950 to about 2.5 percent today.” That might appear to be a fairly low percentage—but as Joe Pinsker’s headline writer put the point of Pinsker’s article in The Atlantic, “Rich Kids Major in English.” Or as a study cited by Pinsker in that article noted, “elite students were much more likely to study classics, English, and history, and much less likely to study computer science and economics.” Humanities students are a small percentage of graduates, in other words—but historically they have been (and given the increasingly-documented decreasing social mobility of American life, are increasingly likely to be) the people calling the shots later.

Or, as the infamous Northwestern University chant had it: “That’s alright, that’s okay—you’ll be working for us someday!” By building up humanities departments, the professoriate has perhaps performed useful labor by clearing the ideological ground for nothing less than the repeal of the Seventeenth Amendment—an amendment whose argumentative success, even today, depends upon an audience familiar not only with Condorcet’s specific proposals, but also with the mathematical ideas that underlay them. That would be no surprise, perhaps, to Fritz Ringer, who described how the German intellectual class of the late nineteenth century and early twentieth constructed “a defense of the freedom of learning and teaching, a defense which is primarily designed to combat the ruler’s meddling in favor of a narrowly useful education.” To them, the “spirit flourishes only in freedom … and its achievements, though not immediately felt, are actually the lifeblood of the nation.” Such an argument is reproduced by such “academic superstar” professors of humanities as Judith Butler, Maxine Elliot Professor in the Departments of Rhetoric and Comparative Literature at (where else?) the University of California, Berkeley, who has argued that the “contemporary tradition”—what?—“of critical theory in the academy … has shown how language plays an important role in shaping and altering our common or ‘natural’ understanding of social and political realities.”

Can’t put it better.


Forked

He had already heard that the Roman armies were hemmed in between the two passes at the Caudine Forks, and when his son’s courier asked for his advice he gave it as his opinion that the whole force ought to be at once allowed to depart uninjured. This advice was rejected and the courier was sent back to consult him again. He now advised that they should every one be put to death. On receiving these replies … his son’s first impression was that his father’s mental powers had become impaired through his physical weakness. … [But] he believed that by taking the course he first proposed, which he considered the best, he was establishing a durable peace and friendship with a most powerful people in treating them with such exceptional kindness; by adopting the second he was postponing war for many generations, for it would take that time for Rome to recover her strength painfully and slowly after the loss of two armies.
There was no third course.
—Titus Livius, Ab Urbe Condita, Book IX.

 

“Of course, we want both,” wrote Lee C. Bollinger, the president of Columbia University, in 2012, about whether “diversity in post-secondary schools should be focused on family income rather than racial diversity.” But while many might wish to do both, is that possible? Can the American higher educational system serve two masters? According to Walter Benn Michaels of the University of Illinois at Chicago, Bollinger’s thought that American universities can serve both economic goals and racial justice has been the thought of “every academic” with whom he’s ever discussed the subject—but Michaels, for his part, wonders just how sincere that wish really is. American academia, he says, has spent “twenty years of fighting like a cornered raccoon on behalf of the one and completely ignoring the other”; how much longer, he wonders, before “‘we want both’ sounds hollow not only to the people who hear it but to the people who say it?” Yet what Michaels doesn’t say is just why, as pious as that wish is, it’s a wish that is necessarily doomed to go unfulfilled—something that is possible to see after meeting a fictional bank teller named Linda.

“Linda”—the late 1970s creation of two Israeli psychologists, Amos Tversky and Daniel Kahneman—may be the most famous fictional woman in the history of the social sciences, but she began life as a single humble paragraph:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Following that paragraph, there were a series of eight statements describing Linda—but as the biologist Stephen Jay Gould would point out later, “five are a blind, and only three make up the true experiment.” The “true experiment” wouldn’t reveal anything about Linda—but it would reveal a lot about those who met her. “Linda,” in other words, is like Nietzsche’s abyss: she stares back into you.

The three pointed statements of Kahneman and Tversky’s experiment are these: “Linda is active in the feminist movement; Linda is a bank teller; Linda is a bank teller and is active in the feminist movement.” The two psychologists would then ask their test subjects to guess which of the three statements was most likely. Initially, these test subjects were lowly undergraduates, but as Kahneman and Tversky performed and then re-performed the experiment, they gradually upgraded: using graduate students with a strong background in statistics next—and then eventually faculty. Yet, no matter how sophisticated the audience to which they showed this description, what Kahneman and Tversky found was that virtually everyone always thought that the statement “Linda is a bank teller and active in the feminist movement” was more likely than the statement “Linda is a bank teller.” But as only a little thought reveals, that is impossible.

I’ll let the journalist Michael Lewis, who recently published a book about the work of the pair of psychologists entitled The Undoing Project: A Friendship That Changed Our Minds, explain the impossibility:

“Linda is a bank teller and is active in the feminist movement” could never be more probable than “Linda is a bank teller.” “Linda is a bank teller and is active in the feminist movement” was just a special case of “Linda is a bank teller.” “Linda is a bank teller” included “Linda is a bank teller and is active in the feminist movement” along with “Linda is a bank teller and likes to walk naked through Serbian forests” and all other bank-telling Lindas. One description was entirely contained by the other.

“Linda is a bank teller and is active in the feminist movement” simply cannot be more likely than “Linda is a bank teller.” As Louis Menand of Harvard observed about the “Linda problem” in The New Yorker in 2005, thinking that “bank teller and feminist” is more likely than the “bank teller” description “requires two things to be true … rather than one.” If the conjoined statement is true, then so is the simpler one; that’s why, as Lewis observed in an earlier article on the subject, it’s “logically impossible” to think otherwise. Kahneman and Tversky’s finding is curious enough on its own terms for what it tells us about human cognition, of course, because it exposes a reaction that virtually every human being ever encountering it has made. But what makes it significant in the present context is that it is also the cognitive error Lee C. Bollinger makes in his opinion piece.
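The conjunction rule at the heart of the “Linda problem” can even be checked mechanically. The sketch below (a minimal illustration; the number of trials and the seed are arbitrary) draws random joint distributions over the four possible combinations of “bank teller” and “feminist” and confirms that the probability of the conjunction never exceeds the probability of “bank teller” alone:

```python
import random

# Whatever probabilities govern "bank teller" (A) and "feminist" (B),
# P(A and B) can never exceed P(A): every world in which Linda is a
# feminist bank teller is also a world in which she is a bank teller.
rng = random.Random(7)
for _ in range(1000):
    # Random weights for the four possible worlds:
    # (A and B), (A and not-B), (not-A and B), (not-A and not-B).
    w = [rng.random() for _ in range(4)]
    total = sum(w)
    p_a_and_b = w[0] / total
    p_a = (w[0] + w[1]) / total
    assert p_a_and_b <= p_a
print("the conjunction never beat its component in 1,000 random distributions")
```

No distribution of traits, however favorable to the feminist reading of Linda, can make the subset larger than the set that contains it.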

“The Linda problem,” as Michael Lewis observed in The Undoing Project, “resembled a Venn diagram of two circles, but with one of the circles wholly contained by the other.” One way to see the point, perhaps, is in relation to prison incarceration. As political scientist Marie Gottschalk of the University of Pennsylvania has observed, although the

African-American incarceration rate of about 2,300 per 100,000 people is clearly off the charts and a shocking figure … [f]ocusing so intently on these racial disparities often obscures the fact that the incarceration rates for other groups in the United States, including whites and Latinos, is also comparatively very high.

While the African-American rate of imprisonment is absurdly high, in other words, the “white incarceration rate in the United States is about 400 per 100,000,” which is at least twice the rate of “the most punitive countries in Western Europe.” What that means is that, while it is possible to do something regarding, say, African-American incarceration rates by lowering the overall incarceration rates, it can’t be done the other way. “Even,” as Gottschalk says, “if you released every African American from US prisons and jails today, we’d still have a mass incarceration crisis in this country.” Releasing more prisoners means fewer minority prisoners, but releasing minority prisoners still means a lot of prisoners.

Which, after all, is precisely the point of the “Linda problem”: just as “bank teller” contains both “bank teller” and any other set of descriptors that could be added to “bank teller,” so too does “prisoner” include any other set of descriptors that could be added to it. Hence, reducing the prison population will necessarily reduce the numbers of minorities in prison—but reducing the numbers of minority prisoners will not do (much) to reduce the number of prisoners. “Minority prisoners” is a circle contained within the circle of “prisoners”—saying you’d like to reduce the numbers of minority prisoners is essentially to say that you don’t want to do anything about prisons.

Hence, when Hillary Clinton asked her audience during the recent presidential campaign “If we broke up the big banks tomorrow … would that end racism?” and “Would that end sexism?”—and then answered her own question by saying, “No,” what she was effectively saying was that she would do nothing about any of those things, racism and sexism included. (Which, given that this was the candidate who asserted that politicians ought to have “both a public and a private position,” is not out of the question.) Wanting “both,” or an alleviation of economic inequality and discrimination—as Lee Bollinger and “every academic” Walter Benn Michaels has ever talked to say they want—is simply the most efficient way of not getting either. As Michaels says, “diversity and antidiscrimination have done and can do [emp. added] nothing whatsoever to mitigate economic inequality.” The sooner that Americans realize that Michaels isn’t kidding—that anti-discrimination, identity politics is not an alternative solution, but in fact no solution—and why he’s right, the sooner that something could be done about America’s actual problems.

Assuming, of course, that’s something anyone really wants.

Noble Lie

With a crew and good captain well seasoned,
They left fully loaded for Cleveland.
—“The Wreck of the Edmund Fitzgerald.” 1976.

The comedian Bill Maher began the “panel” part of his show Real Time the other day—the last episode before the election—by noting that virtually every political expert had dismissed Donald Trump’s candidacy at every stage of the past year’s campaign. When Trump announced he was running, Maher observed, the pundits said “oh, he’s just saying that … because he just wants to promote his brand.” They said Trump wouldn’t win any voters, Maher noted—“then he won votes.” And then, Maher went on, they said he wouldn’t win any primaries—“then he won primaries.” And so on, until Trump became the Republican nominee. So much we know, but what was of interest about the show was the response of one of Maher’s guests: David Frum, a Canadian who despite his immigrant origins became a speechwriter for George W. Bush, invented the phrase “axis of evil,” and has since joined the staff of the supposedly liberal magazine, The Atlantic. The interest of Frum’s response was not only how marvelously inane it was—but also how it had already been decisively refuted only hours earlier, by men playing a boy’s game on the Lake Erie shore.

Maybe I’m being cruel, however: like most television shows, Real Time with Bill Maher is shot before it is aired, and this episode was released last Friday. Frum then may not have been aware, when he said what he said, that the Chicago Cubs won the World Series on Wednesday—and if he is like most people, Frum is furthermore unaware of the significance of that event, which goes (as I will demonstrate) far beyond matters baseball. Still, surely Frum must have been aware of how ridiculous what he said was, given that the conversation began with Maher reciting the failures of the pundit class—and Frum admitted to belonging to that class. “I was one of those pundits that you made fun of,” Frum confessed to Maher—yet despite that admission, Frum went on to make a breathtakingly pro-pundit argument.

Trump’s candidacy, Frum said, demonstrated the importance of the gatekeepers of the public interest—the editors of the national newspapers, for instance, or the anchors of the network news shows, or the mandarins of the political parties. Retailing a similar argument to one made by, among others, Salon’s Bob Cesca—who contended in early October that “social media is the trough from which Trump feeds”—Frum proceeded to make the case that the Trump phenomenon was only possible once apps like Facebook and Twitter enabled presidential candidates to bypass the traditional centers of power. To Frum, in other words, the proper response to the complete failure of the establishment (to defeat Trump) was to prop up the establishment (so as to defeat future Trumps). To protect against the failure of experts Frum earnestly argued—with no apparent sense of irony—that we ought to give more power to experts.

There is, I admit, a certain schadenfreude in witnessing a veteran of the Bush Administration tout the importance of experts, given that George W.’s regime was notable for, among other things, “systematically chang[ing] and suppress[ing] … scientific reports about global warming” (according to the British Broadcasting Corporation)—and that is not even to discuss how Bush cadres torpedoed the advice of the professionals of the CIA vis-à-vis the weapons-buying habits of a certain Middle Eastern tyrant. The larger issue, however, is that the very importance of “expert” knowledge has been undergoing a deep interrogation for decades now—and that the victory of the Chicago Cubs in this year’s World Series has brought much of that critique to the mainstream.

What I mean can be demonstrated by a story told by the physicist Freeman Dyson—a man who never won a Nobel Prize, nor even received a doctorate, but nevertheless was awarded a place at Princeton’s Institute for Advanced Study at the ripe age of thirty by none other than Robert Oppenheimer (the man in charge of the Manhattan Project) himself. Although Dyson has had a lot to say during his long life—and a lot worth listening to—on a wide range of subjects, from interstellar travel to Chinese domestic politics, of interest to me in connection to Frum’s remarks on Donald Trump is an article Dyson published in The New York Review of Books in 2011, about a man who did win the Nobel Prize: the Israeli psychologist Daniel Kahneman, who won the prize for economics in 2002. In that article, Dyson told a story about himself: specifically, what he did during World War II—an experience, it turns out, that leads by a circuitous path over the course of seven decades to the epic clash resolved by the shores of Lake Erie in the wee hours of 3 November.

In that article, entitled “How to Dispel Your Illusions,” Dyson tells the story of being a young statistician with the Royal Air Force’s Bomber Command in the spring of 1944—a force that suffered, according to the United Kingdom’s Bomber Command Museum, “a loss rate comparable only to the worst slaughter of the First World War trenches.” To combat this horror, Dyson was charged with discovering the common denominator between the bomber crews that survived until the end of their thirty-mission tour of duty (about 25% of all air crews). Since they were succeeding when three out of four of their comrades were failing, Dyson’s superiors assumed that those successful crews were doing something that their less-successful colleagues (who were mostly so much less successful that they were no longer among the living) were not.

Bomber Command, that is, had a theory about why some survived and some died: “As [an air crew] became more skillful and more closely bonded,” Dyson writes that everyone at Bomber Command thought, “their chances of survival would improve.” So Dyson, in order to discover what that something was, plunged in among the data of all the bombing missions the United Kingdom had run over Germany since the beginning of the war. If he could find it, maybe it could be taught to the others—and the war brought that much closer to an end. But despite all his searching, Dyson never found that magic ingredient.

It wasn’t that Dyson didn’t look hard enough for it: according to Dyson, he “did a careful analysis of the correlation between the experience of the crews and their loss rates, subdividing the data into many small packages so as to eliminate effects of weather and geography.” Yet, no matter how many different ways he looked at the data, he could not find evidence that the air crews that survived were any different than the ones shot down over Berlin or lost in the North Sea: “There was no effect of experience,” Dyson’s work found, “on loss rate.” Who lived and who died while attempting to burn Dresden or blow up Hamburg was not a matter of experience: “whether a crew lived or died,” Dyson writes, “was purely a matter of chance.” The surviving crews possessed no magical ingredient. They couldn’t—perhaps because there wasn’t one.
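Dyson’s null result is exactly what a memoryless model predicts. The sketch below simulates crews facing the same hazard on every sortie and tabulates the observed loss rate by experience; the 4 percent per-mission loss rate is an assumed, illustrative figure, not Dyson’s actual number:

```python
import random
from collections import defaultdict

# Per-mission chance of being lost: an assumed, illustrative figure,
# not Dyson's actual number.
LOSS_RATE = 0.04
rng = random.Random(3)

flights = defaultdict(int)  # missions of prior experience -> sorties flown
losses = defaultdict(int)   # missions of prior experience -> sorties lost

for _ in range(100_000):  # simulated crews
    for experience in range(30):  # a thirty-mission tour
        flights[experience] += 1
        if rng.random() < LOSS_RATE:
            losses[experience] += 1
            break  # the crew is lost; its tour ends here

# The observed loss rate comes out flat across experience levels:
# surviving twenty-nine missions confers no protection on the thirtieth.
for experience in (0, 10, 29):
    print(experience, round(losses[experience] / flights[experience], 3))
```

If chance alone governs survival, then however finely the data are subdivided—by weather, geography, or experience, as Dyson subdivided them—the loss rate looks the same everywhere, which is precisely what he found.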

Still, despite the conclusiveness of Dyson’s results, his studies had no effect on the operations of Bomber Command: “The crews continued to die, experienced and inexperienced alike, until Germany was overrun and the war finally ended.” While Dyson’s research suggested that dying in the stratosphere over Lübeck had no relation to skill, no one at the highest levels wanted to admit that the survivors weren’t experts—that they were instead just lucky. Perhaps, had the war continued, Dyson’s argument might eventually have won out—but the war ended, fortunately (or not) for the air crews of the Royal Air Force, before Bomber Command had to admit he was right.

All of that, of course, might appear to have little to do with the Chicago Cubs—until it’s recognized that the end of their century-long championship drought had everything to do with the eventual success of Dyson’s argument. Unlike Bomber Command, the Cubs have been at the forefront of what The Ringer’s Rany Jazayerli calls baseball’s “Great Analytics War”—and unlike the contest between Dyson and his superiors, that war has had a definite conclusion. The battle between what Jazayerli calls an “objective, data-driven view” and an older vision of baseball “ended at 48 minutes after midnight on November 3”—when the Cubs (led by a general manager who, like Dyson, trusted to statistical analysis) recorded the final out of the 2016 season.

That general manager is Theo Epstein—a man who was converted to Dyson’s “faith” at an early age. According to ESPN, Epstein, “when he was 12 … got his first Bill James historical abstract”—and as many now recognize, James pioneered the application to winning baseball games of the same basic approach Dyson had used to think about how to bomb Frankfurt. An obscure graduate of the University of Kansas, James took a job after graduation as a night security guard at the Stokely-Van Camp pork and beans cannery in Kansas City—and while isolated in what one imagines were the sultry (or wintry) Kansas City evenings of the 1970s, James had plenty of time to think about what interested him. That turned out to be somewhat like the problem Dyson had faced a generation earlier: where Dyson was concerned with how to win World War II, James was interested in what appeared to be the much-less portentous question of how to win the American League. James thereby invented an entire field—what’s now known as sabermetrics, the statistical study of baseball—and the tools he invented have become the keys to baseball’s kingdom. After all, Epstein—employed by a team owner who hired James as a consultant in 2003—not only used James’ work to end the Cubs’ errand in baseball’s wilderness but also, as all the world knows, constructed the Boston Red Sox championship teams of 2004 and 2007.

What James had done, of course, was show how the supposed baseball “experts”—the ex-players and cronies that dominated front offices at the time—in fact knew very little about the game: they did not know, for example, that the most valuable single thing a batter can do is get on base, or that stolen bases are, for the most part, a waste of time. (The risk of making an out, as David Smith shows in “Maury Wills and the Value of a Stolen Base,” outweighs the benefit of gaining a base.) James’ insights had not merely furnished the weaponry used by Epstein; during the early 2000s another baseball team, the Oakland A’s, under their general manager Billy Beane, had used James-inspired work to reach the playoffs four consecutive years (2000 through 2003) and to win twenty consecutive games in 2002—a run famously chronicled in journalist Michael Lewis’ book Moneyball: The Art of Winning an Unfair Game, which later became a Hollywood movie starring Brad Pitt. What isn’t as widely known, however, is that Lewis noticed the intellectual connection between this work in the sport of baseball and the work Dyson thought of as similar to his own as a statistician for Bomber Command: that of the psychologist Daniel Kahneman and his late colleague, Amos Tversky.

The connection between James, Kahneman, and Tversky—an excellent name for a law firm—was first noticed, Lewis says, in a review of his Moneyball book by University of Chicago professors Cass Sunstein, of the law school, and Richard Thaler, an economist. When Lewis described the failures of the “old baseball men,” and conversely Beane’s success, the two professors observed that “Lewis is actually speaking here of a central finding in cognitive psychology”: the finding upon which Kahneman and Tversky based their careers. Whereas Billy Beane’s enemies on other baseball teams tended “to rely on simple rules of thumb, on traditions, on habits, on what other experts seem to believe,” Sunstein and Thaler pointed out that Beane relied on the same principle that Dyson found when examining the relative success of bomber pilots: “Statistics and simple arithmetic tell us more about ourselves than expert intuition.” While Bomber Command, in other words, relied on the word of their “expert” pilots, who perhaps might have said they survived a run over a ball-bearing plant because of some maneuver or other, baseball front offices relied for decades on ex-players who thought they had won some long-ago game on the basis of some clever piece of baserunning. Tversky and Kahneman’s work, however—like that of Beane and Dyson—suggested that much of what passes as “expert” judgment can be, for decades if not centuries, an edifice erected on sand.

That work has, as Lewis found after investigating the point when his attention was drawn to it by Sunstein and Thaler’s article, been replicated in several fields: in the work of the physician Atul Gawande, for instance, who, Lewis says, “has shown the dangers of doctors who place too much faith in their intuition.” The University of California, Berkeley finance professor Terry Odean “examined 10,000 individual brokerage accounts to see if stocks the brokers bought outperformed stocks they sold and found that the reverse was true.” And another doctor, Toronto’s Donald Redelmeier—who studied under Tversky—found “that an applicant was less likely to be admitted to medical school if he was interviewed on a rainy day.” In all of these cases (and this is not even to bring up the subject of, say, the financial crisis of 2007-08, a crisis arguably brought on precisely by the advice of “experts”), investigation has shown that “expert” opinion may not be what it is cracked up to be. It may actually be worse than the judgment of laypeople.

If so, I might suggest, then David Frum’s “expert” prescription for avoiding a replay of the Trump candidacy—reinforcing the rule of experts, a proposition that itself makes several questionable assumptions about the events of the past two years, if not decades—stops appearing reasonable. It begins, in fact, to appear rather more sinister: an attempt by those in Frum’s position in life—what we might call Eastern, Ivy League-types—to will themselves into believing that Trump’s candidacy is fueled by a redneck resistance to “reason,” along with good old-fashioned American racism and sexism. But what the Cubs’ victory might suggest is that what could actually be powering Trump is the recognition by the American people that many of the “cures” dispensed by the American political class are nothing more than snake oil proffered by cynical tools like David Frum. That snake oil doubles down on exactly the same “expert” policies (like freeing capital to wander the world while increasingly shackling labor) that, debatably, led to the rise of Trump in the first place—a message that, presumably, must be welcome to Frum’s superiors at whatever the contemporary equivalent of Bomber Command is.

Still, despite the fact that the David Frums of the world continue to peddle their nonsense in polite society, even this descendant of South Side White Sox fans must allow that Theo Epstein’s victory has given cause for hope down here at the street-level of a Midwestern city that has, for more years than the Cubs have been in existence, been the plaything of Eastern-elite labor and trade policies. It’s a hope that, it seems, now has a Ground Zero.

You can see it at the intersection of Clark and Addison.

The Oldest Mistake

Monte Ward traded [Willie] Keeler away for almost nothing because … he made the oldest mistake in management: he focused on what the player couldn’t do, rather than on what he could.
The New Bill James Historical Baseball Abstract

 

 

What does an American “leftist” look like? According to academics and the inhabitants of Brooklyn and its spiritual suburbs, there are means of tribal recognition: unusual hair or jewelry; a mode of dress either strikingly old-fashioned or futuristic; peculiar eyeglasses, shoes, or other accessories. There’s a deep concern about food, particularly that such food be the product of as small, and preferably foreign, an operation as possible—despite a concomitant enmity toward global warming. Their subject of study at college was at minimum one of the humanities, and possibly self-designed. If they are fans of sports at all, the sport is either extremely obscure, obscenely technical, and played without a ball—think bicycle racing—or it is soccer. And so on. Yet, while each of us has exactly such a picture in mind—probably you know at least a few such people, or are one yourself—that is not what a real American leftist looks like at the beginning of the twenty-first century. In reality, a person of the actual left today drinks macro-, not micro-, brews, studied computer science or some other such discipline at university, and—above all—is a fan of either baseball or football. And why is that? Because such a person understands statistics intuitively—and the great American political battle of the twenty-first century will be led by the followers of Strabo, not Pyrrho.

Both men were Greeks: the one a geographer, the other a philosopher—the latter often credited with being one of the first “Westerners” to visit India. “Nothing really exists,” Pyrrho reportedly held, “but human life is governed by convention”—a philosophy very like that of the current American “cultural left,” governed as it is by the notion, as put by American literary critic Stanley Fish, that “norms and standards and rules … are in every instance a function or extension of history, convention, and local practice.” Arguably, most of the “political” work of the American academy over the past several generations has been done under that rubric: as Fish and others have admitted in recent years, it’s only by acceding to some version of that doctrine that anyone can work as an American academic in the humanities these days.

Yet while “official” leftism has prospered in the academy under a Pyrrhonian rose, in the meantime enterprises like fantasy football and, above all, sabermetrics have expanded as a matter of “entertainment.” But what an odd form of relaxation! It’s a bizarre kind of escapism that requires a familiarity with both acronyms and the formulas used to compute them: WAR, OPS, DIPS, and above all (with a nod to Greek antecedents), the “Pythagorean expectation.” Yet the work on these matters has mainly been undertaken as a purely amateur endeavor—Bill James spent decades putting out his baseball work without any remuneration, until finally being hired by the Boston Red Sox in 2003 (the same year that Michael Lewis published Moneyball, a book about how the Oakland A’s were using methods pioneered by James and his disciples). Still, all of these various methods of computing the value of both a player and a team have a perhaps-unintended effect: that of training the mind in the principle of the Greek geographer Strabo.

“It is proper to derive our explanations from things which are obvious,” Strabo wrote two thousand years ago, in a line that would later be adopted by the Englishman who founded geology, Charles Lyell. In his Principles of Geology (which largely established the field) Lyell held—in contrast to the mysteriousness of Pyrrho—that the causes of things are likely similar to those already around us, and not due to unique, unrepeatable events. Similarly, sabermetricians—as opposed to the old-school scouts depicted in the film version of Moneyball—judge players based on their performance on the field, not on their nebulous “promise” or “intangibles.” (In Moneyball scouts were said to judge players on such qualities as the relative attractiveness of their girlfriends, which was said to signify a player’s own confidence in his ability.) Sabermetricians disregard such “methods” of analysis in favor of examination of the acts performed by the player as recorded by statistics.

Why, however, would that methodological commitment lead sabermetricians to be politically “liberal”—or for that matter, why would it lead in a political direction at all? The answer to the latter question is, I suspect, inevitable: sabermetrics, after all, is a discipline well-suited for the purpose of discovering how to run a professional sports team—and in its broadest sense, managing organizations simply is what “politics” is. The Greek philosopher Aristotle, for that reason, defined politics as a “practical science”—as the discipline of organizing human beings for particular purposes. It seems inevitable then that at least some people who have spent time wondering about, say, how to organize a baseball team most effectively might turn their imaginations towards some other end.

Still, even were that so, why “liberalism,” however that is defined, as opposed to some other kind of political philosophy? Going by anecdotal evidence, after all, the most popular such doctrine among sports fans might be libertarianism. Yet, besides the fact that libertarianism is the philosophy of twelve-year-old boys (not necessarily a knockdown argument against its success), it seems to me that anyone following the methods of sabermetrics will be led towards positions usually called “liberal” in today’s America, because from that sabermetrical, Strabonian perspective certain key features of the American system will nearly instantly jump out.

The first of those features will be that, as it now stands, the American system is designed in a fashion contrary to the first principle of sabermetrical analysis: the Pythagorean expectation. As Charles Hofacker described it in a 1983 article for Baseball Analyst, the “Pythagorean equation was devised by Bill James to predict winning percentage from … the critical difference between runs that [a team] scores and runs that it allows.” By comparing these numbers—the ratio of a team’s runs scored and runs allowed versus the team’s actual winning percentage—James found that a rough approximation of a team’s real value could be determined: generally, a large difference between those two sets of numbers means that something fluky is happening.

If a team scores a lot of runs while also preventing its opponents from scoring, in other words, and yet somehow isn’t winning as many games as those numbers would suggest, then that suggests that that team is either tremendously unlucky or there is some hidden factor preventing success. Maybe, for instance, that team is scoring most of its runs at home because its home field is particularly friendly to the type of hitters the team has … and so forth. A disparity between runs scored/runs allowed and actual winning percentage, in short, compels further investigation.
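The check James proposed can be written down directly. His original formula puts a team’s expected winning percentage at RS² / (RS² + RA²), where RS and RA are runs scored and runs allowed; later refinements lower the exponent to about 1.83. The team in this sketch, with its round numbers, is hypothetical.

```python
def pythagorean_expectation(runs_scored, runs_allowed, exponent=2.0):
    """Bill James's Pythagorean expectation: predicted winning percentage
    from runs scored and runs allowed. James's original exponent was 2;
    later refinements put it closer to 1.83."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

# Hypothetical team: outscores its opponents 800 runs to 600 over a season.
expected = pythagorean_expectation(800, 600)
print(round(expected, 3))  # 0.64 -- about 104 wins over 162 games

# A team with these run totals that actually won, say, only 90 games
# (a .556 percentage) would be flagged for further investigation.
```

The disparity between this expected percentage and a team’s actual record is the alarm bell described above: a large gap means luck, park effects, or some hidden structural factor.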

Weirdly, however, the American system regularly produces similar disparities—and yet while, in the case of a baseball team, such a disparity would set off alarms for a sabermetrician, no alarms are set off in the case of the so-called “official” American left, which apparently has resigned itself to the seemingly inevitable. In fact, instead of being the subject of curiosity and even alarm, many of the features of the U.S. Constitution, like the Senate and the Electoral College—not to speak of the Supreme Court itself—are expressly designed to thwart what Chief Justice Earl Warren called “the clear and strong command of our Constitution’s Equal Protection Clause”: the idea that “Legislators represent people … [and] are elected by voters, not farms or cities or economic interests.” Whereas a professional baseball team, in the post-James era, would be remiss to ignore a difference between its ratio of runs scored and allowed and its games won and lost, under the American political system the difference between the will of the electorate as expressed by votes cast and the actual results of that system as expressed by legislation passed is not only ignored but actively encouraged.

“The existence of the United States Senate”—wrote Justice Harlan, for example, in his dissent to the 1962 case of Baker v. Carr—“is proof enough” that “those who have the responsibility for devising a system of representation may permissibly consider that factors other than bare numbers should be taken into account.” That is, the existence of the U.S. Senate, which sends two senators from each state regardless of each state’s population, is support enough for those who believe—as the American “cultural left” does—in the importance of factors like “history” in political decisions, as opposed to, say, the will of the American voters as expressed by the tally of all American votes.

As Jonathan Cohn remarked in The New Republic not long ago, in the Senate “predominantly rural, thinly populated states like Arkansas and North Dakota have the exact same representation as more urban, densely populated states like California and New York”—meaning that voters in those rural states have more effective political power than voters in the urban ones do. In sum, the Senate is, as Cohn says, one of the Constitution’s “levers for thwarting the majority.” Or to put it in sabermetrical terms, it is a means of hiding a severe disconnect in America’s Pythagorean expectation.
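Cohn’s point can be put in rough numbers. Using approximate 2016 census figures—about 39 million people in California against about 760,000 in North Dakota, each state electing exactly two senators—a back-of-the-envelope sketch:

```python
# Approximate populations (2016 census estimates, rounded); every state
# elects two senators regardless of how many people live in it.
populations = {"California": 39_000_000, "North Dakota": 760_000}
SENATORS_PER_STATE = 2

people_per_senator = {
    state: pop / SENATORS_PER_STATE for state, pop in populations.items()
}
ratio = people_per_senator["California"] / people_per_senator["North Dakota"]
print(f"A North Dakotan's Senate vote carries roughly {ratio:.0f}x "
      f"the weight of a Californian's.")
```

A fifty-to-one disparity between input (voters) and output (representation) is precisely the kind of gap that, in a baseball context, would send an analyst hunting for the hidden factor.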

Some will defend that disconnect, as Justice Harlan did over fifty years ago, on grounds familiar to the “cultural left”: those of “history” and “local practice” and so forth. In other words, that is how the Constitution originally constructed the American state. Yet, attempting (in Cohn’s words) to “prevent majorities from having the power to determine election outcomes” is a dangerous undertaking; as The Atlantic’s Ta-Nehisi Coates wrote recently about certain actions taken by the Republican party designed to discourage voting, to “see the only other major political party in the country effectively giving up on convincing voters, and instead embarking on a strategy of disenfranchisement, is a bad sign for American democracy.” In baseball, the sabermetricians know, a team with a large difference between its “Pythagorean expectation” and its win-loss record will usually “snap back” to the mean. In politics, as everyone since before Aristotle has known, such a “snap back” is usually a bit more costly than, say, the price of a new pitcher—which is to say that, if you see any American revolutionaries around you right now, he or she is likely wearing, not a poncho or a black turtleneck, but an Oakland A’s hat.

The Weight We Must Obey

The weight of this sad time we must obey,
Speak what we feel, not what we ought to say.
—King Lear, V.iii

There’s a scene in the film Caddyshack that at first glance seems like a mere throwaway one-liner, but that rather neatly sums up what I’m going to call the “Kirby Puckett” problem. Ted Knight’s Judge Smails asks Chevy Chase’s Ty Webb how, if Webb doesn’t keep score as he claims, he measures himself against other golfers. “By height,” Webb replies. It’s a witty enough reply on its own, of course. But it also (and perhaps there’s a greater humor to be found here) raises a rather profound question: is there a way to know someone is a great athlete—aside from their production on the field? Or, to put the point another way, what do bodies tell us?

I call this the “Kirby Puckett” problem because of something Bill James, the noted sabermetrician, once wrote in his New Historical Baseball Abstract: “Kirby Puckett,” James observed, “once said that his fantasy was to have a body like Glenn Braggs’.” Never heard of Glenn Braggs? Well, that’s James’ point: Glenn Braggs looked like a great ballplayer—“slender, fast, very graceful”—but Kirby Puckett was a great ballplayer: a first-ballot Hall of Famer, in fact. Yet despite his own greatness—and surely Kirby Puckett was aware he was, by any measure, a better player than Glenn Braggs—Puckett could not help but wish he appeared “more like” the great player he, in reality, was.

What we can conclude from this is that a) we all (or most of us) have an idea of what athletes look like, and b) it’s extremely disturbing when that idea is called into question, even when you yourself are a great athlete.

This isn’t a new problem, to be sure. It’s the subject, for instance, of Moneyball, the book (and the movie) about how the Oakland A’s, and particularly their general manager Billy Beane, began to apply statistical analysis to baseball. “Some scouts,” wrote Michael Lewis in that book, about the difference between the A’s old and new ways of doing things, “still believed they could tell by the structure of a young man’s face not only his character but his future in pro ball.” What Moneyball is about is how Beane and his staff learned to ignore what their eyes told them, and judge their players solely on the numbers.

Or, in other words, to predict future production only by past production, instead of by what appearances seemed to promise. Now, fairly obviously, that doesn’t mean that coaches and general managers of every sport should ignore their players’ appearances when evaluating their future value. Indisputably, many different sports have an ideal body. Jockeys, of course, are small men, whereas football players are large ones. Basketball players are large, too, but in a different way: taller and not as bulky. Runners and bicyclists have yet a different shape. Pretty clearly, completely ignoring those factors would quickly lead any talent judge far astray.

Still, the variety of successful body types in a given sport might be broader than we might imagine—and that variety might be broader yet depending on the sport in question. Golf for example might be a sport with a particularly broad range of potentially successful bodies. Roughly speaking, golfers of almost any body type have been major champions.

“Bantam” Ben Hogan, for example, greatest of ballstrikers, stood 5’7” and weighed about 135 pounds during his prime; going farther back, Harry Vardon, who invented the grip used almost universally today and won the British Open six times, stood 5’9” and weighed about 155 pounds. On the other hand, Jack Nicklaus was known as “Fat Jack” when he first came out on tour—a nickname that tells its own story—and long before then Harry Vardon had competed against Ted Ray, who won two majors of his own (the 1912 British and the 1920 U.S. Opens) and was described by his contemporaries as “hefty.” This is not even to bring up, say, John Daly.

The mere existence of John Daly, however, isn’t strong enough to expand our idea of what constitutes an athlete’s body. Golfers like Daly and the rest don’t suggest that the overweight can be surprisingly athletic; instead, they provoke the question of whether golf is a sport at all. “Is Tiger Woods proof that golf is a sport, or is John Daly confirmation to the contrary?” asks a post on Popular Science’s website entitled “Is Golf a Sport?” There’s even a Facebook page entitled “Golf Is Not a Sport.”

Facebook pages like the above confirm just how difficult it is to overcome our idealized notions of what athletes are. It’s to the point that if somebody, no matter how skillful his efforts, doesn’t appear athletic, then we are more likely to narrow our definition of athletic acts than to expand our definition of athletic bodies. Thus, Kirby Puckett had trouble thinking of himself as an athlete, even though he excelled in a sport that virtually anyone would define as one.

Where that conclusion could (and, to some minds, should) lead us is to the notion that a great deal of what we think of as “natural” is, in fact, “cultural”—that favorite thesis of the academic Left in the United States, the American liberal arts professors proclaiming the good news that culture trumps nature. One particular subspecies of the gens is the supposedly expanding (aaannnddd rimshot) field called by its proponents “Fat Studies,” which (according to Elizabeth Kolbert of The New Yorker) holds that “weight is not a dietary issue but a political one.” What these academics think, in other words, is that we are too much the captives of our own ideas of what constitutes a proper body.

In a narrow (or, anti-wide) sense, that is true: even Kirby Puckett was surprised that he, Kirby Puckett, could do Kirby Puckett-like things while looking like Kirby Puckett. To the academics involved in “Fat Studies” his reaction might be a sign of “fatphobia, the fear and hatred of fatness and fat people.” It’s the view of Kirby Puckett, that is, as self-hater; one researcher, it seems, has compared “fat prejudice … to anti-semitism.” In “a social context in which fat hatred is endemic,” this line of thinking might go, even people who achieve great success with the bodies they have can’t imagine that success without the bodies that culture tells them ought to be attached to it.

What this line of work might then lead us to is the conclusion that the physical dimensions of a player matter very little. That would make the success of each athlete largely independent of physical advantage—and thereby demonstrate that thousands of coaches everywhere would, at least in golf, be justified in asserting that success is due to the “will to succeed” rather than a random roll of the genetic dice. It might mean that nations looking (in expectation, perhaps, of the next Summer Olympics, where golf will be a medal sport) to achieve success in golf—like, for instance, the Scandinavian nations whose youth athletics programs groom golfers, or nations like Russia or China with large populations but next to no national golf tradition—should look for young people with particular psychological characteristics rather than particular physical ones.

Yet whereas “Fat Studies” or the like might focus on Kirby Puckett’s self-image, Bill James instead focuses on Kirby Puckett’s body: the question James asks isn’t whether Puckett played well despite his bad self-image, but rather whether Puckett played well because he actually had a good body for baseball. James asks whether “short, powerful, funny-looking kind of guy[s]” actually have an advantage when it comes to baseball, rather than the assumed advantage of height, which supposedly allows for a faster bat speed, among other things. “Long arms,” James speculates, “really do not help you when you’re hitting; short arms work better.” Maybe, in fact, “[c]ompressed power is more effective than diffuse power,” and James goes on to name a dozen or more baseball stars who were all built something like Honus Wagner, who stood 5’11” and weighed 200 pounds. Which, as it happens, was also about the stat line for Jack Nicklaus in his prime.

So, as it happens, were a number of other golfers. For years the average height of a PGA Tour player was usually said to be 5’9”; these days, due to players like Dustin Johnson, that figure is most often said to be about 5’11”. Still—as the website Golf Today remarks—“very tall yet successful golfers are a rarity.” I don’t have the ShotLink data—which records every shot hit on the PGA Tour since 2003—to support the idea that guys of one size or another have the natural advantage, though today it could probably be obtained. What’s interesting about even asking the question, however, is that it is a much-better-than-merely-theoretically-solvable problem—which significantly distinguishes it from the question that might be framed around our notions of what constitutes an athletic body, as might be done by the scholars of “Fat Studies.”

Even aside from the narrow issue of allocating athletic resources, however, there’s reason to distrust those scholars. It’s true, to be sure, that Kirby Puckett’s reaction to being Kirby Puckett lends some basis for thinking that a critical view of our notions of the body is salutary—especially in an age when those notions are, to add to an already-frothy mix of elements, increasingly driven by an advertising industry that, in the guise of either actors or models, endlessly seeks the most attractive bodies.

It would be easier to absorb such warnings, however, were there not evidence that obesity is not remaining constant, but is rather a, so to say, growing problem. As Kolbert reports, the federal government’s Centers for Disease Control, which has for decades measured American health, found that whereas in the early 1960s a quarter of Americans were overweight, now more than a third are. And in 1994, their results were written up in the Journal of the American Medical Association: “If this was about tuberculosis,” Kolbert reports one researcher saying, “it would be called an epidemic.” Over the decade previous to that report Americans had, collectively, gained over a billion pounds.

Even if “the fat … are subject to prejudice and even cruelty,” in other words, that doesn’t mean that being that way doesn’t pose serious health risks both for the individual and for society as a whole. The extra weight carried by Americans, Kolbert for instance observes, “costs the airlines a quarter of a billion dollars’ worth of jet fuel annually,” and this isn’t to speak of the long-term health care costs that attach themselves to the public pocketbook in nearly unimaginable ways. (Kolbert notes that, for example, doors to public buildings are now built to be fifteen, instead of twelve, feet wide.)

“Fat Studies” researchers might claim in other words, as Kolbert says, that by shattering our expectations of what a body ought to be so thoroughly fat people (they insist on the term, it seems) can shift from being “revolting … agents of abhorrence and disgust” to “‘revolting’ in a different way … in terms of overthrowing authority, rebelling, protesting, and rejecting.” They might insist that “corpulence carries a whole new weight [sic] as a subversive cultural practice.” In “contrast to the field’s claims about itself,” says Kolbert however, “fat studies ends up taking some remarkably conservative positions,” in part because it “effectively allies itself with McDonald’s and the rest of the processed-food industry, while opposing the sorts of groups that advocate better school-lunch programs and more public parks.” In taking such an extreme position, in short, “Fat Studies” ends up only strengthening the most reactionary policy tendencies.

As, logically speaking, it must. “To claim that some people are just meant to be fat is not quite the same as arguing that some people are just meant to be poor,” Kolbert observes, “but it comes uncomfortably close.” Similarly, to argue that our image of a successfully athletic body is tyrannical can, if not done carefully, be little different from the fanatical coach who insists that determination is the only thing separating his charges from championships. Maybe it’s true that success in golf, and other sports, is largely a matter of “will”—but if it is, wouldn’t it be better to be able to prove it? If it isn’t, though, that would certainly enable a more rational distribution of effort all the way around: from the players themselves (who might thereby seek another sport at an earlier age) to recruiters, from national sporting agencies to American universities, who would then know what they sought. Maybe, in other words, measuring golfers by height isn’t so ridiculous at all.