At Play In The Fields Of The Lord

Logo for 2015 US Amateur at Olympia Fields Country Club

 

Behold, I send you forth as sheep in the midst of wolves:
be ye therefore wise as serpents, and harmless as doves.
—Matthew 10:16

Now that the professional Open tournaments are out of the way, the U.S. Amateur approaches. The Amateur has always been a symbol of wealth and discrimination—it was invented specifically to keep out the riff-raff of professional golfers—and the site of this year’s edition might seem particularly unfortunate, since the tournament falls just over a year after the Michael Brown shooting in Ferguson, Missouri: Olympia Fields, in Chicago’s south suburbs, is a relatively wealthy enclave among a swath of exceedingly poor villages and towns, terrain very like that of the St. Louis suburbs just a few hundred miles away. Yet there’s a deeper irony at work here that might be missed even by those who’d like to point out that similarity of setting: the tournament’s format, match play, highlights precisely what the real message of the Brown shooting was. That real message, the one that is actually dangerous to power, wasn’t the one shouted by protestors—that American police departments are “racist.” The really dangerous message is the one echoed by the Amateur: a message that, read properly, tells us that our government’s structure is broken.

The later rounds of the U.S. Amateur are played under golf’s match play rules rather than stroke play—a difference that will seem arcane to those unfamiliar with the sport, but a very significant one nevertheless. In stroke play, competitors play whatever number of holes is required—in professional tournaments, usually 72—and count up however many strokes each took: the player with the fewest strokes wins. Because every stroke by every player counts, each golfer in a stroke play event is effectively competing against the entire field. Match play works differently.

Match play consists of, as the name suggests, matches: once the field is cut to the 64 players with the lowest scores after an initial two-day stroke play qualifier, each of those 64 contestants plays an 18-hole match against one other contestant. The winner of each match moves on, until there is a champion—a single-elimination bracket exactly like the NCAA basketball tournament held every March. The winner of each match, as John Van der Borght says on the website of the United States Golf Association, “is the player who wins the most holes.” That is, what matters on every hole is only whether the golfer has shot a lower score than the opponent on that hole, not overall. Each hole starts the competition again, in other words—like flipping coins, what happened in the past is irrelevant. It’s a format that might sound hopeful, because on each hole whatever screw-ups a player commits are consigned to the dustbin of history. In fact, however, it’s just this element that makes match play the least egalitarian of formats—and ties it to Ferguson.

Tournaments conducted under match play rules are always subject to a kind of mathematical oddity called a Simpson’s Paradox: such a paradox occurs when, as the definition on Wikipedia says, it “appears that two sets of data separately support a certain hypothesis, but, when considered together, they support the opposite hypothesis.” For example, as I have mentioned in this blog before, in the first round of the PGA Tour’s 2014 Accenture Match Play tournament in Tucson, an unknown named Pablo Larrazabal shot a 68 to Hall-of-Famer Ernie Els’ 75—but because they played different opponents, Larrazabal was out of the tournament and Els was in. Admittedly, even with such an illustration the idea might still sound opaque, but the meaning can be seen by considering, for example, the tennis player Roger Federer’s record versus his rival Rafael Nadal.

Roger Federer has won 17 major championships in men’s tennis, a record—and yet many people argue that he is not the Greatest Of All Time (G.O.A.T.). They can argue that because, as Michael Steinberger pointed out in the New York Times not long ago, Federer “has a losing record against Nadal, and a lopsided one at that.” Steinberger then proceeded to argue why that record should be discarded and Federer should be called the “GOAT” anyway. But weirdly, Steinberger didn’t attempt—and neither, so far as I can tell, has anyone else—what an anonymous blogger did in 2009: a feat that demonstrates just what a Simpson’s Paradox is, and how it might apply both to the U.S. Amateur and to Ferguson, Missouri.

What that blogger did, on a blog entitled SW19—a reference to the United Kingdom’s postal code for Wimbledon, the great tennis arena—was simple: he counted up the points.

Let me repeat: he counted up the points.

That might sound trivial, of course, but as the writer of the SW19 blog realized, tennis is a game that abounds in Simpson’s Paradoxes: that is, it is a game in which it is possible to score fewer points than your opponent and still win the match. Many people don’t realize this: it might be expected, for example, that because Nadal has an overwhelmingly dominant win-loss record against Federer, he must also have won an equally dominant share of the points from the Swiss champion. But an examination of the points scored in each of their matches demonstrates that in fact the difference between them was minuscule.

The SW19 blogger wrote his post in 2009; at that time Nadal led Federer by 13 matches to 7, a 65 percent winning edge for the Spaniard. Among those 20 matches was the 2008 French Open—played on Nadal’s best surface, clay—which Nadal won in straight sets, 6-1, 6-3, 6-0. Over those 20 matches the two men played 4,394 total points: instances, that is, in which one player served and the two rallied until one of them failed to deliver the ball to the other court according to the rules. If tennis had a straightforward relationship between points and wins—like golf’s stroke play format, in which every “point” (stroke) is simply added to the total and the winner has the fewest—then it might be expected that Nadal had won about 65 percent of those 4,394 points, which would be about 2,856 points. In other words, to get a 65 percent edge in total matches, Nadal should have had about a 65 percent edge in total points: the point total, as opposed to the match record, ought to have been about 2,856 to 1,538.

Yet this, as the SW19 blogger realized, is not the case: the real margin between the two players was Nadal, 2,221, and Federer, 2,173. In other words, even including the epic beating at Roland Garros in 2008, Nadal had beaten Federer by a total of only 48 points over the course of their careers—about one percent of all the points scored. Not only that, but if that single match at the 2008 French Open is excluded, the margin shrinks to eight points. The mathematical difference between Nadal and Federer, then, is the difference between a couple of motes of dust on the edge of a coin while it’s being flipped—if what is measured is the act that is the basis of the sport, the act of scoring points. In terms of points scored, Nadal’s share was only about half a percentage point above an even split—and most of that edge was generated by a single match. Yet Nadal had a 65 percent edge in their matches.
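The gap between the match record and the point totals can be checked with a few lines of arithmetic. Here is a minimal sketch in Python, using only the figures reported above (the variable names and the rounding are mine):

```python
# Figures reported by the SW19 blogger for Federer vs. Nadal through 2009.
nadal_matches, federer_matches = 13, 7
nadal_points, federer_points = 2221, 2173

total_points = nadal_points + federer_points                      # 4,394
match_share = nadal_matches / (nadal_matches + federer_matches)   # 0.65

# If points tracked matches, Nadal "should" hold about 65% of all points.
expected_nadal = round(match_share * total_points)                # ~2,856

print(f"Expected split at 65%: {expected_nadal} to {total_points - expected_nadal}")
print(f"Actual split: {nadal_points} to {federer_points}")
print(f"Actual margin: {nadal_points - federer_points} points "
      f"({(nadal_points - federer_points) / total_points:.1%} of all points)")
```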

How did that happen? The answer is that the structure of tennis scoring is similar to that of match play in golf: the relation between wins and points isn’t direct. In fact, as the SW19 blogger shows, of the twenty matches Nadal and Federer had played to that moment in 2009, Federer had actually scored more points than Nadal in three of them—and still lost those matches. If there were a direct relation between points and wins in tennis, that is, the record between Federer and Nadal would have stood even, at 10-10, instead of what it was in reality, 13-7—a record that would have accurately captured the real point differential between them. But because what matters in tennis isn’t—exactly—the total number of points you score, but instead the number of games and sets you win, it is entirely possible to score more points than your opponent in a tennis match—and still lose. (Or the converse.)

The reason why that is possible, as Florida State University professor Ryan Rodenberg put it in The Atlantic not long ago, is due to “tennis’ decidedly unique scoring system.” (Actually, not unique, because as might be obvious by now match play golf is scored similarly.) In sports like soccer, baseball, or stroke play golf, as sports psychologist Allen Fox once wrote in Tennis magazine, “score is cumulative throughout the contest … and whoever has the most [or, in the case of stroke play golf, least] points at the end wins.” But in tennis things are different: “[i]f you reach game point and win it, you get the entire game while your opponent gets nothing—all the points he or she won in the game are eliminated.” Just in the same way that what matters in tennis is the game, not the point, in match play golf all that matters is the hole, and not the stroke.
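A toy example makes the mechanism concrete. The sketch below uses a deliberately simplified model of a single set (points are grouped into games, and whoever wins more games wins), ignoring deuce, tiebreaks, and the nesting of games into sets; the scores are invented for illustration:

```python
# Hypothetical set, simplified: each game is (points won by A, points won by B).
# A wins four games at love; B wins six games narrowly, four points to two.
games = [(4, 0), (4, 0), (4, 0), (4, 0),
         (2, 4), (2, 4), (2, 4), (2, 4), (2, 4), (2, 4)]

a_points = sum(a for a, b in games)            # 28
b_points = sum(b for a, b in games)            # 24
a_games = sum(1 for a, b in games if a > b)    # 4
b_games = sum(1 for a, b in games if b > a)    # 6

print(f"Points: A {a_points}, B {b_points}")   # A outscores B by four points...
print(f"Games:  A {a_games}, B {b_games}")     # ...but B takes the set 6-4
```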

Such scoring systems breed Simpson’s Paradoxes: that is, results that don’t reflect the underlying value a scoring system is meant to capture—we want our games to be won by the better player, not the lucky one—but instead are merely artifacts of the system used to measure it. The point (ha!) can be shown by way of an example taken from a blog post written by one David Smith, head of marketing for a company called Revolution Analytics, about U.S. median wages. In that 2013 post, Smith reported that the “median US wage has risen about 1%, adjusted for inflation,” since 2000. But was that statistic important—that is, did it measure real value?

Well, what Smith found was that wages for high school dropouts, high school graduates, high school graduates with some college, college graduates, and people with advanced degrees all fell over the same period. Or, as Smith says, “within every educational subgroup, the median wage is now lower than it was in 2000.” But how can it be that “overall wages have risen, but wages within every subgroup have fallen?” The answer is similar to the reason Nadal had a 65 percent winning margin over Federer: there are proportionally more college graduates now than in 2000, and their wages, while lower, haven’t fallen as far (1.2 percent) as those of, say, high school dropouts (7.9 percent); the overall figure is therefore propped up by a shift in the mix toward better-paid groups. So despite the fact that everyone is poorer—everyone is receiving lower wages, adjusted for inflation—than in 2000, mendacious people can say wages are actually up. Wages are up—if you “compartmentalize” the numbers in just the way that reflects the story you’d like to tell.
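Smith’s finding is a textbook composition effect, and it can be reproduced with made-up numbers. The sketch below uses averages rather than medians and only two hypothetical groups, so it is a cartoon of the real data rather than a reconstruction of it:

```python
# Hypothetical wages and workforce shares, invented for illustration:
# (average wage, share of workforce) for each group, in 2000 and later.
groups_2000 = {"no degree": (30_000, 0.60), "degree": (60_000, 0.40)}
groups_2013 = {"no degree": (28_000, 0.40), "degree": (58_000, 0.60)}

def overall_wage(groups):
    """Workforce-weighted average wage across the groups."""
    return sum(wage * share for wage, share in groups.values())

print(overall_wage(groups_2000))   # 42,000
print(overall_wage(groups_2013))   # 46,000: the overall figure rises...
# ...even though the wage within every group fell by 2,000.
```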

Now, while the story about American wages might suggest a connection to Ferguson—and it does—that isn’t the connection between the U.S. Amateur and Ferguson, Missouri, I’d like to discuss. That connection is this one: the trouble with the U.S. Amateur is that it is conducted under match play, a format that permits Simpson’s Paradox results, and Simpson’s Paradoxes are, at heart, boundary disputes—arguments about whether to divide the raw data into smaller piles or present them as one big pile. That is the real link to Ferguson, because it suggests that the real issue behind Darren Wilson’s shooting of Michael Brown isn’t racism—or at least, that the way to solve it isn’t to talk about racism. Instead, it’s to talk about borders.

After Ferguson police officer Darren Wilson shot Michael Brown last August, the Department of Justice issued a report that was meant, as Zoë Carpenter of The Nation wrote this past March, to “address the roots of the police force’s discriminatory practices.” That report held that those practices were not “simply the result of racist cops,” but instead stemmed “from the way the city preys on residents financially, relying on the fines that accompany even minor offenses to balance its budget.” The report found an email from Ferguson’s finance director to the town’s police chief that, Carpenter reported, said “unless ticket writing ramps up significantly before the end of the year, it will be hard to significantly raise collections next year.” The finance director’s concerns were justified: only slightly less than a quarter of Ferguson’s total budget was generated by traffic tickets and other citations. The continuing operation of the town depends on revenue raised by the police—a need, in turn, that drives the kind of police zealotry that the Department of Justice said contributed to Brown’s death.

All of which might seem quite far from the concerns of the golf fans watching the results of the matches at the U.S. Amateur. Yet consider a town not far from Ferguson: Beverly Hills, Missouri. Like Ferguson, Beverly Hills is located to the northwest of downtown St. Louis, and like Ferguson it is a majority black town. But where Ferguson has over 20,000 residents, Beverly Hills has only around 600 residents—and that size difference is enough to make the connection to the U.S. Amateur’s format of play, match play, crystalline.

Ferguson, after all, is not alone in depending so heavily on police actions for its revenues: Calverton Park, for instance, is another Missouri “municipality that last fiscal year raised a quarter of its revenue from traffic fines,” according to the St. Louis Post-Dispatch. Yet while Ferguson, like Calverton Park, raised about a quarter of its budget from police actions, Beverly Hills raised something like half of its municipal budget on traffic and other kinds of citations, as a story in the Washington Post reported. All these little towns are dependent on traffic tickets to meet their budgets; “Most of the roughly ninety municipalities in St. Louis County,” Carpenter reports in The Nation, “have their own courts, which … function much like Ferguson’s: for the purpose of balancing budgets.” Without even getting into the fairness of property taxes or sales taxes as a basis for municipal budgeting, it seems obvious that depending on traffic tickets as a major source of revenue is poor planning at best. Yet without the revenue provided by cops writing tickets—and, as a result of Ferguson, the state of Missouri is considering limiting the percentage of a town’s budget that can be raised by such tickets, as the Post-Dispatch article says—many of these towns will simply fail. And that is the connection to the U.S. Amateur.

What these towns are having to consider, in other words, is, according to the St. Louis Post-Dispatch, an option mentioned by St. Louis County Executive Steve Stenger last December: during an interview, the official said that “the consolidation of North County municipalities is what we should be talking about” in response to the threat of cutting back reliance on tickets. Towns like Beverly Hills may simply be too small: they create too little revenue to support themselves without a huge effort on the part of the police force to find—and thus, in a sense, create—what are essentially taxable crimes. The way to solve the problem of a “racist” police department, in other words, might not be to conduct workshops or seminars in order to “retrain” the officers on the front line, but instead to redraw the political boundaries of the greater St. Louis metropolitan area.

That, at least, is a solution that our great-grandparents considered, as an article by writer Kim-Mai Cutler for TechCrunch this past April remarked. Examining the historical roots of the housing crisis in San Francisco, Cutler discovered that in “1912, a Greater San Francisco movement emerged and the city tried to annex Oakland,” a move Oakland resisted. Yet as a consequence of not creating a Bay-wide government, Cutler says, “the Bay Area’s housing, transit infrastructure and tax system has been haunted by the region’s fragmented governance” ever since: the BART (Bay Area Rapid Transit) system, for example, as originally designed “would have run around the entire Bay Area,” Cutler says, “but San Mateo County dropped out in 1961 and then Marin did too.” Many of the problems of that part of Northern California could be solved, Cutler thus suggests via this and other instances—contra the received wisdom of our day—by bigger, not smaller, government.

“Bigger,” that is, in the sense of “more consolidated”: by the metric of sheer numbers, a government built to a larger scale might not employ as many people as do the scattered suburban governments of America today. But what such a government would do is capture all of the efficiencies of economies of scale available to a larger entity—thus, it might be in a sense smaller than the units it replaced, but it would definitely be more powerful. What Missourians and Californians—and possibly others—may be realizing, then, is that the divisions between their towns are like the divisions tennis makes around its points, or match play golf makes around its strokes: dividing a finite resource, whether points or strokes or tax dollars (or votes), into smaller pools creates what might be called “unnatural,” or “artificial,” results—i.e., results that inadequately reflect the real value of the underlying resource. Just as match play can make Ernie Els’ 75 look better than Pablo Larrazabal’s 68, or tennis’ scoring system can make Rafael Nadal look much better than Federer—when in reality the difference between them is (or was) no more than a sliver of a gnat’s eyelash—dozens of little towns dissipate the real value, economic and otherwise, of the people who inhabit a region.

That’s why when Eric Holder, Attorney General of the United States, said that “the underlying culture” of the police department and court system of Ferguson needs to be reformed, he got it exactly wrong. The problems in St. Louis and San Francisco, the evidence suggests, are created not because government is getting in the way, but because government isn’t structured correctly to channel the real value of the people: scoring systems that leave participants subject to the vagaries of Simpson’s Paradox results might be perfectly fine for games like tennis or golf—where the downsides are minimal—but they shouldn’t be how real life gets scored, and especially not in government. Contra Holder, the problem is not that the members of the Ferguson police department are racists. The problem is that the government structure requires them, like occupying soldiers or cowboys, to view their fellow citizens as a kind of herd. Or, to put the matter in a pithier way: a system that depends on the harvesting of sheep will turn its agents into wolves. Instead of drowning the effects of racism—as a big enough government would through its very size—multiplying struggling towns only encourages racism: instead of diffusing racism, a system broken into little towns focuses it. The real problem of Ferguson, then—the real problem of America—is not that Americans are systematically discriminatory: it’s that the systems used by Americans aren’t keeping the score right.


Bait and Switch

Golf, Race, and Class: Robert Todd Lincoln, Oldest Son of President Abraham Lincoln, and President of the Chicago Golf Club

But insiders also understand one unbreakable rule: They don’t criticize other insiders.
—Senator Elizabeth Warren, A Fighting Chance

… cast out first the beam out of thine own eye …
Matthew 7:5

 

“Where are all the black golfers?” Golf magazine’s Michael Bamberger asked back in 2013: Tiger Woods’ 1997 victory at the Masters, Bamberger says, was supposed to open “the floodgates … to minority golfers in general and black golfers in particular.” But nearly two decades later Tiger is the only player on the PGA Tour to claim to be African-American. It’s a question likely to loom larger as time passes: Woods missed the cut at last week’s British Open, the first time in his career he has missed a cut in back-to-back majors, and FiveThirtyEight.com’s line from April about Woods (“What once seemed to be destiny—Woods’ overtaking of Nicklaus as the winningest major champion ever—now looks like a fool’s notion”) seems more prophetic than ever. As Woods’ chase for Nicklaus fades, the question of Woods’ legacy will almost certainly turn to the renaissance in participation Woods was supposedly going to unleash—a renaissance that never happened. But where will the blame fall? Once we excuse Woods from responsibility for playing Moses, is the explanation for why there are no black golfers, as Bamberger seems to suggest, that golf is racist? Or is it, as Bamberger’s own reporting shows, more likely due to the economy? And further, if we can’t blame Woods for not creating more golfers in his image, can we blame Bamberger for giving Americans the story they want instead of the story they need?

Consider, for instance, Bamberger’s mention of the “Tour caddie yard, once a beautiful example of integration”—and now, he writes, “so white it looks like Little Rock Central High School, circa 1955.” Or his description of how, in “Division I men’s collegiate golf … the golfers, overwhelmingly, are white kids from country-club backgrounds with easy access to range balls.” Although Bamberger omits the direct reference, the rise of the lily-white caddie yard is likely not due to a racist desire to bust up the beautifully diverse caddie tableau Bamberger describes, just as it seems more likely that the presence of the young white golfers at the highest level of collegiate golf owes more to their long-term access to range balls than it does to the color of their skin. Surely the mysterious disappearance of the black professional golfer is more likely due—as the title of a story by Forbes contributor Bob Cook has it—to “How A Declining Middle Class Is Killing Golf” than to golf’s racism. An ebbing tide lowers all boats.

“Golf’s high cost to entry and association with an older, moneyed elite has resulted in young people sending it to the same golden scrap heap as [many] formerly mass activities,” as Cook wrote in Forbes—and so, as “people [have] had less disposable income and time to play,” golf has declined among all Americans and not just black ones. But then, maybe that shouldn’t be surprising when, as Scientific American reported in March, the “top 20% of US households own more than 84% of the wealth, and the bottom 40% combine for a paltry 0.3%,” or when, as Time said two years ago, “the wages of median workers have remained essentially the same” for the past thirty years. So it seems likelier that the non-existent black golfer can be found at the bottom of the same hole to which many other once-real and now-imaginary Americans—like a unionized, skilled, and educated working-class—have been consigned.

The conjuring trick, however, whereby the disappearance of black professional golfers becomes a profound mystery, rather than a thoroughly understandable consequence of the well-documented overall decline in wages for all Americans over the past two generations, would be no surprise to Walter Benn Michaels of the University of Illinois at Chicago. “In 1947,” Michaels has pointed out, for instance, repeating the statistics, “the bottom fifth of wage-earners got 5 per cent of total income,” while “today it gets 3.4 per cent.” But the literature professor is aware not only that inequality is rising, but also that it’s long been a standard American alchemy to turn economic matters into racial ones.

Americans, Michaels has written, “love thinking that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” Why? Because if the differences between us are due to money, and the lack of it, then there’s a “need to get rid of inequality or to justify it”—while on the other hand, if those differences are racial, then there’s a simple solution: “appreciating our diversity.” In sum, if the problem is due to racism, then we can solve it with workshops and such—but if the problem is due to, say, an historic loss of the structures of middle-class life, then a seminar during lunch probably won’t cut it.

Still, it’s hard to blame Bamberger for refusing to see what’s right in front of him: Americans have been turning economic issues into racial ones for some time. Consider the argument advanced by the Southern Literary Messenger (the South’s most important prewar magazine) in 1862: the war, the magazine said, was due to “the history of racial strife” between “a supposedly superior race” that had unwisely married its fortune “with one it considered inferior, and with whom co-existence on terms of political equality was impossible.” According to this journal, the Civil War was due to racial differences, and not from any kind of clash between two different economic interests—one of which was getting incredibly wealthy by the simple expedient of refusing to pay their workers and then protecting their investment by making secret and large-scale purchases of government officials while being protected by bought-and-paid-for judges. (You know, not like today.)

Yet despite how ridiculous it sounds—because it is—the theory does have a certain kind of loopy logic. According to these Southern, and some Northern, minds, the two races were so widely divergent politically and socially that their deep, historical differences were the obvious explanation for the conflict between the two sections of the country—instead of that conflict being the natural result of allowing a pack of lying, thieving criminals to prey upon decent people. The identity of these two races—as surely you have already guessed, since the evidence is so readily apparent—were, as historian Christopher Hanlon graciously informs us: “the Norman and Saxon races.”

Duh.

Admittedly, the theory does sound pretty out there—though I suspect it sounds a lot more absurd now that you know what races these writers were talking about, rather than the ones I suspect you thought they were talking about. Still, it’s worth knowing something of the details if only to understand how these could have been considered rational arguments: to understand, in other words, how people can come to think of economic matters as racial, or cultural, ones.

In the “Normans vs. Saxons” version of this operation, the theory comes in two flavors. According to University of Georgia historian James Cobb, the Southern flavor of this racial theory held that Southerners were “descended from the Norman barons who conquered England in the 11th century and populated the upper classes of English society,” and were thus naturally equipped for leadership. Northern versions held much the same, but flipped the script: as Ralph Waldo Emerson wrote in the 1850s, the Normans were “greedy and ferocious dragoons, sons of greedy and ferocious pirates” who had, as Hanlon says, “imposed serfdom on their Saxon underlings.” To both sides, then, the great racial conflagration, the racial apocalypse, destined to set the continent alight would be fought between Southern white people … and Northern white people.

All of which is to say that Americans have historically liked to make their economic conflicts about race, and they haven’t always been particular about which ones—which might seem like downer news. But there is, perhaps, a bright spot to all this: whereas the Civil War-era writers treated “race” as a real description of a natural kind—as if their descriptions of “Norman” or “Saxon” had as much validity as a description of a great horned toad or Fraser’s eagle owl—nowadays Americans like to “dress race up as culture,” as Michaels says. This current orthodoxy holds that “the significant differences between us are cultural, that such differences should be respected, that our cultural heritages should be perpetuated, [and] that there’s a value in making sure that different cultures survive.” Nobody mentions that substituting “race” and “racial” for “culture” and “cultural” doesn’t change the sentence’s meaning in any important respects.

Still, it certainly has had an effect on current discourse: it’s what caused Bamberger to write that Tiger Woods “seems about as culturally black as John Boehner.” The phrase “culturally black” is arresting, because it implies that “race” may not be a biological category, as it was for the “Normans vs. Saxons” theorists. And certainly, that’s a measure of progress: just a generation or two ago it was possible to refer unselfconsciously to race in an explicitly biological way. So in that sense, it might be possible to think that because a Golf writer feels it necessary to clarify that “blackness” is a cultural, and not a biological, category, that constitutes a victory.

The credit for that victory surely goes to what the “heirs of the New Left and the Sixties have created, within the academy,” as Stanford philosopher Richard Rorty wrote before his death—“a cultural Left.” The victories of that Left have certainly been laudable—they’ve even gotten a Golf magazine writer to talk about a “cultural,” instead of biological, version of whatever “blackness” is! But there’s also a cost, as Rorty also wrote: this “cultural Left,” he said, “thinks more about stigma than money, more about deep and hidden psychosexual motivations than about shallow and evident greed.” Seconding Rorty’s point, University of Chicago philosopher Martha Nussbaum has written that academia today is characterized by “the virtually complete turning away from the material side of life, toward a type of verbal and symbolic politics”—a “cultural Left” that thinks “the way to do … politics is to use words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness,” and that “instructs its members that there is little room for large-scale social change, and maybe no room at all.” So, while it might be slightly better that mainstream publications now think of race in cultural, instead of biological, terms, this might not be the triumph it’s sometimes said to be, given the real facts of economic life in the United States.

Yet the advice of the American academy is that what the United States needs is more talk about culture, rather than a serious discussion about political economy. Their argument is a simple one, summarized by the recently deceased historical novelist E.L. Doctorow in an essay called “Notes on the History of Fiction”: there, the novelist argues that while there is a Richard III Society in England attempting to “recover the reputation of their man from the damage done to it by the calumnies of Shakespeare’s play,” all their efforts are useless—“there is a greater truth for the self-reflection of all mankind in the Shakespearean vision of his life than any simple set of facts can summon.” What matters, Doctorow is arguing, isn’t the real Richard III—coincidentally, the man apparently recently dug up in an English parking lot—but rather Shakespeare’s approximation of him, just as some Civil War-era writers argued that what mattered was “race” rather than the economics of slavery, and just as Michael Bamberger fails to realize that the real white golfers in front of him explain, fairly easily, the absence of the imaginary black golfers who aren’t. What Doctorow is really saying, then, and thus by extension what the “cultural Left” is really saying, is that the specific answer to the question of where the black golfers are is irrelevant, because dead words matter more than live people—an idea, however, that seems difficult to square with the notion that, as the slogan has it, black lives matter.

Golfers or not.

This Pitiless Storm

Poor naked wretches, whereso’er you are,
That bide the pelting of this pitiless storm,
How shall your houseless heads and unfed sides,
Your loop’d and window’d raggedness, defend you,
From seasons such as these?
The Tragedy of King Lear Act III, Scene 4

“Whenever people talk to me about the weather,” the Irish writer Oscar Wilde once remarked, “I always feel quite certain that they mean something else.” As it happens, play at this year’s British Open has been delayed by high winds, and the tournament will not finish the regulation 72 holes until Monday at the earliest. Which raises a question: why does the Open need to finish all 72 holes? The answer concerns something called a “Simpson’s Paradox”—an answer that also demonstrates just how talk about the weather at the British Open is in fact talk about something else. Namely, the 2016 American presidential election.

To see how, it’s first necessary to see the difference between the British Open and other professional golf tournaments, which are perfectly fine with shortening themselves. Take for instance the 2005 Northern Trust Open in Los Angeles: Adam Scott won in a playoff against Chad Campbell after the tournament was shortened to 36 holes due to weather. In 2013, the Tournament of Champions at Kapalua in Hawaii was “first cut to 54 holes because of unplayable conditions over the first two days,” according to Reuters, and was under threat of “being further trimmed to 36 holes.” The same story also quoted tour officials as saying “the eventual champion would wind up with an ‘unofficial win’” were the tournament to be shortened to 36 holes. (As things shook out they did end up completing 54 holes, and so Dustin Johnson’s win officially counted.) In a standard PGA tournament then, the “magic number” for an “official” tournament is 54 holes. But if so, then why does the Open need 72?

To answer that, let’s take a closer look at the standard professional golf tournament. Most such tournaments are conducted according to what the Rules of Golf calls “stroke play”: four rounds of golf, or 72 holes, at the end of which the players who have made it that far add up their scores—their number of strokes. The player with the lowest score, it may seem like it goes without saying, wins. But it does need to be said—because that isn’t the only option.

Many amateur tournaments after all, such as the United States Amateur, use the rules format known as “match play.” Under this format, the winner of the contest is not necessarily the player who shoots the lowest overall score, as in stroke play. Instead, as John Van der Borght has put the matter on the website of the United States Golf Association, in match play the “winner is the player who wins the most holes.” It’s a seemingly minor difference—but in fact it creates such a difference that match play is virtually a different sport than stroke play.

Consider, for instance, the Accenture Match Play tournament—the only tournament on the PGA Tour to be held under match play rules. The 2014 edition (held at the Dove Mountain course near Tucson, Arizona) had some results that demonstrate just how different match play is from stroke play, as Doug Ferguson of the Associated Press observed. “Pablo Larrazabal shot a 68 and was on his way back to Spain,” Ferguson noted about the first day’s results, while “Ernie Els shot 75 and has a tee time at Dove Mountain on Thursday.” In other words, Larrazabal lost his match and Els won his, even though Larrazabal was arguably the better player at this tournament—at least, if you consider the “better player” to be the one who puts his ball in the hole most efficiently.

Such a result might seem unfair—but why? It could be argued that while shooting a lower number might be what stroke play golf is, that isn’t what match play golf is. In other words, Larrazabal obviously wasn’t better at whatever it was that this tournament measured: if Larrazabal couldn’t beat his opponent, while Els could, then clearly Els deserved to continue to play while Larrazabal did not. While you might feel that, somehow or other, Larrazabal got jobbed, that’s merely a sentimental reaction to what ought to be a hardhearted calculation: maybe it’s true that under stroke play rules Larrazabal would have won, but that wasn’t the rules of the contest at Dove Mountain. In other words, you could say that golfing ability was, in a sense, socially constructed: what matters isn’t some “ahistorical” ability to golf, but instead how it is measured.

Here’s the $64,000 question a guy named Bill James might ask in response to such an argument, however (couched in terms of baseball players): “If you were trying to win a pennant, how badly would you want this guy?” In other words, based on the evidence presented, what would you conclude about the respective golf ability of Els and Larrazabal? Wouldn’t you conclude that Larrazabal is better at the task of putting his ball in the hole, and that the various rule systems that could be constructed around that task are merely different ways of measuring that ability—an ability that pre-existed those systems of measurement?

“We’re not trying to embarrass the best players in the game,” said Sandy Tatum at the 1974 U.S. Open, the so-called Massacre at Winged Foot: “We’re trying to identify them.” Scoring systems, in short, should be aimed at revealing, not concealing, ability. I choose Bill James to make the point not just because the question he asks is so pithy, but because he invented an equation that is designed to discover underlying ability: an equation called the Pythagorean Expectation. That equation, in turn, demonstrates why match play and stroke play are not simply different but equally valid measures of playing ability: one reflects that ability far more faithfully than the other. In so doing, James also demonstrates just why it is that the Open Championship requires that all 72 holes be played.

So named because it resembles so closely that formula, fundamental to mathematics, called the Pythagorean Theorem, what the Pythagorean Expectation says is that the ratio of a team’s (or player’s) points scored to that team’s (or player’s) points allowed is a better predictor of future success than the team’s (or player’s) ratio of wins to losses. (James used “runs” because he was dealing with baseball.) More or less it works: as Graham MacAree puts it on the website FanGraphs, using James’ formula makes it “relatively easy to predict a team’s win-loss record”—even in sports other than baseball. Yet why is this so—how can a single formula predict future success at any sport? It might be thought, after all, that different sports exercise different muscles, or use different strategies: how can one formula describe underlying value in many different venues—and thus, incidentally, demonstrate that ability can be differentiated from the tools we use to measure it?
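In James’ baseball original, the formula is simply: expected winning percentage equals runs scored squared, divided by runs scored squared plus runs allowed squared. A minimal sketch follows (the exponent of 2 is James’ classic choice; later refinements and other sports use different exponents):

```python
def pythagorean_expectation(scored: float, allowed: float, exponent: float = 2.0) -> float:
    """Bill James' Pythagorean Expectation: predicted winning fraction
    from points (runs) scored and allowed. Exponent 2 is the classic value."""
    return scored ** exponent / (scored ** exponent + allowed ** exponent)

# Example: a team that scores 800 runs and allows 700 over a season
# is predicted to win roughly 57% of its games.
print(f"{pythagorean_expectation(800, 700):.3f}")   # 0.566
```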

The answer to these questions is that adding up the total points scored, rather than the total games won, gives us a better notion of the relative value of a player or a team because it avoids something called the “Simpson’s Paradox”—which is what happens when, according to Wikipedia, it “appears that two sets of data separately support a certain hypothesis, but, when considered together, they support the opposite hypothesis.” Consider what happens for example when we match Ernie Els’ 75 to Pablo Larrazabal’s 68: if we match them according to who won each hole, Els comes out the winner—but if we just compared raw scores, then Larrazabal would. Simpson’s Paradoxes appear, in short, when we draw the boundaries around the raw data differently: the same score looks different depending on what lens is used to view it—an answer that might seem to validate those who think that underlying ability doesn’t exist, but only the means used to measure it. But what Simpson’s Paradox shows isn’t that all boundaries around the data are equal—in fact, it shows just the opposite.
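To make that concrete, here is a sketch with hole-by-hole scores invented for illustration (these are not the players’ actual cards): one card totals 68, the other 75, yet the 75 wins more holes head-to-head.

```python
# Hypothetical hole-by-hole scores, invented for illustration only.
# Player A cards a 68; Player B cards a 75. B wins ten holes by a single
# stroke, while A's eight wins come by two or three strokes apiece.
player_a = [4, 3, 4, 3, 5, 3, 4, 3, 5, 3, 4, 3, 5, 3, 4, 3, 4, 5]
player_b = [3, 5, 3, 5, 4, 5, 3, 5, 4, 5, 3, 5, 4, 5, 3, 6, 3, 4]
assert sum(player_a) == 68 and sum(player_b) == 75

a_holes = sum(1 for a, b in zip(player_a, player_b) if a < b)   # 8
b_holes = sum(1 for a, b in zip(player_a, player_b) if b < a)   # 10

print(f"Stroke play: A {sum(player_a)}, B {sum(player_b)}")     # A wins by 7
print(f"Match play:  A {a_holes} holes, B {b_holes} holes")     # B wins, 2 up
```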

What Simpson’s Paradox shows, in other words, is that drawing boundaries around the data can produce illusions of value if that drawing isn’t done carefully—and most specifically, if the boundaries don’t capture all of the data. That’s why the response golf fans might have to the assertion that Pablo Larrazabal is better than Ernie Els proves, rather than invalidates, the argument so far: people highly familiar with golf might respond, “well, you haven’t considered the total picture—Els, for instance, has won two U.S. Opens, widely considered to be the hardest tournament in the world, and Larrazabal hasn’t won any.” But then consider that what you have done just demonstrates the point made by Simpson’s Paradox: in order to say that Els is better, you have opened up the data set; you have redrawn the boundaries of the data in order to include more information. So what you would have conceded, were you to object to the characterization of Larrazabal as a better golfer than Els on the grounds that Els has a better overall record than Larrazabal, is that the way to determine the better golfer is to cast the net as wide as possible. You have demanded that the sample size be increased.

That then is why a tournament contested over only 36 holes isn’t considered an “official” PGA tournament, while 54 holes isn’t enough to crown the winner of a major tournament like the Open Championship (which is what the British Open is called when it’s at home). It’s all right if a run-of-the-mill tournament be cut to 54 holes, or even 36 (though in that case we don’t want the win to be official). But in the case of a major championship, we want there to be no misunderstandings, no “fluky” situations like the one in which Els wins and Larrazabal doesn’t. The way to do that, we understand, is to maximize chances, to make the data set as wide as possible: in sum, to make a large sample size. We all, I think, understand this intuitively: it’s why baseball has a World Series rather than a World Championship Game. So that is why, in a major championship, it doesn’t matter how long it takes—all the players qualified are going to play all 72 holes.

Here I will, as they say in both golf and baseball, turn for home. What all of this about Simpson’s Paradoxes means, at the end of the day, is that a tournament like the Open Championship is important—as opposed to, say, an American presidential election. In a presidential election, as everyone knows, what matters isn’t the total number of votes a candidate wins, but how many states. In that sense, American presidential elections are conducted according to what, in golf, would be considered match play rather than stroke play. Now, as Bill James might acknowledge, that raises the question: does that process result in better candidates being elected?

As James might ask in response: would you like to bet?

Lest The Adversary Triumph

… God, who, though his power
Creation could repeat, yet would be loath
Us to abolish, lest the Adversary
Triumph …
Paradise Lost Book XI

… the literary chit-chat which makes the reputation of poets boom and crash in an imaginary stock exchange …
The Anatomy of Criticism

A list of articles for “liberal” magazine Salon.com. The first is an attack on Darwinians like Richard Dawkins; the others ridicule creationists for being anti-Darwinian

 

“Son, let me make one thing clear,” Air Force General Curtis LeMay, the longtime head of the Strategic Air Command, supposedly said sometime in the 1950s to a young officer who repeatedly referred to the Soviet Union as the “enemy” during a presentation about Soviet nuclear capabilities. “The Soviet Union,” the general explained, “is our adversary. The enemy is the United States Navy.” Similarly, the “sharp rise in U.S. inequality, especially at the very top of the income scale” in recent years—as Nobel Prize winner Paul Krugman described it in 1992—might equally be the result of confusion: as Professor Walter Benn Michaels of the University of Illinois at Chicago has written, “the intellectual left has responded to the increase in economic inequality by insisting on the importance of cultural identity.” The simplest explanation for that disconnect, I’d suggest, is that while the “intellectual left” might talk a good game about “speaking truth to power” and whatnot, “power” is just their adversary. The real enemy is science, especially Darwinian biology—and, yet more specifically, a concept called “survivorship bias”—and that enmity may demonstrate that the idea of an oppositional politics based around culture, rather than science, is absurd.

Like a lot of American wars, this one is often invisible to the American public, partly because when academics like University of Chicago English professor W.J.T. Mitchell do write for the public,  they often claim their modest aim is merely to curb scientific hubris. As Mitchell piously wrote in 1998’s The Last Dinosaur Book: The Life and Times of a Cultural Icon, his purpose in that book was merely to note that “[b]iological explanations of human behavior … are notoriously easy, popular, and insidious.” As far as that goes, of course, Mitchell is correct: the history of the twentieth century is replete with failed applications of Darwinian thought to social problems. But then, the twentieth century is replete with a lot of failed intellectual applications—yet academic humanists tend to focus on blaming biology for the mistakes of the past.

Consider for example how many current academics indict a doctrine called “social Darwinism” for the social ills of a century ago. In ascending order of sophistication, here is Rutgers historian Jackson Lears asserting from the orchestra pit, in a 2011 review of books by well-known atheist Sam Harris, that the same “assumptions [that] provided the epistemological foundations for Social Darwinism” did the same “for scientific racism and imperialism,” while from the mezzanine level of middlebrow popular writing here is William Kleinknecht, in The Man Who Sold The World: Ronald Reagan and the Betrayal of Main Street America, claiming that in the late nineteenth and early twentieth centuries, “social Darwinism … had nourished a view of the lower classes as predestined by genetics and breeding to live in squalor.” Finally, a diligent online search discovers, in the upper balcony, Boston University student Evan Razdan’s bald assertion that at the end of the nineteenth century, “Darwinism became a major justification for racism and imperialism.” I could multiply the examples: suffice it to say that for a good many in academe, it is now gospel truth that Darwinism was on the side of the wealthy and powerful during the early part of the twentieth century.

In reality, however, Darwin was usually thought of as being on the side of the poor, not the rich, in the early twentieth century. For investigative reporters like Ida Tarbell, whose The History of the Standard Oil Company is still today the foundation of muckraking journalism, “Darwin’s theory [was] a touchstone,” according to Steve Weinberg’s Taking on the Trust: The Epic Battle of Ida Tarbell and John D. Rockefeller. The literary movement of the day, naturalism, drew its characters “primarily from the lower middle class or the lower class,” as Donald Pizer wrote in Realism and Naturalism in Nineteenth-Century American Fiction, and even a scholar with a pro-religious bent like Doug Underwood must admit, as he does in From Yahweh to Yahoo: The Religious Roots of the Secular Press, that the “naturalists were particularly influenced by the theories of Charles Darwin.” Progressive philosopher John Dewey wrote in 1910’s “The Influence of Darwin on Philosophy” that Darwin’s On the Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” (As American philosopher Richard Rorty has noted, Dewey and his pragmatists began “from a picture of human beings as chance products of evolution.”) Finally, Karl Marx—a person no one has ever thought to be on the side of the wealthy—thought so highly of Darwin that he exclaimed, in a letter to Frederick Engels, that On the Origin of Species “contains the basis in natural history for our view.” To blame Darwin for the inequality of the Gilded Age is like blaming Smokey the Bear for forest fires.

Even aside from the plain facts of history, however, you’d think the sheer absurdity of pinning the crimes of the robber barons on Darwin would be self-evident. If a thief cited Matthew 5:40—“And if any man will sue thee at the law, and take away thy coat, let him have thy cloke also”—to justify his theft, nobody would think that he had somehow thereby indicted Jesus. Logically, the idea a criminal cites to justify his crime makes no difference either to the fact of the crime or to the idea: that is why the advocates of civil disobedience, like Martin Luther King Jr., held that lawbreaking in the name of a higher law still requires the lawbreaker to be arrested, tried, and, if found guilty, sentenced. (Conversely, is it somehow worse that King was assassinated by a white supremacist? Or would it have been better had he been murdered in the course of a bank robbery that had nothing to do with his work?) Just because someone commits a crime in the name of an idea, as King sometimes did, doesn’t make the idea itself wrong; nor could it make Martin Luther King Jr. any less dead. And anyway, isn’t the notion of taking a criminal’s word about her motivations at face value dubious?

Somehow however the notion that Darwin is to blame for the desperate situation of the poor at the beginning of twentieth century has been allowed to fester in the American university system: Eric Rauchway, a professor of history at the University of California Davis, even complained in 2007 that anti-Darwinism has become so widespread among his students that it’s now a “cliche of the history paper that during the industrial era” all “misery and suffering” was due to the belief of the period’s “lords of plutocracy” in the doctrines of “‘survival of the fittest’” and “‘natural selection.’” That this makes no sense doesn’t seem to enter anyone’s calculations—despite the fact that most of these “lords,” like John Rockefeller and Andrew Carnegie,  were “good Christian gentlemen,” just like many businessmen are today.

The whole idea of blaming Darwin, as I hope is clear, is at best exaggerated and at worst nonsense. But really to see the point, it’s necessary to ask why all those “progressive” and “radical” thinkers thought Darwin was on their side, not the rich man’s. The answer can be found by thinking clearly about what Darwin actually taught, rather than what some people supposedly used him to justify. And what the biologist taught was the doctrine of natural selection: a process that, understood correctly, is far from a doctrine that favors the wealthy and powerful. It would be closer to the truth to say that, on the contrary, what Darwin taught must always favor the poor against the wealthy.

To many in the humanities, that might sound absurd—but to those uncommitted, let’s begin by understanding Darwin as he understood himself, not by what others have claimed about him. And misconceptions of Darwin begin at the beginning: many people credit Charles Darwin with the idea of evolution, but that was not his chief contribution to human knowledge. A number of very eminent people, including his own grandfather, Erasmus Darwin, had argued for the reality of evolutionary descent long before Charles was even born: in his two-volume work of 1796, Zoonomia; or, the Laws of Organic Life, this older Darwin had for instance asserted that life had been evolving for “millions of ages before the commencement of the history of mankind.” So while the theory of evolution is at times presented as springing unbidden from Erasmus’ grandson Charles’ head, that’s simply not true.

By the time Charles published On the Origin of Species in 1859, the general outline of evolution was old hat to professionals, however shocking it may have been to the general public. On the Origin of Species had the impact it did because of the mechanism Darwin suggested to explain how the evolution of species could have proceeded—not because it presented the facts of evolutionary descent, although it did that in copious detail. Instead, as American philosopher Daniel Dennett has observed, “Darwin’s great idea” was “not the idea of evolution, but the idea of evolution by natural selection.” Or as the biologist Stephen Jay Gould has written, Darwin’s own chief aim in his work was “to advance the theory of natural selection as the most important mechanism of evolution.” Darwin’s contribution wasn’t to introduce the idea that species shared ancestors and hence were not created but evolved—it was to explain how that could have happened.

What Darwin did was to put evolution together with a means of explaining it. In simplest terms, natural selection is what Darwin said it was in the Origin: the idea that, since “[m]ore individuals are born than can possibly survive,” something will inevitably “determine which individual shall live and which shall die.” In such circumstances, as he would later write in the Historical Sketch of the Progress of Opinion on the Origin of Species, “favourable variations would tend to be preserved, and unfavourable ones would be destroyed.” Or as Stephen Jay Gould has succinctly put it, natural selection is “the unconscious struggle among individual organisms to promote their own personal reproductive success.” The word unconscious is the keyword here: the organisms don’t know why they have succeeded—nor do they need to understand. They just do—to paraphrase Yoda—or do not.

Why any of this should matter to the humanities or to people looking to contest economic inequality ought to be immediately apparent—and would be in any rational society. But since the American education system seems designed at the moment to obscure the point, I will now describe a scientific concept related to natural selection known as survivorship bias. Although that concept is used in every scientific discipline, it is a particularly important one to Darwinian biology. There’s an argument, in fact, that survivorship bias is just a generalized version of natural selection, and thus that it simply is Darwinian biology.

That’s because the concept of “survivorship bias” describes how human beings are tempted to describe mindless processes as mindful ones. Here I will cite one of the concept’s best-known contemporary advocates, a trader and professor of something called “risk engineering” at New York University named Nassim Nicholas Taleb—precisely because of his disciplinary distance both from biology and the humanities: his distance from both, as Bertolt Brecht might have described it, “exposes the device” by stripping the idea of its disciplinary contexts. As Taleb says, one example of survivorship bias is the tendency all human beings have to think that someone is “successful because they are good.” Survivorship bias, in short, is the sometimes-dangerous assumption that there’s a cause behind every success. But, as Darwin might have said, that ain’t necessarily so.

Consider for instance a hypothetical experiment Taleb constructs in his Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets, consisting of 10,000 money managers. The rules of this experiment are that “each one has a 50% probability of making $10,000 at the end of the year, and a 50% probability of losing $10,000.” If we run this experiment for five years—five runs through randomness—then at the end of those conjectural five years, by the laws of probability we can expect “313 managers who made money for five years in a row.” Is there anything especially clever about these few? No: their success has nothing to do with any quality each might possess. It’s simply due, as Taleb says, to “pure luck.” But these 313 will think of themselves as very fine fellows.
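The expected count follows directly from the probabilities, since surviving five 50/50 years happens with probability one in thirty-two: 10,000 × (1/2)^5 ≈ 313. A quick Monte Carlo run (a sketch of Taleb’s thought experiment, not his code) lands in the same place:

```python
import random

MANAGERS, YEARS = 10_000, 5

expected = MANAGERS * 0.5 ** YEARS
print(f"Expected five-year 'stars': {expected:.1f}")    # 312.5, Taleb's ~313

random.seed(0)  # reproducible run
survivors = sum(
    all(random.random() < 0.5 for _ in range(YEARS))    # five lucky years in a row
    for _ in range(MANAGERS)
)
print(f"Simulated five-year 'stars': {survivors}")      # close to 313
```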

Now, notice that, by substituting the word “zebra” for the words “money managers” and “10 offspring” for “$10,000” Taleb has more or less described the situation of the Serengeti Plain—and, as early twentieth-century investigative reporter Ida Tarbell realized, the wilds of Cleveland, Ohio. Tarbell, in 1905’s “John D. Rockefeller: A Character Study” actually says that by 1868, when Rockefeller was a young businessman on the make, he “must have seen clearly … that nothing but some advantage not given by nature or recognized by the laws of fair play in business could ever make him a dictator in the industry.” In other words, Rockefeller saw that if he merely allowed “nature,” as it were, to take its course, he stood a good chance of being one of the 9000-odd failures, instead of the 300-odd success stories. Which is why he went forward with the various shady schemes Tarbell goes on to document in her studies of the man and his company. (Whose details are nearly unbelievable—unless you’re familiar with the details of the 2008 housing bubble.) The Christian gentleman John D. Rockefeller, in other words, hardly believed in the “survival of the fittest.”

It should, in other words, be clear just how necessary the concept of survivorship bias—and thus Darwin’s notion of natural selection—is to any discussion of economic inequality. Max Weber, the great founder of sociology, at least understood it—that’s why, in The Protestant Ethic and the Spirit of Capitalism, Weber famously described the Calvinist doctrine of predestination, in which “God’s grace is, since His decrees cannot change, as impossible for those to whom He has granted it to lose as it is unattainable for those to whom He has denied it.” As Weber knew, if the Chosen of God are known by their worldly success, then there is no room for debate: the successful simply deserve their success, in a fashion not dissimilar to the notion of the divine right of kings.

If, however, there’s a possibility that worldly success is due to chance, i.e., luck, then the road is open to argue about the outcomes of the economic system. Since John D. Rockefeller, at least according to Tarbell, certainly did act as though worldly success was due far more to “chance” than to the fair outcome of a square game, one could, I suppose, argue that he was a believer in Darwinism, as the “social Darwinist” camp would have it. But that seems to stretch the point.

Still, what has this to do with the humanities? The answer is that you could do worse than define the humanities by saying they are the disciplines of the university that ignore survivorship bias—although, if so, that might mean that “business” ought to be classified alongside comparative literature in the course catalogue, at least as Taleb puts it.

Consider economist Gary Smith’s Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics. As Michael Shermer of Pomona College notes in a review of Smith’s book, Smith shows how business books like Jim Collins’ Good to Great “culled 11 companies out of 1,435 whose stock beat the market average over a 40-year time span and then searched for shared characteristics among them,” or how In Search of Excellence, 1982’s best-seller, “identified eight common attributes of 43 ‘excellent’ companies.” As Taleb says in his The Black Swan: The Impact of the Highly Improbable, such studies “take a population of hotshots, those with big titles and big jobs, and study their attributes”—they “look at what those big guns have in common: courage, risk taking, optimism and so on, and infer that these traits, most notably risk taking, help you to become successful.” But as Taleb observes, the “graveyard of failed persons [or companies] will be full of people who shared the following traits: courage, risk taking, optimism, et cetera.” The problem with “studies” like these is that they begin with Taleb’s 313 instead of with the 10,000.

Another way to describe “survivorship bias,” in other words, is to say that any real investigation into anything must consider what Taleb calls the “silent evidence”: in the case of the 10,000 money managers, it’s necessary to think of the 9,000-odd managers who started the game and failed, and not just the 300-odd who succeeded. Studies that skip the failures will surely always find “commonalities” among the “winners,” just as Taleb’s 313 will surely always discover some common trait among themselves—and in the same way that a psychic can always “miraculously” know that somebody in the room has just suffered a death in the family.
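The “silent evidence” point can also be made mechanical. In the sketch below (again my own illustration, not anything Taleb publishes), traits like “courage” or “optimism” are handed out at random and success is a pure coin flip; a study that began with the winners would still find that most of them are courageous, risk-taking optimists—but so, at almost exactly the same rate, is the graveyard.

```python
import random

# Illustrative only: traits are assigned at random and "success" is a pure coin
# flip, so any traits the winners share are shared by the graveyard too.
rng = random.Random(0)
TRAITS = ["courage", "risk taking", "optimism"]

managers = [
    {
        "traits": {t: rng.random() < 0.6 for t in TRAITS},  # 60% base rate, arbitrary
        "won_five_years": all(rng.random() < 0.5 for _ in range(5)),
    }
    for _ in range(10_000)
]

winners = [m for m in managers if m["won_five_years"]]
graveyard = [m for m in managers if not m["won_five_years"]]

for t in TRAITS:
    among_winners = sum(m["traits"][t] for m in winners) / len(winners)
    among_failures = sum(m["traits"][t] for m in graveyard) / len(graveyard)
    print(f"{t}: {among_winners:.0%} of winners vs {among_failures:.0%} of the graveyard")
```

The moral: the frequency of a trait among survivors tells you nothing until it is compared against its frequency among the failures.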

Yet why should the intellectual shallowness of business writers matter to scholars in the humanities, who write not for popular consumption but for peer review? Well, because as Taleb points out, the threat posed by survivorship bias is not particular to shoddy studies and shoddy scholars, but is instead endemic to entire species of writing. Take for instance Shermer’s discussion of Walter Isaacson’s 2011 biography of Apple Computer’s Steve Jobs … which I’d go into if it were necessary.

But it isn’t, according to Taleb: the “entire notion of biography,” Taleb says in The Black Swan, “is grounded in the arbitrary ascription of a causal relation between specified traits and subsequent events.” Biography by definition takes a number of already-successful entities and then tries to explain their success, instead of starting with equally unknown entities and watching them either succeed or fail. Nobody finds Beethoven before birth, and even Jesus Christ didn’t pick up disciples before adulthood. Biographies, then, might be entertaining, but they can’t possibly have any real intellectual substance. Biographies could only really be valuable if their authors had predicted a future success—and nobody could possibly write a predictive biography. Biography, then, simply is an exercise in survivorship bias.

And if biography, then what about history? About the only historians who grapple with survivorship bias are those who write what’s known as “counterfactual” history. A genre largely kicked off by journalist MacKinlay Kantor’s fictitious 1960 speculation, If the South Had Won the Civil War, it’s been defined by Richard J. Evans, former Regius Professor of History at Cambridge University, as “alternative versions of the past in which one alteration in the timeline leads to a different outcome from the one we know actually occurred.” Or as David Frum, thinking in The Atlantic about what might have happened had the United States not entered World War I in 1917, says about his enterprise: “Like George Bailey in It’s a Wonderful Life, I contemplate these might-have-beens to gain a better appreciation for what actually happened.” In statements like these, historians confront the fact that their discipline is inevitably subject to the problem of survivorship bias.

Maybe that’s why counterfactual history is also a genre with a poor reputation among historians: Evans himself has condemned it, in The Guardian, by writing that it “threatens to overwhelm our perceptions of what really happened in the past.” “The problem with counterfactuals,” Evans says, “is that they almost always treat individual human actors … as completely unfettered,” when in fact historical actors are nearly always constrained by larger forces. FDR could, hypothetically, have called for war in 1939—it’s just that he probably wouldn’t have been elected in 1940, and someone else would have been in office on that Sunday in Oahu. Which, sure, is true, and responsible historians have always, as Evans says, tried “to balance out the elements of chance on the one hand, and larger historical forces (economic, cultural, social, international) on the other, and come to some kind of explanation that makes sense.” That, to be sure, is more or less the historian’s job. But I am sure the man on the wire doesn’t like to be reminded of the absence of a net, either.

The threat posed by survivorship bias extends even into genres that might appear exempt from it: surely the study of literature, which isn’t about “reality” in any strict sense, is immune to its acid bath. But look at Taleb’s example of how a consideration of survivorship bias affects how we think about literature, in the form of a discussion of the reputation of the famous nineteenth-century French novelist Honoré de Balzac.

Let’s say, Taleb proposes, someone asks you why Balzac deserves to be preserved as a great writer, and in reply “you attribute the success of the nineteenth-century novelist … to his superior ‘realism,’ ‘insights,’ ‘sensitivity,’ ‘treatment of characters,’ ‘ability to keep the reader riveted,’ and so on.” As Taleb points out, those characteristics only work as a justification for preserving Balzac “if, and only if, those who lack what we call talent also lack these qualities.” If, on the other hand, there are actually “dozens of comparable literary masterpieces that happened to perish” merely by chance, then “your idol Balzac was just the beneficiary of disproportionate luck compared to his peers.” Without knowing who Balzac’s competitors were, in other words, we are not in a position to know with certainty whether Balzac’s success is due to something internal to his work, or whether his survival is simply the result of dumb luck. So even literature is threatened by survivorship bias.

If you wanted to define the humanities, you could do worse than to say they are the disciplines that pay little to no attention to survivorship bias. Which, one might say, is fine: “In my Father’s house are many mansions,” to cite John 14:2. But the trouble may be that since, as Taleb or Smith point out—and the examples could be multiplied—the work of the humanities shares the same “scholarly” standards as that of many “business writers,” it does not really matter how “radical”—or even merely reformist—their claims are. The similarities of method may simply overwhelm the message.

In that sense, then, despite the efforts of many academics to center a leftist politics on the classrooms of the English department rather than the scientific lab, such a project just may not be possible: the humanities will always be centered on fending off survivorship bias in the guise of biology’s threat to “reduce the complexities of human culture to patterns in animal behavior,” as W.J.T. Mitchell says—and in so doing, the disciplines of culture will inevitably end up arguing, as Walter Benn Michaels says, “that the differences that divide us are not the differences between those of us who have money and those who don’t but are instead the differences between those of us who are black and those who are white or Asian or Latino or whatever.” The humanities are antagonistic to biology because the central concept of Darwinian biology, natural selection, is a version of the principle of survivorship bias, while survivorship bias is a concept that poses a real and constant intellectual threat to the humanities—and finally, to complete the circle, survivorship bias is the only argument against allowing the rich to run the world according to their liking. It may, then, be no wonder that, as the tide has gone out on the American dream, the American academy has essentially responded by saying “let’s talk about something else.” To the gentlemen and ladies of the American disciplines of the humanities, the wealthy are just the adversary.