Par For The Course: Memorial Day, 2016

 

For you took what’s before me and what’s behind me
You took east and west when you would not mind me
Sun, moon and stars from me you have taken
And Christ likewise if I’m not mistaken.
“Dónal Óg.” Traditional.

 

None of us were sure. After two very good shots—a drive off the tee and a three- or four-wood second—both ladies found themselves short of the green by more than forty yards. Two chips later, neither of which was close, both had made fives—scores that were either pars or bogeys. But we did not know which; that is, we didn’t know what par was on the hole, the eighth on Medinah’s Course One. In normal play the difference would hardly have mattered, but it mattered here because our foursome was playing as part of a larger tournament, and the tournament was scored in what is called a “modified Stableford” format. In modified Stableford, points are assigned to each score: instead of adding up total strokes (as in stroke play) or holes won (as in match play), players receive zero points for a par but lose a point for a bogey. To know what the ladies had scored, then, we needed to know the par—and since Course One had only just reopened the previous year after a renovation, none of us knew whether the ladies’ par had changed with it. The tournament scorecard was no help—we needed a regular scorecard to check against, which we could only get when we made the turn back toward the clubhouse after the ninth hole. When we did, we learned what we needed to know—and I learned just how much today’s women golfers still have in common with both French women, circa 1919, and the nation of France, today.
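To see concretely why the name mattered, here is a minimal sketch of modified Stableford scoring in Python. The point table it assumes (eagle or better +5, birdie +2, par 0, bogey −1, double bogey or worse −3) is a common one I have supplied for illustration, not necessarily the table our tournament used; the essay only establishes that par earned zero and bogey cost one.

```python
def stableford_points(strokes: int, par: int) -> int:
    """Modified Stableford points for one hole, using an illustrative
    point table: eagle or better +5, birdie +2, par 0, bogey -1,
    double bogey or worse -3."""
    diff = strokes - par
    if diff <= -2:
        return 5
    if diff == -1:
        return 2
    if diff == 0:
        return 0
    if diff == 1:
        return -1
    return -3

# The ladies' fives on Medinah's eighth hole:
print(stableford_points(5, par=4))  # -1: against a par four, a five is a bogey
print(stableford_points(5, par=5))  #  0: against a par five, the same five is a par
```

The identical stroke count swings a full point either way, which is exactly why the number printed on the card was worth arguing about.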

The eighth hole on Medinah Country Club’s Course One is, for men, a very long par four, measuring 461 yards from the back tee. For the most part it is straight, though it curves slightly from left to right along its length. Along with length, the hole is also defended by a devilish green that slopes steeply from a high left side down to a low right side. It is an extremely difficult hole, ranked the fifth-hardest on the golf course. And though the ladies do not play from the back tees, the eighth is still nearly 400 yards for them, which even for very good women players is quite long; it is not unusual to find ladies’ par fives at that distance. Hence we had good reason to wonder whether the tournament scorecard had been printed in error.

Returning to the clubhouse, we went by the first tee where all the scorecards for Course One are kept. Picking one up, I quickly scanned it and found that, indeed, the par for the eighth hole was four for the ladies, as the tournament scorecard said. At that instant, one of the assistant pros happened by, and I asked him about it: “Well,” he said, “if the par’s the same for everyone it hardly matters—par’s just a number, anyway.” In a sense, of course, he was right: par really is, in one way, completely arbitrary. A golfer scores what she scores: whether that is “par” or not really makes little difference—par is just a name, it might be said. Except that in this case the name of the thing really did matter, because it had a direct effect on the scoring for the tournament as a whole … I could feel my brain slowly sinking into a mental abyss, as I tried to work out the possible consequences of what might appear to be merely an inconsequential name change.

What I immediately realized, at least, was that making the hole a par four greatly amplified the efforts of a long-hitting woman: being able to reach that green in two gave such a woman an even greater advantage over her fellow competitors than she already had simply by hitting the ball farther, since she could putt for a score that earned points while everyone else fought to avoid the bogey that would cost one. Making the hole a par four made such a woman an electric guitar against everyone else’s acoustic: she would just drown everyone out. Furthermore, that advantage would multiply the more rounds the tournament played: the interest, in other words, would compound.

It’s in that sense that, researching another topic, I became interested in the fate of Frenchwomen in the year 1919—the year after the end of the Great War, or World War I. That war, as everyone knows, virtually wiped out an entire generation of young men: Britain, for example, lost nearly a million young men in battle, while France lost nearly one and a half million. (Germany, by comparison, lost nearly two million.) Yet, although occasionally the point comes up during Veterans Day observances in America—what the Europeans call “Armistice Day” is, with good reason, a much bigger deal—or during English-class discussions of the writers of the 1920s (Fitzgerald or Hemingway, the “Lost Generation”), the fact is treated sentimentally: we are supposed to be sad about those many, many deaths. But what we do not do is think about the long-term effect of losing so many young men (and, though fewer, women) in their youth.

We do not, that is, consider the fact that, as the writer Fraser Cameron observed in 2014, in “1919, the year after the war was over in France, there were 15 women for every man between the ages of 18 and 30.” We do not think about, as Cameron continues, “all of the lost potential, all of the writers, artists, teachers, inventors, and leaders that were killed.” Cameron neglects to consider all of the janitors who were killed too, but his larger point is solid: the fact of the Great War has had a measurable effect on France’s destiny as a nation, because all of those missing young men would have contributed to France’s total productivity, would have paid taxes, would have paid into pensions—and perhaps above all, would have had babies who would have done the same. And those missing French (and British and German and Russian and Italian …) babies still matter—and probably will forever.

“In the past two decades,” says Malcolm Gladwell of the New Yorker, in an article from a few years ago entitled, “The Risk Pool,” “Ireland has gone from being one of the most economically backward countries in Western Europe to being one of the strongest: its growth rate has been roughly double that of the rest of Europe.” Many explanations have been advanced for that growth, Gladwell says—but the most convincing explanation, he also says, may have been advanced by two Harvard economists, David Bloom and David Canning: “In 1979, restrictions on contraception that had been in place since Ireland’s founding”—itself a consequence, by the bye, of the millions of deaths on the Western Front—“were lifted, and the birth rate began to fall.” What had been an average of nearly four children per woman in the late 1960s became, by the mid-nineteen-nineties, less than two. And so Ireland, in those years, “was suddenly free of the enormous social cost of supporting and educating and caring for a large dependent population”—which, as it happens, coincides with the years when the Irish economy exploded. Bloom and Canning argue that this is not a coincidence.

It might then be thought, were you to take a somewhat dark view, that France in 1919 was thus handed a kind of blessing: the French children born in 1919 would be analogous to Irish children in 1969, a tiny cohort easily supported by the rest of the nation. But in fact the situation is rather the opposite: when the French children of 1919 came of age, there were many fewer of them to support the rest of the nation—and, as we know, Frenchmen born in 1919 were doubly the victims of fate: the year they turned twenty was the year Hitler invaded Poland. Hence the losses first realized during the Great War were doubled down—not only was the 1919 generation far smaller than it would have been had there been no general European war in the first decades of the twentieth century, but now there would be many fewer of its grandchildren, too. And so it went: if you are ever at a loss for something to do, there is always the exercise of thinking about all of those millions of missing French (and Italian and English and Russian …) people down through the decades, and the consequences of their loss.

That’s an exercise that, for the most part, people do not do: although nearly everyone in virtually every nation on earth memorializes their war dead on some holiday or another, it’s very difficult to think of the ramifying, compounding costs of those dead. In that sense, the dead of war are a kind of “hidden” cost, for although they are remembered on each nation’s version of Memorial Day or Armistice Day or Veterans Day, they are remembered sentimentally, emotionally. But while that is, to be sure, an important ritual to be performed—because rituals are performed for the living, not the dead—it seems to me also important to remember just what it is that wars really mean: they are a kind of tax on the living and on the future, a tax that represents choices that can never be made and roads that may never be traveled. The dead are a debt that can never be repaid and whose effects become greater, rather than less, with time—a compound interest of horror that goes on working like one of Blake’s “dark satanic mills” through all time.

Hidden costs, of course, are all around us, all of the time; very few of us have the luxury of wondering about how far a bullet fired during, say, the summer of 1916 or the winter of 1863 can really travel. For all of the bullets that ever found their mark, fired in all of the wars that were ever fought, are, and always will be, still in flight, onwards through the generations. Which, come to think of it, may have been what James Joyce meant at the end of what has been called “the finest short story in the English language”—a story entitled, simply, “The Dead.” It’s a story that, like the bullets of the Great War, still travels forward through history; it ends as the story’s hero, Gabriel Conroy, stands at the window during a winter’s night, having just heard from his wife—for the first time ever—the story of her youthful romance with a local boy, Michael Furey, long before she ever met Gabriel. At the window, he considers how Furey’s early death of tuberculosis affected his wife’s life, and thus his own: “His soul swooned slowly as he heard the snow falling faintly through the universe and, faintly falling, like the descent of their last end, upon all the living and the dead.” As Joyce saw, all the snowflakes are still falling, all the bullets are still flying, and we will never, ever, really know what par is.


So Small A Number

How chance the King comes with so small a number?
The Tragedy of King Lear. Act II, Scene 4.

 

Who killed Michael Brown, in Ferguson, Missouri, in 2014? According to the legal record, it was police officer Darren Wilson who, in August of that year, fired twelve bullets at Brown during an altercation in Ferguson’s streets—the last being, said the coroner’s report, likely the fatal one. According to the protesters against the shooting (the protests that propelled the #BlackLivesMatter movement to national prominence), the real culprit was the racism of the city’s police department and civil administration; a charge that gained credibility later when questionable emails written by, and sent to, city employees became public knowledge. In this account, the racism of Ferguson’s administration itself simply mirrored the racism that is endemic to the United States; Darren Wilson’s thirteenth bullet, in short, was racism. Yet, according to the work of Radley Balko of the Washington Post, among others, the issue that lay behind Brown’s death was not racism, per se, but rather a badly structured political architecture that fails to consider a basic principle of reality banally familiar to such bastions of sophisticated philosophic thought as Atlantic City casinos and insurance companies: the idea that, in the words of the New Yorker’s Malcolm Gladwell, “the safest and most efficient way to provide [protection]” is “to spread the costs and risks … over the biggest and most diverse group possible.” If that is so, then perhaps Brown’s killer was whoever caused Americans to forget that principle—in which case a case could be made that the culprit was a Scottish philosopher who lived more than two centuries ago: the sage of skepticism, David Hume.

Hume is well-known in philosophical circles for, among other contributions, describing what has come to be called the “is-ought problem”: in his early work, A Treatise of Human Nature, Hume said his point was that “the distinction of vice and virtue is not founded merely on the relations of objects”—or, that just because reality is a certain way, it does not follow that it ought to be that way. The British philosopher G.E. Moore later called the act of mistaking is for ought the “naturalistic fallacy”: in 1903’s Principia Ethica, Moore asserted (as J.B. Schneewind of Johns Hopkins has paraphrased it) that “claims about morality cannot be derived from statements of facts.” It is a claim, in other words, that serves to divide questions of morality, or values, from questions of science, or facts—and, as should be self-evident, the work of the humanities requires an intellectual claim of this form in order to exist. If morality, after all, were amenable to scientific analysis, there would be little reason for the humanities.

Yet there is widespread agreement among intellectuals that the humanities are not subject to scientific analysis, specifically because only the humanities can tackle questions of “value.” Thus, for instance, we find professor of literature Michael Bérubé, of Pennsylvania State University—an institution noted for its devotion to truth and transparency—scoffing “as if social justice were a matter of discovering the physical properties of the universe” when faced with doubters like the Harvard biologist E. O. Wilson, who has had the temerity to suggest that the humanities could learn something from the sciences. And, Wilson and others aside, even some scientists subscribe to some version of this split: the biologist Stephen Jay Gould, for example, echoed Moore in his essay “Non-Overlapping Magisteria” by claiming that while the “net of science covers the empirical universe: what is it made of (fact) and why does it work this way (theory),” the “net of religion”—which I take in this instance as a proxy for the humanities generally—“extends over questions of moral meaning and value.” Other examples could be multiplied.

How this seemingly arid intellectual argument affected Michael Brown can be explained directly, albeit not easily. Perhaps the simplest route is by reference to the Malcolm Gladwell article I have already cited: the 2006 piece entitled “The Risk Pool.” In a superficial sense, the text is a social history of how social insurance and pensions became widespread in the United States after the Second World War, especially in the automobile industry. But in a more inclusive sense, “The Risk Pool” is about what could be considered a kind of scientific law—or, perhaps, a law of the universe—and how, in a very direct sense, that law affects social justice.

In the 1940s, Gladwell tells us, the leader of the United Auto Workers union was Walter Reuther—a man who felt that “risk ought to be broadly collectivized.” Reuther thought that providing health insurance and pensions ought to be a function of government: that way, the largest possible pool of laborers would be paying into a system that could provide for the largest possible pool of recipients. Reuther’s thought, that is, most determinedly centered on issues of “social justice”: the care of the infirm and the aged.

Reuther’s notions, however, could also be thought of in scientific terms: as an instantiation of what statisticians call the “law of large numbers.” According to the Caltech physicist Leonard Mlodinow, the law of large numbers can be described as “the way results reflect underlying probabilities when we make a large number of observations.” A more colorful way to think of it is the way the trader and New York University professor Nassim Taleb puts it in his book Fooled By Randomness: The Hidden Role of Chance in Life and in the Markets: there, Taleb observes that, were Russian roulette a game in which the survivors gained the savings of the losers, then “if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday—but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).” In general, the law of large numbers is how casinos (or investment banks) make money legally (and bookies make it illegally): by taking enough bets, which thereby cancel each other out, the institution, whether it sits in a corner tavern or on Wall Street, can charge customers for the privilege of betting—and never run the risk of failure it would incur were it to bet one side or the other itself. Less concretely, the same law is what allows us to confidently assert a belief in scientific results: because they can be repeated again and again, we can trust that they reflect something real.
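Taleb’s thought experiment is easy to check with a toy Monte Carlo simulation; the particulars below (a six-chamber revolver, one trigger pull a year for twenty-five years, ten thousand players) are illustrative assumptions of mine rather than figures taken from Fooled By Randomness.

```python
import random

def survives(rounds: int = 25, chambers: int = 6) -> bool:
    """One player pulls the trigger once a year for `rounds` years."""
    return all(random.randrange(chambers) != 0 for _ in range(rounds))

random.seed(0)
players = 10_000
survivors = sum(survives() for _ in range(players))

# Any individual's chance of surviving 25 pulls is only (5/6)**25,
# about 1 percent -- yet with enough players the count of survivors
# lands reliably near players * (5/6)**25: the law of large numbers.
print(survivors, round(players * (5 / 6) ** 25))
```

Run it a few times with different seeds and the individual outcomes change while the aggregate barely moves; that is the whole business model of the casino, the bookie, and the insurer.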

Reuther’s argument about social insurance and pensions more or less explicitly mirrors that law: like a casino, a social insurance scheme works by including enough people that the healthy contributors paying into the fund balance out the sick people drawing from it. In the same fashion, a pension fund works by ensuring that there are enough productive workers paying in to cancel out the aged people drawing out. In both cases, the scheme can only work if enough people are included: if there are too few, the fund or the casino runs the risk that what is drawn out will exceed what is paid in, at which point the operation fails. (In gambling, this is called “breaking the bank”; Ward Wilson pithily explains why that doesn’t happen very often in his learned tome Gambling for Winners: Your Hard-Headed, No B.S., Guide to Gaming Opportunities With a Long-Term, Mathematical, Positive Expectation: “the casino has more money than you.”) Both casinos and insurance funds must have large numbers of participants in order to function: as the numbers decrease, the risk of failure increases. Reuther therefore thought that the safest possible way to provide social protection for all Americans was to include all Americans.
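The same point can be made for Reuther’s pools directly. In the sketch below, the premium, the claim size, and the 10 percent claim probability are made-up numbers chosen only to show how the chance of a fund coming up short shrinks as the pool grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def shortfall_probability(pool_size: int, trials: int = 100_000,
                          p_claim: float = 0.10, claim: float = 10_000.0,
                          premium: float = 1_200.0) -> float:
    """Estimate the chance that a year's claims exceed a year's premiums.

    Each member pays `premium`; each independently files a `claim` with
    probability `p_claim`. Expected claims per member (1,000) sit safely
    below the premium, yet small pools still run short surprisingly often.
    """
    claimants = rng.binomial(pool_size, p_claim, size=trials)
    return float(np.mean(claimants * claim > premium * pool_size))

for size in (10, 100, 1_000, 10_000):
    print(size, shortfall_probability(size))
```

Under these assumptions the ten-member pool comes up short in roughly a quarter of the simulated years, while the ten-thousand-member pool essentially never does: Reuther’s argument, reduced to arithmetic.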

Yet, according to those following Moore’s concept of the “naturalistic fallacy,” Reuther’s argument would be considered an illicit intrusion of scientific ideas into the realm of politics, or “value.” Again, that might appear to be an abstruse argument between various schools of philosophers, or between varieties of intellectuals, scientific and “humanistic.” (It’s an argument that, in addition to accruing to the humanities the domain of “value,” also cedes them categories like stylish writing—as if scientific arguments could only be expressed by equations rather than by quality of expression, and as if there weren’t scientists who are brilliant writers and humanist scholars who are dreadful ones.) But while in one sense this argument takes place in very rarefied air, in another it takes place on the streets where we live. Or, more specifically, the streets where Michael Brown was shot and killed.

The problem of Ferguson, Radley Balko’s work for the Washington Post tells us, is not one of “race,” but instead a problem of poor people. More exactly, a problem of what happens when poor people are excluded from larger population pools—or, in other words, when the law of large numbers is excluded from discussions of public policy. Balko’s story draws attention to two inarguable facts. The first is that there “are 90 municipalities in St. Louis County”—Ferguson’s county—and nearly all of them “have their own police force, mayor, city manager and town council,” while 81 of those towns also have their own municipal courts, capable of sentencing lawbreakers to pay fines. The second concerns the second-largest Missouri urban county by population, Kansas City’s Jackson County, which is “geographically larger than St. Louis County and has about two-thirds the population”—and yet “has just 19 municipalities, and just 15 municipal courts.” Comparing the two counties, that is, shows that St. Louis County is far more segmented than Jackson County: there are many more population pools in the one than in the other.

Knowing what we know about the law of large numbers, then, it might not be surprising that a number of the many municipalities of St. Louis County are worse off than the few municipalities of Jackson County: some towns in St. Louis County, Balko reports, “can derive 40 percent or more of their annual revenue from the petty fines and fees collected by their municipal courts”—rather than, say, property taxes. That, it seems likely, is because instead of many property owners paying taxes, there are a large number of renters paying rent to a small number of landlords, who in turn are wealthy enough to minimize their tax burden through tax lawyers and other maneuvers. Because these towns thus cannot depend on property tax revenue, they must instead depend on the fines and fees their courts can recoup from residents: an operation that, because of the chaos it necessarily implies for the lives of those citizens, usually results in more poverty. (It’s difficult to apply for a job, for example, if you are in jail due to failure to pay a parking ticket.) Yet if the law of large numbers is excluded a priori from political discussion—as some in the humanities insist it must be, whether out of disciplinary self-interest or some other reason—that necessarily implies that residents of Ferguson cannot address the real causes of their misery, a fact that may explain why those addressing the problems of Ferguson focus so much on “racism” rather than on the structural issues raised by Balko.

The trouble, however, with identifying “racism” as the explanation for Michael Brown’s death is that it leads to a set of “solutions” that do not address the underlying issue. In the November following Brown’s death, for example, Trymaine Lee of MSNBC reported that the federal Justice Department “held a two-day training with St. Louis area police on implicit racial bias and fair and impartial policing”—as if the problem of Ferguson were wholly the fault of the police department, or even of the town administration as a whole. Not long afterwards, the Department of Justice reported (according to Ray Sanchez of CNN) that, while Ferguson is 67% African-American, in the two years prior to Brown’s death “85% of people subject to vehicle stops by Ferguson police were African-American,” while “90% of those who received citations were black and 93% of people arrested were black”—data that seem to imply that, were those numbers only closer to 67%, there would be no problem in Ferguson.

Yet even if the people arrested in Ferguson were proportionately black, that would have no effect on the reality that—as Mike Maciag of Governing reported shortly after Brown’s death—“court fine collections [accounted] for one-fifth of [Ferguson’s] total operating revenue” in the years leading up to the shooting. The problem of Ferguson isn’t that its residents are black, which is why the town’s problems cannot be solved by, say, firing all the white police officers and hiring black ones in their place. Instead, Ferguson’s difficulty is not just that the town’s citizens are poor, but that they are politically isolated.

There is, in sum, a fundamental reason that the doctrine of “separate but equal” is not merely bad for American schools, as the Supreme Court held in its 1954 decision in Brown v. Board of Education, the landmark case that struck down legally mandated school segregation. That reason is the same at all scales: from the particle collider at CERN exploring the fundamental constituents of the universe to the roulette tables of Las Vegas to the Social Security Administration, the greater the number of inputs, the greater the certainty, and hence safety, of the results. Instead of affirming that law of the universe, however, the work of people like Michael Bérubé and others is devoted to questioning whether universal laws exist—in other words, to resisting the encroachment of the sciences on their turf. Perhaps that resistance is somehow helpful in some larger sense; perhaps it is true that, as is often claimed, the humanities enlarge our sense of what it means to be human, among other benefits sometimes described—I make no claims on that score.

What’s absurd, however, is the monopolistic claim sometimes retailed by Bérubé and others that the humanities have an exclusive right to political judgment: if Michael Brown’s death demonstrates anything, it ought (a word I use without apology) to show that, by promoting the idea of the humanities as distinct from the sciences, humanities departments have in fact collaborated (another word I use without apology) with people who have a distinct interest in promoting division and discord for their own ends. That doesn’t mean, of course, that anyone who has ever read a novel or seen a film helped to kill Michael Brown. But just as institutions that cover up child abuse—like the Catholic Church, or certain institutions of higher learning in Pennsylvania—bear a responsibility to their victims, so too is there a danger in thinking that the humanities have a monopoly on politics. Darren Wilson did have a thirteenth bullet, though it wasn’t racism. Who killed Michael Brown? Why, if you think that morality should be divided from facts … you did.

Hot Shots

 

… when the sea was calm all boats alike
Show’d mastership in floating …
—William Shakespeare.
     Coriolanus, Act IV, Scene 1 (1608).

 

 

“Indeed,” wrote the Canadian scholar Marshall McLuhan in 1964, “it is only too typical that the ‘content’ of any medium blinds us to the character of the medium.” Once it was a well-known line among literate people, though it is much less so now. It occurred to me recently, however, as I read an essay by Walter Benn Michaels of the University of Illinois at Chicago, in the course of which Michaels took issue with Matthew Yglesias of Vox. Yglesias, Michaels tells us, tried to make the argument that

although “straight white intellectuals” might tend to think of the increasing economic inequality of the last thirty years “as a period of relentless defeat for left-wing politics,” we ought to remember that the same period has also seen “enormous advances in the practical opportunities available to women, a major decline in the level of racism … and wildly more public and legal acceptance of gays and lesbians.”

Michaels replies to Yglesias’ argument by pointing out that “10 percent of the U.S. population now earns just under 50 percent of total U.S. income”—a figure that is, unfortunately, just the tip of the economic iceberg when it comes to inequality in America. But the real problem—the problem Michaels’ reply does not do justice to—is that there is a logical flaw in the kind of “left” we have now: one that advocates for the rights of minorities rather than labors for the benefit of the majority. That is, a “cultural” left rather than a scientific one: the kind we had when, in 1910, the American philosopher John Dewey could write (without being laughed at) that Darwin’s Origin of Species “introduced a mode of thinking that in the end was bound to transform the logic of knowledge, and hence the treatment of morals, politics, and religion.” The physicist Freeman Dyson discovered why when he was just twenty years old, after Winston Churchill’s government paid him to think about what was really happening in the flak-filled skies over Berlin.

The British had a desperate need to know, because they were engaged in bombing Nazi Germany back to the Renaissance, at least. Hence they employed Dyson as a statistician, to analyze the operations of Britain’s Bomber Command. Specifically, Dyson was to investigate whether bomber crews “learned by experience”: whether the more missions a crew flew, the better it became at blowing up Germany—and the Germans in it. Obviously, if crews did learn, then Bomber Command could try to isolate what the experienced crews were doing and teach it to the others, so that Germany and the Germans might be blown up better.

The bomb crews themselves believed, Dyson tells us, that as “they became more skillful and more closely bonded, their chances of survival would improve”—a belief that, for obvious reasons, was “essential to their morale.” But as Dyson went over the statistics of lost bombers, examining the relation between experience and loss rates while controlling for the effects of weather and geography, he discovered the terrible truth:

“There was no effect of experience on loss rate.”

The survival of each bomber crew, in other words, depended on chance, not skill, and the crews’ belief in their own expertise was just an illusion in the face of horror—an illusion that becomes all the more awful when you know that, of the 125,000 aircrew who served in Bomber Command, 55,573 were killed in action.
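To make Dyson’s check concrete, here is a sketch of the kind of tabulation he describes, run on synthetic sortie data I have invented for the purpose; in this made-up world, as in the records Dyson actually examined, every sortie carries the same risk no matter how experienced the crew.

```python
import random
from collections import defaultdict

random.seed(42)
LOSS_RATE = 0.04  # per-sortie loss probability, identical for every crew

# Synthetic records: each crew flies sorties until it is lost or reaches 30.
sorties = []  # (missions flown before this sortie, lost on this sortie)
for _ in range(5_000):
    for experience in range(30):
        lost = random.random() < LOSS_RATE
        sorties.append((experience, lost))
        if lost:
            break

# Tabulate loss rate by experience bucket, the comparison Dyson ran on
# real Bomber Command data (while also controlling for weather and geography).
buckets = defaultdict(lambda: [0, 0])  # bucket -> [losses, sorties flown]
for experience, lost in sorties:
    bucket = experience // 10
    buckets[bucket][0] += lost
    buckets[bucket][1] += 1

for bucket in sorted(buckets):
    losses, flown = buckets[bucket]
    print(f"{bucket * 10:>2}-{bucket * 10 + 9} prior missions: "
          f"loss rate {losses / flown:.3f} over {flown:,} sorties")
```

Every bucket’s loss rate hovers around the same four percent; experience shows no effect because, by construction, there is none to find, which is precisely the shape of the answer Dyson got from the real records.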

“Statistics and simple arithmetic,” Dyson therefore concluded, “tell us more about ourselves than expert intuition”: a cold lesson to learn, particularly at the age of twenty—though that can be tempered by the thought that at least it wasn’t Dyson’s job to go to Berlin. Still, the lesson is so appalling that perhaps it is little wonder that, after the war, it was largely forgotten, and has only been taken up again in a field nearly as joyful as the business of killing people on an industrial scale is horrifying: sport.

In one of the most cited papers in the history of psychology, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Thomas Gilovich, Robert Vallone and Amos Tversky studied how “players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot”—but “detailed analysis … provided no evidence for a positive correlation between the outcomes of successive shots.” Just as, in other words, the British airmen believed some crews had “skill” that kept them in the air, when in fact all that kept them aloft was, say, the poor aim of a German anti-aircraft gunner or a happily-timed cloud, so too did the three co-authors find that, in basketball, people believed some shooters could get “hot”: that is, reel off seemingly impossible numbers of shots in a row, like when Ben Gordon, then with the Chicago Bulls, knocked down 9 consecutive three-pointers against Washington in 2006. But in fact such streaks are just what chance produces, given a player’s overall shooting percentage: toss a coin enough times and the coin will produce “runs” of heads and tails too.
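The coin-toss point is easy to demonstrate. The sketch below is not the Gilovich–Vallone–Tversky analysis itself; the 45 percent shooting figure and the thousand-shot “season” are assumptions of mine, meant only to show how readily pure chance manufactures the streaks we read as a player being “hot.”

```python
import random

def longest_streak(makes: list[bool]) -> int:
    """Length of the longest run of consecutive made shots."""
    best = current = 0
    for made in makes:
        current = current + 1 if made else 0
        best = max(best, current)
    return best

random.seed(7)
P_MAKE = 0.45        # a steady 45% shooter: no "hot" state exists anywhere
SHOTS_PER_SEASON = 1_000

seasons = 1_000
long_runs = 0
for _ in range(seasons):
    shots = [random.random() < P_MAKE for _ in range(SHOTS_PER_SEASON)]
    if longest_streak(shots) >= 7:
        long_runs += 1

# Most simulated seasons contain a run of at least seven straight makes,
# even though every shot was an independent 45% proposition.
print(long_runs / seasons)
```

The streaks are real; it is the inference from streak to “hotness” that the numbers refuse to support.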

The “hot hand” concept in fact applies to more than just the players: it extends to coaches as well. “In sports,” says Leonard Mlodinow in his book The Drunkard’s Walk: How Randomness Rules Our Lives, “we have developed a culture in which, based on intuitive feelings of correlation, a team’s success or failure is often attributed largely to the ability of the coach”—a reality that perhaps explains why, as Florida’s Lakeland Ledger reported in 2014, the average tenure of NFL coaches over the previous decade had been 38 months. Yet as Mlodinow also says, “[m]athematical analysis of firings in all major sports … has shown that those firings had, on average, no effect on team performance”: fans (and, perhaps more importantly, owners) tend to think of teams rising and falling with their coach, while in reality a team’s success has more to do with the talent on its roster.

Yet while sports are a fairly trivial part of most people’s lives, that is not true when it comes to our “coaches”: the managers who run large corporations. As Diane Stafford found for the Kansas City Star a few years back, American corporations have as little sense of the real value of their CEOs as NFL owners have of their coaches: the “pay gap between large-company CEOs and average American employees,” Stafford said, “vaulted from 195 to 1 in 1993 to 354 to 1 in 2012.” Meanwhile, more than a third “of the men who appeared on lists ranking America’s 25 highest-paid corporate leaders between 1993 and 2012 have led companies bailed out by U.S. taxpayers, been fired for poor performance or led companies charged with fraud.” Just like the Lancasters flown by Dyson’s aircrews, American workers (and their companies’ stockholders) have been taken for a ride by men flying on the basis of luck, not skill.

Again, of course, many in what’s termed the “cultural” left would insist that they, too, stand with American workers against the bosses, that they, too, wish things were better, and that they, too, think paying twenty bucks for a hot dog and a beer is an outrage. What matters, however, isn’t what professors or artists or actors or musicians or the like say—just as it didn’t matter what Britain’s bomber pilots thought about their own skills during the war. What matters is what their jobs say. And the fact of the matter is that cultural production, whether in academia or New York or Hollywood, is simply of a piece with thinking you’re a hell of a pilot, or that you must be “hot,” or that Phil Jackson is a genius. That might sound counterintuitive, of course—I thought writers and artists and, especially, George Clooney were all on the side of the little guy!—but, as McLuhan says, what matters is the medium, not the message.

The point is likely easiest to explain in terms of the academic study of the humanities, because at least there people are forced to explain themselves in order to keep their jobs. What one finds, across the political spectrum, is some version of the same dogma: students in literary studies can, for instance, point to the American novelist James Baldwin’s insistence, in the 1949 essay “Everybody’s Protest Novel,” that “literature and sociology are not the same,” while, at the other end of the political spectrum, political science students can point to Leo Strauss’ attack on “the ‘scientific’ approach to society” in his 1958 Thoughts on Machiavelli. Every discipline in the humanities has some version of the point, because without such a doctrine it couldn’t exist: without it, there’s just a bunch of people sitting in a room reading old books.

The effect of these dogmas can perhaps best be seen by reference to the philosophical version, which has the benefit of at least being clear. David Hume described what is now called the “is-ought problem”; as the Scotsman claimed in A Treatise of Human Nature, “the distinction of vice and virtue is not founded merely on the relations of objects.” Later, in 1903’s Principia Ethica, the British philosopher G.E. Moore called the same point the “naturalistic fallacy”: the idea that, as J.B. Schneewind of Johns Hopkins has put it, “claims about morality cannot be derived from statements of facts.” The advantage for philosophers is clear enough: if it is impossible to talk about morality or ethics strictly by the light of science, that certainly justifies talking about philosophy to the exclusion of anything else. But in light of the facts about shooting hoops or being killed by delusional Germans, I would hope that the absurdity of Moore’s “idea” ought to be self-evident: if it can be demonstrated that something is a matter of luck, and not skill, that changes the moral calculation drastically.

That, then, is the problem with running a “left” based around the study of novels or rituals or films or whatever: at the end of the day, the study of the humanities, just like the practice of the arts, discourages the thought that, as Mlodinow puts it, “chance events are often conspicuously misinterpreted as accomplishments or failures.” And without such a consideration, I would suggest, any talk of “values” or “morality” or whatever you would like to call it is empty. It matters whether your leader is lucky or skillful; it matters whether success is the result of hard work or of who your parents are—and a “left” built on the opposite premises is not, to my mind, a “left” at all. Many people in the “cultural left” may believe that their overt exhortations to virtue outweigh the covert message told by their institutional positions, but reality tells a different tale: if you tell people they can fly, you should not be shocked when they crash.