Size Matters

That men would die was a matter of necessity; which men would die, though, was a matter of circumstance, and Yossarian was willing to be the victim of anything but circumstance.
—Joseph Heller, Catch-22.
I do not pretend to understand the moral universe; the arc is a long one, my eye reaches but little ways; I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. And from what I see I am sure it bends towards justice.
—Theodore Parker, “Of Justice and the Conscience.”

Things refuse to be mismanaged long.
—Ralph Waldo Emerson, “Compensation.”

 

[Image: The Casino at Monte Carlo]

 

 

Once, wrote the baseball statistician Bill James, there was “a time when Americans” were such “an honest, trusting people” that they actually had “an unhealthy faith in the validity of statistical evidence”—but by the time James wrote in 1985, things had gone so far the other way that “the intellectually lazy [had] adopted the position that so long as something was stated as a statistic it was probably false.” Today, in no small part because of James’ work, that is likely no longer as true as it once was, but the news has not spread to many corners of academia: as University of Virginia historian Sophia Rosenfeld remarked in 2012, in many departments it’s still fairly common to hear it asserted, for example, that all “universal notions are actually forms of ideology” and that “there is no such thing as universal common sense.” Usually such assertions are followed by a claim for their political utility—but in reality, widespread ignorance of statistical effects is what allowed Donald Trump to be elected: although the media spent much of the presidential campaign focused on questions like the size of Donald Trump’s … hands, the size that actually mattered in determining the election was a statistical concept called sample size.

First stated by the mathematician Jacob Bernoulli in his 1713 book, Ars Conjectandi, sample size is the idea that “it is not enough to take one or another observation for such a reasoning about an event, but that a large number of them are needed.” Admittedly, it might not seem like much of an observation: as Bernoulli himself acknowledged, even “the most stupid person, all by himself and without any preliminary instruction,” knows that “the more such observations are taken into account, the less is the danger of straying from the goal.” But Bernoulli’s remark is the very basis of science: as an article in Nature Reviews Neuroscience put the point in 2013, “a study with low statistical power”—that is, one with few observations—“has a reduced chance of detecting a true effect.” Sample sizes need to be large enough to eliminate chance as a possible factor.
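
Bernoulli’s point is easy to check for oneself. The following is a minimal sketch (the coin, the sample sizes, and the number of repeated “studies” are arbitrary choices of mine, not anything from Bernoulli): estimates drawn from a handful of observations stray badly from the truth, while estimates drawn from thousands barely stray at all.

```python
import random

# Bernoulli's observation in miniature: the "true effect" is a coin
# that comes up heads half the time; we estimate that rate from
# samples of various sizes and watch how far the estimates stray.
random.seed(42)

TRUE_RATE = 0.5

def estimate(sample_size):
    """Estimate the heads rate from `sample_size` coin flips."""
    heads = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return heads / sample_size

for n in (5, 50, 500, 5000):
    # Repeat the size-n "study" many times and record the misses.
    errors = [abs(estimate(n) - TRUE_RATE) for _ in range(1000)]
    print(f"n={n:4d}  average miss={sum(errors)/len(errors):.3f}  worst miss={max(errors):.3f}")
```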

If that isn’t known, it’s possible to go seriously astray: consider an example drawn from the work of the Israeli psychologists Amos Tversky (a MacArthur “genius” grant winner) and Daniel Kahneman (a Nobel laureate)—a study of “which of two toys infants will prefer.” Let’s say that in the course of research our investigator finds that, of “the first five infants studied, four have shown a preference for the same toy.” To most psychologists, the two say, this would be enough for the researcher to conclude that she’s on to something—but in fact, the two write, a “quick computation” shows that “the probability of a result as extreme as the one obtained” being due simply to chance “is as high as 3/8.” The scientist might be inclined to think, in other words, that she has learned something—when in fact her result has a 37.5 percent chance of being due to nothing at all.
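
The “quick computation” is ordinary binomial arithmetic, and a few lines of code reproduce it, assuming (as Tversky and Kahneman do) that each infant chooses between the two toys independently and at random:

```python
from math import comb

# If each of five infants picks between two toys at random (p = 1/2),
# what is the chance that at least four of them agree on one toy?
n, p = 5, 0.5
at_least_four_pick_toy_a = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (4, 5))
# Double it, since the agreement could land on either toy.
print(2 * at_least_four_pick_toy_a)  # 0.375, i.e. 3/8
```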

Yet when we turn from science to politics, what we find is that an American presidential election is like a study that draws grand conclusions from five babies. Instead of being one big sample—as a direct national popular vote would be—presidential elections are broken up into fifty state-level elections: the Electoral College system. What that means is that American presidential elections maximize the role of chance rather than minimize it.
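
Fittingly, given the casino pictured above, the claim can be checked with a small Monte Carlo simulation. Every number below is invented (fifty equal states, a thousand voters each, one candidate with 50.5 percent support everywhere), but the mechanism comes through: in repeated runs, the less-supported candidate wins the state-by-state count noticeably more often than the single national count.

```python
import random

# A toy Monte Carlo comparison (all numbers invented): one large
# national count versus fifty winner-take-all state contests.
random.seed(0)

STATES = 50
VOTERS_PER_STATE = 1_000
LEAN = 0.505  # candidate A's true support, uniform everywhere

def run_election():
    """Return (A wins the popular count, A wins the state count)."""
    national_a = states_a = 0
    for _ in range(STATES):
        a_votes = sum(random.random() < LEAN for _ in range(VOTERS_PER_STATE))
        national_a += a_votes
        states_a += a_votes > VOTERS_PER_STATE // 2
    popular = national_a > STATES * VOTERS_PER_STATE // 2
    electoral = states_a > STATES // 2
    return popular, electoral

TRIALS = 300
pop_upsets = elec_upsets = 0
for _ in range(TRIALS):
    popular, electoral = run_election()
    pop_upsets += not popular
    elec_upsets += not electoral

print(f"underdog wins the single big count in {pop_upsets}/{TRIALS} trials")
print(f"underdog wins the state-by-state count in {elec_upsets}/{TRIALS} trials")
```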

The laws of statistics, in other words, predict that chance will play a large role in presidential elections—and as it happens, Tim Meko, Denise Lu and Lazaro Gamio reported for The Washington Post three days after the election that “Trump won the presidency with razor-thin margins in swing states.” “This election was effectively decided,” the trio went on to say, “by 107,000 people”—in an election in which more than 120 million votes were cast, which means the election was decided by less than a tenth of one percent of the total vote. Trump won Pennsylvania by less than 70,000 votes of nearly 6 million, Wisconsin by less than 30,000 of just under 3 million, and Michigan by less than 11,000 out of 4.5 million: the first two by just more than one percent of the total vote each, and Michigan by a whopping .2 percent. To give an idea of how small those margins are, according to the Michigan Department of Transportation it’s possible that a thousand people in the state’s five largest counties were involved in car crashes on Election Day alone—which isn’t even to mention the people who stayed home because they couldn’t find a babysitter.
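
The percentages are easy to verify; here is the arithmetic, using the rounded totals quoted above (official counts differ slightly):

```python
# Checking the arithmetic above (vote totals rounded as in the text;
# official counts differ slightly).
margins = {
    "Pennsylvania": (70_000, 6_000_000),
    "Wisconsin": (30_000, 3_000_000),
    "Michigan": (11_000, 4_500_000),
}
for state, (margin, cast) in margins.items():
    print(f"{state}: {margin / cast:.2%} of votes cast")
print(f"Overall: {107_000 / 120_000_000:.3%} of all ballots")
```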

Trump owes his election, in short, to a system that is vulnerable to chance because it is constructed to turn a large sample (the total number of American voters) into small samples (the fifty states). Science tells us that small sample sizes increase the risk of chance playing a role; American presidential elections use smaller samples than they could; and, like several other presidential elections before it, the 2016 election did not go as predicted. Donald Trump could, in other words, be called “His Accidency” with even greater justice than John Tyler—the first vice-president to be promoted by the death of his boss in office—ever was. Why, then, isn’t that point being made more publicly?

According to John Cassidy of The New Yorker, it’s because Americans haven’t “been schooled in how to think in probabilistic terms.” Cassidy is essentially making the same point Bill James did in 1985, though more delicately—and just why it’s true is, I think, highly damaging to many of Clinton’s biggest fans, because the answer is that they’ve made it that way. The disciplines where many of Clinton’s most vocal supporters make their home, in other words, are precisely those most directly opposed to the kind of probabilistic thinking required to see the flaws in the Electoral College system.

As the Stanford literary scholar Franco Moretti once observed, the “United States is the country of close reading”: the American disciplines dealing with matters of politics, history, and the law have more or less been explicitly constructed to keep knowledge of the laws of chance out. Law schools, for example, use what’s called the “case method,” in which a single case stands in for an entire body of law: a point signaled by the title of the first textbook to use the method, Christopher Langdell’s A Selection of Cases on the Law of Contracts. Other disciplines, such as history, are similar: as Emory University’s Mark Bauerlein has written, many such disciplines depend for their very livelihood upon “affirming that an incisive reading of a single text or event is sufficient to illustrate a theoretical or historical generality.” Rejecting the concept of sample size, in other words, is the very basis of the humanities.

What’s particularly disturbing about this point is that, as Joe Pinsker documented in The Atlantic last year, the humanities attract a wealthier student pool than other disciplines—which is to say that the humanities tend to be populated by students and faculty with a direct interest in maintaining obscurity around the interaction between the laws of chance and the Electoral College. That doesn’t mean that there’s a connection between the architecture of presidential elections and the fact that—as Geoffrey Harpham, former president and director of the National Humanities Center, has observed—“the modern concept of the humanities” (that is, as a set of disciplines distinct from the sciences) “is truly native only to the United States, where the term acquired a meaning and a peculiar cultural force that it does not have elsewhere.” But it does perhaps explain just why many in the national media have been silent regarding that design in the month after the election.

Still, as many in the humanities like to say, it is possible to think that the current American university and political structure is “socially constructed,” or in other words could be constructed differently. The American division between the sciences and the humanities is not the only way to organize knowledge: as the editors of the massive volumes of The Literary and Cultural Reception of Darwin in Europe pointed out in 2014, “one has to bear in mind that the opposition of natural sciences … and humanities … does not apply to the nineteenth century.” If the opposition we today find so omnipresent did not exist then, it may not be necessary now. So if the choice is between the American people getting a real say in the affairs of government (and there’s very good reason to think they don’t have one) and a bunch of rich yahoos spending their early twenties getting drunk, reading The Great Gatsby, and talking about their terrible childhoods … well, I know which side I’m on. But more significantly, although I would not expect it to happen tomorrow, given the laws of sample size and the prospect of eternity, I know how I’d bet.

Or, as another sharp operator who’d read his Bernoulli once put the point:

“The arc of the moral universe is long, but it bends towards justice.”

 

Fine Points

 

Whenever asked a question, [John Lewis] ignored the fine points of whatever theory was being put forward and said simply, “We’re gonna march tonight.”
—Taylor Branch,
   Parting the Waters: America in the King Years, 1954–63

 

 

“Is this how you build a mass movement?” asked the social critic Thomas Frank in response to Occupy Wall Street: “By persistently choosing the opposite of plain speech?” To many in the American academy, the debate is over—and plain speech lost. More than fifteen years ago, articles like the philosopher Martha Nussbaum’s 1999 criticism of Judith Butler, “The Professor of Parody,” and the political scientist James Miller’s late-1999 piece “Is Bad Writing Necessary?” were published—and both sank like pianos. Since then it has seemed settled that (as Nussbaum wrote at the time) the way “to do … politics is to use words in a subversive way.” Yet at a minimum this pedagogy diverts attention from, as Nussbaum says, “the material condition of others”—and at worst, as Walter Benn Michaels suggests, it turns the academy into “the human resources department of the right, concerned that the women [and other minorities] of the upper middle class have the same privileges as the men.” Supposing, then, that bad writers are not simply playing their part in a class war, what is their intention? I’d suggest that subversive writing is best understood as a parody of a tactic used, but not invented, by the civil rights movement: packing the jails.

“If the officials threaten to arrest us for standing up for our rights,” Martin Luther King, Jr. said in a January 1960 speech in Durham, North Carolina, “we must answer by saying that we are willing and prepared to fill up the jails of the South.” King’s speech spoke directly to the movement’s most pressing practical problem: bailing out protestors cost money. In response, Thomas Gaither, a field secretary for the Congress of Racial Equality (CORE), devised a solution he called “jail, no bail.” The historian Taylor Branch explained the concept in Parting the Waters: the “obvious advantage of ‘jail, no bail’ was that it reversed the financial burden of protest, costing the demonstrators no cash while obligating the white authorities to pay for jail space and food.” All protestors had to do was get arrested and serve the time—and thereby stick the state with the bill for their room and board.

Yet Gaither did not invent the strategy. “Packing the jails” began, so far as I can tell, in October of 1909; so reports the Minnesotan Harvey O’Connor in his 1964 book Revolution in Seattle: A Memoir. All that summer, the Industrial Workers of the World (the “Wobblies”) had been engaged in a struggle against “job sharks”: companies that claimed to procure jobs for their clients after the payment of a fee—and then failed to deliver. (“It was customary,” O’Connor wrote, “for the employment agencies … to promote a rapid turnover”: the agencies would take the money and either not produce the job, or the company that “hired” the newly employed would fire them shortly afterwards.) In the summer of 1909 those companies succeeded in getting public assembly and speaking by the Wobblies banned, and legal challenges proved fruitless. So in October of that year the Wobblies “sent out a call” in the union’s newspaper, the Industrial Worker: “Wanted: Men To Fill The Jails of Spokane.”

Five days later, the Wobblies held a “Free Speech Day” rally and got 103 men arrested; by “the end of November 500 Wobblies were in jail.” The laborers filled the city’s jail “to bursting and then a school was used for the overflow, and when that filled up the Army obligingly placed a barracks at the city’s command.” The strategy was working: the “jail expenses threatened to bankrupt the treasuries of cities even as large as Spokane.” As the American writer and teacher Archie Binns put the same point in 1942: it “was costing thousands of dollars every week to feed” the prisoners, and the city was becoming “one big jail.” In this way the protestors threatened to “eat the capitalistic city out of house and home”—and so the “city fathers” of Spokane backed down, instituting a permitting system for public marches and assemblies. “Packing the jails” won.

What, however, has this history to do with the dispute between plain-speakers and bad writers? In the first place, it demonstrates how much our present-day academy would rather talk about Martin Luther King, Jr. and CORE than Harvey O’Connor and the Wobblies. Writing ruefully about left-wing professors like himself, Walter Benn Michaels observes that “we would much rather get rid of racism than get rid of poverty”; elsewhere he says that “American liberals … carry on about racism and sexism in order to avoid doing so about capitalism.” Despite the fact that the civil rights movement borrowed a great deal from the labor movement, today’s left doesn’t have much to say about that—nor much about today’s inequality. Connecting the tactics of the Wobblies to those of the civil rights movement matters, in other words, because it demonstrates continuity where today’s academy, just as much as any billionaire, wants to see a sudden break.

That isn’t the only point of bringing up the “packing the jails” tactic, however. The real point is that writers like Butler are making use of a version of the argument without publicly acknowledging it. As laid out by Nussbaum and others, the unsaid theory (whatever name you’d give it) behind “bad” writing is a version of “packing the jails”: the idea, to be plain, is that by filling enough academic seats with the right sort of person, political change will somehow automatically follow, through a kind of osmosis.

Admittedly, no search of the writings of America’s professors, Judith Butler’s or otherwise, will discover a “smoking gun” regarding that idea—if there is one, presumably it’s buried in an email, or in a footnote in a back issue of Diacritics from 1978. The thesis can only be discovered in the nods and understandings of the “professionals.” On what warrant, then, can I claim that it is their theory? If that’s the plan, how do I know?

My warrant extends from a man who, as Garry Wills of Northwestern says, knew something about “the plain style”: Abraham Lincoln. To Lincoln, the only possible method of interpretation is a judgment of intent: as he said in his “House Divided” speech of 1858, “when we see a lot of framed timbers, different portions of which we know have been gotten out at different times and places by different workmen,” and “we see these timbers joined together, and see they exactly make the frame of a house or a mill,” why, “in such a case we find it impossible not to believe” that everyone involved “all understood each other from the beginning.” Or as Walter Benn Michaels has put the same point: “you can’t do textual interpretation without some appeal to authorial intention.” In other words, when we see a lot of people acting in similar ways, we can make a guess about what they’re trying to do.

In the case of Butlerian feminists—and, presumably, other kinds of bad writers—bad writing allows them to “do politics in [the] safety of their campuses,” as Nussbaum says, by “making subversive gestures through speech.” Instead of “packing the jails,” this pedagogy teaches “packing the academy”: the theory presumably being that, just as Spokane could only jail so many people, the academy can only hold so many professors. (Itself a problem, since there are far fewer professorships available these days, and there are liable to be fewer still.) Since, as Lincoln said of what he saw in the late 1850s, we can only make a guess—but must make a guess—about what those intentions are, I’d hazard that my guess is more or less what these bad writers have in mind.

Unfortunately, in the hands of Butler and others, the tactic is only a parody: it mimics the act of going to jail while eliding the very real differences between that act and attempting to become, say, the Coca-Cola Professor of Rhetoric at Wherever State. A black person willing to go to jail in the South in 1960 was a person with a great deal of courage—and still would be today. But it’s also true that the courageous civil rights volunteers would have been unlikely to conceive of, much less carry out, “packing the jails” without the example of the Wobblies before them—just as it might be argued that, without the sense of being of the same race and gender as their oppressors, the Wobblies might not have had the courage to pack the jails of Spokane. So it could certainly be argued that the work of the “bad writers” is precisely to make such connections—and so create the preconditions for similar movements in the future.

Yet, as George Orwell might have asked, “where’s the omelette?” Where are the people in jail—and where are the decent pay and equal rights that might follow them? Butler and other “radical” critics produce neither: I am not reliably informed of Judith Butler’s arrest record, but I suspect it isn’t long. Nussbaum’s observation that Butler’s pedagogy “instructs people that they can, right now, without compromising their security, do something bold” [emphasis added] wasn’t entirely snide then, and it looks increasingly prescient now. That’s what Nussbaum means when she says that “Butlerian feminism is in many ways easier than the old feminism”: it is a path that shows middle-class white people, women especially, how they can “dissent” without giving up their status or power. Nussbaum thus implies that feminism, or any other kind of “leftism,” practiced along Butler’s lines is, quite literally, physically cowardly—and, perhaps more importantly, her point suggests just why the “left,” such as it is, is losing.

For surely the “Left” is losing: as many, many people besides Walter Benn Michaels have written, economic inequality has risen, and is rising, even as the sentences and jargon of today’s academics have grown more complex—and even as the academy’s own power dissolves into a mire of adjunct professorships and cut-rate labor policies. Emmanuel Saez of the University of California, Berkeley says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928,” and the Nobel laureate Paul Krugman says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” We witness the rise of plutocrats on a scale not seen since the fall of the Bourbons—or perhaps even the Antonines.

That is not to call individual “bad writers” cowards, to be sure: merely to be a black person or a woman requires levels of courage many people will never be aware of in their lifetimes. Yet Walter Benn Michaels is surely correct when he says that, as things now stand, the academic left in the United States today is largely “a police force for, rather than an alternative to, the right,” insofar as it “would much rather get rid of racism [or sexism] than get rid of poverty.” Fighting “power” by means of a program of bad writing, rather than good writing—writing designed to appeal to great numbers of people—is so obviously stupid it could only have been invented by smart people.

The objection is that giving up the program of Butlerian bad writing requires giving up the program of “liberation” her prose suggests: what Nussbaum calls Butler’s “radical libertarian” dream of the “sadomasochistic rituals of parody.” Yet as Thomas Frank has suggested, it’s just that kind of libertarian dream that led the United States into this mess in the first place: America’s recent troubles have, Frank says, resulted from “the political power of money”—a political power achieved courtesy of “a philosophy of liberation as anarchic in its rhetoric as Occupy [Wall Street] was in reality” [emphasis Frank’s]. By rejecting that dream, American academics might obtain “food, schools, votes” and, possibly, less rape and violence for women and men alike. But how?

Well, I have a few ideas—but you’d have to read some plain language.

After the Messiah

There was trouble in the state of Lu, and the reigning monarch called in Confucius to ask for his help. When he arrived at the court, the Master went to a public place and took a seat in the correct way, facing south, and all the trouble disappeared.

—Frances FitzGerald
    Fire in the Lake: The Vietnamese and the Americans in Vietnam
  

Speaking to the BBC about the new season before the turn of the year, Rory McIlroy placidly remarked that “trying to make up for ’13 with two in ’14 would be nice.” Rory’s burden, however, is not as light as his tone: only sixteen men have managed the feat since 1922. And Rory’s opponents do not live only in the record books: recently, Tiger Woods’ agent more or less told Golf Digest that Tiger needed to win a major this year. Although it’s possible for both men to achieve their goals, it isn’t likely: the smooth 63 McIlroy put on Woods at Dubai, while playing in the same group, served that notice. But because of something called the “Tiger Woods Effect,” the collateral damage of this war may include other parties—chief among them the FedEx Cup.

The “Tiger Woods Effect” was named in a 2009 paper by an economist, Jennifer Brown of Northwestern University. The paper, entitled “Quitters Never Win: The (Adverse) Effects of Competing With Superstars,” examined PGA Tour results from the early years of the twenty-first century, and found that all golfers, even the best, played worse when Woods was in the field than when he wasn’t. The difference was about a stroke per tournament; when Tiger was really “on,” the other players were about two shots worse. After controlling for other possible explanations, Brown argued that the result suggests that human beings, faced with the near certainty that no matter their effort they are doomed to second place (even if that belief is misplaced), eventually stop giving their best efforts. That is the Effect.

Once people realize they can’t win—or at least believe they can’t—they stop producing extra effort, Brown’s theory holds: a theory whose truth the mere existence of the FedEx Cup nearly single-handedly attests. Almost certainly, that is, the FedEx Cup was introduced precisely as a response to the “Tiger Woods Effect”: it was first announced in 2005, at the height of Woods’ dominance of the sport. The Cup has been “tweaked” every year since it began in 2007, but its basic form has remained.

Throughout the “regular season,” players accumulate “points”—which are not simply the dollars won at each event. In August, the point leaders gather for a series of “playoff” tournaments whose fields grow progressively smaller, so that by the time of the Tour Championship in September only thirty players remain. As things now stand (after the “tweakings”), because the four playoff events carry higher point totals than regular-season events, it is theoretically possible for even the 30th-ranked player to win the $10 million prize that constitutes the FedEx Cup and the title of “tour champion.”
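
A toy calculation shows how. The point values below are invented for illustration (the Tour’s actual tables differ and have changed from year to year); all that matters is the ratio between playoff and regular-season points:

```python
# Invented point values, for illustration only: the real FedEx Cup
# tables differ and have been "tweaked" repeatedly since 2007.
REGULAR_WIN = 500     # hypothetical points for a regular-season win
PLAYOFF_WIN = 2500    # hypothetical points for a playoff win

# A dominant regular season followed by a quiet playoff run...
season_leader = 5 * REGULAR_WIN                   # 2500 points
# ...versus one ordinary win all year, then two playoff wins.
hot_hand = 1 * REGULAR_WIN + 2 * PLAYOFF_WIN      # 5500 points

print(f"season-long leader: {season_leader} points")
print(f"playoff hot hand:   {hot_hand} points")
```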

For the PGA Tour, the idea is to generate excitement—$10 million, it seems, is cheap for what it buys. As a Grantland piece (“Putting For Dough,” 19 Sept. 2013) suggests, however, there’s something odd about the notion: if the FedEx Cup is meant to identify the best player in golf, it’s indisputable that, nearly every year, “Tiger Woods has had the best season of anyone.” Woods won five events in 2013 alone, and nearly $8 million in prize money. How, in other words, can someone get more money than Woods just for playing well at the right time of year? “Golf,” as the Grantland piece puts it, “is a cumulative sport”—and the FedEx Cup is a glaring exception to that rule.

The FedEx Cup, in sum, is essentially a way to give a big prize to someone not named Woods at the end of the golf season—depending on the mood, it might be called the “Best White Golfer Award,” or something equally snarky. It could be thought of as practical racism at work, on a par with Jim Thorpe having his Olympic medals taken away, Jack Johnson being pursued by the law, or Muhammad Ali being shut out of his sport for years of his athletic prime. Why not just go off the money list? Why all the finagling about “points”? Why, in a sport filled with conservative ideologues, should this obviously “socialistic” mechanism exist?

“Never was any such event,” wrote the Frenchman Alexis de Tocqueville about the French Revolution, “stemming from factors so far back in the past, so inevitable and yet so completely unforeseen.” Or to put it another way: history proceeds by way of ironies—which is perhaps what upsets Woods, if he thinks of it at all. In one sense, that is, there is no better exemplar in golf of the Ayn Randian, John Galt-type hero than Woods, and yet it seems that golf has gone out of its way to avoid rewarding him properly.

It’s in that way, however, that Woods shares the most with the man Tiger’s father always asserted would be the standard by which his son was measured. In the years since Martin Luther King’s assassination, the congruence between one aspect of King’s legacy and a certain capital-friendly American ideology hasn’t escaped the grasp of some on the right. John Danforth, for instance, was a Republican senator from Missouri when he championed the notion of a holiday to honor Dr. King: to Danforth, King symbolized “the spirit of American freedom and self-determination,” as a recent article in Salon tracing the history of the holiday’s establishment notes. Tiger Woods’ ascension to the most successful pitchman in history, in other words, is likely the result of many factors, deep forces that can only be glimpsed, not fully understood, by those moved by them.

Woods’ nearly monomaniacal work ethic, for example, doesn’t have its source solely in his father’s service in the United States Army. Almost certainly, even if Woods is unconscious of it, it has roots that go back long before he, or even his father, was born. Just as certainly, it has something to do with the real legacy of the civil rights movement in general and Martin Luther King, Jr. in particular.

“My father,” wrote Hamden Rice recently in the Daily Kos, “told me with a sort of cold fury” just what it was that Dr. King had done for the South when, as a “smart ass home from first year of college,” Rice had dared to question King’s real contribution to the civil rights movement. “‘Dr. King,’” Rice’s father said, “ended the terror of living in the South.’”

What Rice’s father meant was by no means figurative: he was referring to the fact that Southern white people “occasionally went berserk, and grabbed random black people, usually men, and lynched them.” What King’s movement did was end that—something that usually gets glossed over when MLK Day rolls around: the fact that, in America, some people sometimes got randomly murdered with, essentially, the blessing of the state.

The connection between this state-sponsored terrorism and Woods’ career isn’t entirely psychologically implausible if Rice is correct about the effect the terror had. Remembering those days prior to the movement, Rice recalls how his father taught him “many, many humiliating practices in order to prevent the random, terroristic, berserk behavior of white people.” His point is that centuries of horror drilled in codes of behavior—ones that, in fact, it was precisely King’s mission to teach Americans (all of us) to unlearn.

Where the codes taught behavior designed to avoid what were, to be euphemistic, poor outcomes, King taught people to confront their fears. Be reprimanded, be fired, go to jail. Be beaten. And, if necessary, die, rather than continue to submit. The civil rights movement taught, as Rice says, “whatever you are most afraid of doing vis a vis white people, go do it.” Or, as we might say, just do it. King’s message was that African-Americans could only achieve their freedom themselves—which, at the end of the day, is just what the civil rights movement was.

Yet, while such an attitude is of course necessary to throw off the yoke of the Bull Connors of the world, it may also be an attitude that has outlived its moment. No one has ever questioned Woods’ work ethic, for example—but a fair question to ask is whether his ferocious ability to put in the time hasn’t actually hurt his career. Woods’ left knee, among other body parts, essentially shattered under all the pressure put on it over the years—pressure that included endless hours on the range perfecting each of the various swings he has had taught to him.

No golfer in history has had so many swing coaches, nor different swings: Tiger’s won majors with at least three different methods of hitting the golf ball, which might be some kind of record itself. Tiger’s continuing search for the perfect swing is a kind of metonym for his own “search for excellence,” as the management theory books put it—but might it also be a sign of an engine, with nothing else to work on, tearing itself apart? Rather than something praiseworthy, isn’t there something a bit much about tearing down a perfectly functioning machine in the hope of building something fractionally better?

In that sense, then, it’s possible to read the FedEx Cup not just as a lavish reward for the Best Non-Tiger Golfer, nor even as an anti-Tiger manifesto, but as an argument for a different set of values: the FedEx Cup celebrates the latecomer over the early-riser, the “brilliant” over the “hard-working.” It’s Romantic against Classical, Dionysian versus Apollonian. It rewards, almost literally, a certain lackadaisical, nonchalant approach: the kind of behavior that, one suspects, drives Woods himself apoplectic.

The kind of behavior, that is, that might lead a golfer to be late for an important tee time. Rory McIlroy, who ran so late for his singles match at the 2012 Ryder Cup that he had to be delivered to the course in a police car, may know something about that.