The Smell of Victory

To see what is in front of one’s nose needs a constant struggle.
George Orwell. “In Front of Your Nose”
    Tribune, 22 March 1946

 

Who says country clubs are irony-free? When I walked into Medinah Country Club’s caddie shack on the first day of the big member-guest tournament, the Medinah Classic, the television was playing Caddyshack, that vicious, class-based satire of country-club stupidity. These days, far from being patterned after Caddyshack’s Judge Smails (a pompous blowhard), most country club members are capable of reciting the lines of the movie nearly verbatim. Not only that—they’ve internalized the central message of the film, the one indicated by the “snobs against the slobs” tagline on the movie poster: the moral that, as another 1970s cinematic feat put it, the way to proceed through life is to “trust your feelings.” Like a lot of films of the 1970s—Animal House, written by the same team, is another example—Caddyshack’s basic idea is don’t trust rationality: i.e., “the Man.” Yet, as the phenomenon of country club members who’ve memorized Caddyshack demonstrates, that message has now become so utterly conventional that even the Man doesn’t trust the Man’s methods—which is how, just like O.J. Simpson’s jury, the contestants in this year’s Medinah Classic were prepared to ignore probabilistic evidence that somebody was getting away with murder.

That’s a pretty abrupt jump-cut in style, to be sure, particularly in regard to a sensitive subject like spousal abuse and murder. Yet to get caught up in the (admittedly horrific) details of the Simpson case is to miss the forest for the trees—at least according to a short 2010 piece in the New York Times entitled “Chances Are,” by the Schurman Professor of Applied Mathematics at Cornell University, Steven Strogatz.

The professor begins by observing that the prosecution spent the first ten days of the six-month-long trial establishing that O.J. Simpson abused his wife, Nicole. From there, prosecutors Marcia Clark and Christopher Darden introduced statistical evidence showing that abused women who are murdered are usually killed by their abusers. Thus, as Strogatz says, the “prosecution’s argument was that a pattern of spousal abuse reflected a motive to kill.” Unfortunately, however, the prosecution did not highlight a crucial point about its case: Nicole Brown Simpson was dead.

That, you might think, ought to be obvious in a murder trial, but because the prosecution did not underline the fact that Nicole was dead, the defense, led on this issue by famed trial lawyer Alan Dershowitz, could (and did) argue that “even if the allegations of domestic violence were true, they were irrelevant.” As Dershowitz would later write, the defense claimed that “an infinitesimal percentage—certainly fewer than 1 of 2,500—of men who slap or beat their domestic partners go on to murder them.” Ergo, even if battered women do tend to be murdered by their batterers, that didn’t mean that this battered woman (Nicole Brown Simpson) was murdered by her batterer, O.J. Simpson.

In a narrow sense, of course, Dershowitz’s claim is true: most abused women, like most women generally, are not murdered. So it is absolutely true that very, very few abusers are also murderers. But as Strogatz says, the defense’s argument was a very slippery one.

It’s true, in other words, that, as Strogatz says, “both sides were asking the jury to consider the probability that a man murdered his ex-wife, given that he previously battered her.” But to a mathematician like Strogatz, or to his statistician colleague I.J. Good—who first tackled this point publicly—that is the wrong question to ask.

“The real question,” Strogatz writes, is: “What’s the probability that a man murdered his ex-wife, given that he previously battered her and she was murdered?” That is the question that applied in the Simpson case, because Nicole Simpson had been murdered. Had the prosecution asked that question—the real one, not the poorly posed or outright misleading versions put by both sides at Simpson’s trial—the answer would have turned out to be about 90 percent.

To run through Strogatz’s math quickly (while still capturing the basic point): out of a sample of 100,000 battered American women, in any given year we could expect about 5 of them to be murdered by someone other than their batterers, while we could also expect about 40 of them to be murdered by their batterers. So of the roughly 45 battered women murdered each year per 100,000 battered women, about 90 percent are murdered by their batterers.
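Because the essay’s whole argument turns on this arithmetic, it may be worth laying it out mechanically. What follows is a minimal sketch in Python, a restatement of the round numbers above rather than Good’s actual derivation:

```python
# Back-of-the-envelope version of the Good/Strogatz calculation.
# The counts are the illustrative round numbers from the paragraph
# above: per 100,000 battered women, per year.

battered = 100_000
killed_by_others = 5     # murdered by someone other than the batterer
killed_by_batterer = 40  # murdered by their batterers

killed_total = killed_by_others + killed_by_batterer  # 45

# The defense's (misleading) number: P(batterer kills | battered)
p_defense = killed_by_batterer / battered
print(f"P(batterer kills | battered) = {p_defense:.2%}")  # 0.04%

# The real question: P(batterer did it | battered AND murdered)
p_real = killed_by_batterer / killed_total
print(f"P(batterer did it | battered and murdered) = {p_real:.0%}")  # ~89%
```

The two probabilities differ by a factor of more than two thousand, which is the whole of Dershowitz’s sleight of hand.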

In a very real sense, then, the prosecution lost its case against O.J. because it did not present its probabilistic evidence correctly. Interviewed years later for the PBS program Frontline, Robert Ball, a lawyer for Brenda Moran, one of the jurors on the Simpson case, said that according to his client the jury thought that for the prosecution “to place so much stock in the notion that because [O.J.] engaged in domestic violence that he must have killed her, created such a chasm in the logic [that] it cast doubt on the credibility of their case.” Or as one of the prosecutors, William Hodgman, said after the trial, the jury “didn’t understand why the prosecution spent all that time proving up the history of domestic violence,” because they “felt it had nothing to do with the murder case.” In that sense, Hodgman admitted, the prosecution never closed the loop in the jury’s understanding: it never made the point that Strogatz, and Good before him, say is crucial to understanding the probabilities here—the fact that Nicole Brown Simpson had been murdered.

I don’t know, of course, what role distrust of scientific or rational thought played in the jury’s ultimate decision—certainly, as has been discovered in recent years, crime laboratories have often been accused of “massaging” the evidence, particularly when it comes to African-American defendants. As Spencer Hsu reported in the Washington Post, for instance, just this April the “Justice Department and FBI … formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence.” Yet while it’s obviously true that bad scientific thought—i.e., “thought” that isn’t scientific at all—ought to be quashed, it’s also, I think, true that there is a pattern of distrust of that kind of thinking that is not limited to jurors in Los Angeles County, as I discovered this weekend at the Medinah Classic.

The Classic is a member-guest tournament: a golf tournament of two-man teams, each made up of a country club member and his guest. Member-guests are held by country clubs around the world and played according to differing formats, but they usually depend on each golfer’s handicap index: the number the United States Golf Association assigns a golfer after he pays a fee and enters his scores into the USGA’s system. (It’s similar to the way that carrying weights allows horses of different sizes to race each other, or the way weight classes make boxing or wrestling matches fair.) Medinah’s member-guest is, nationally, one of the biggest because of the number of participants: around 300 golfers every year, divided into three flights according to handicap index (i.e., ability). Since Medinah has three golf courses, it can easily accommodate so many players—but what it can’t do is adequately police the tournament’s entrants, as the golfers I caddied for discovered.
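For those who have never dealt with one, it may help to sketch how that number gets produced. What follows is a simplified version of the USGA formula as I understand it; the full system has more wrinkles (adjusted gross scores, tournament score caps, and so on), and every score below is hypothetical:

```python
# Simplified sketch of the (pre-2020) USGA handicap index formula.
# All scores below are hypothetical.

def differential(gross: float, course_rating: float, slope: int) -> float:
    """One round's handicap differential, normalized by course difficulty."""
    return (gross - course_rating) * 113 / slope

def handicap_index(diffs: list) -> float:
    """Average the best 10 of the last 20 differentials, times 0.96."""
    best = sorted(diffs[-20:])[:10]
    return round(sum(best) / len(best) * 0.96, 1)

# A player who usually shoots in the mid-90s on a course of average
# difficulty (rating 71.5, slope 130) comes out around a 19 index:
scores = [95, 98, 92, 101, 94, 97, 96, 99, 93, 100,
          95, 98, 97, 96, 94, 102, 95, 99, 93, 97]
print(handicap_index([differential(s, 71.5, 130) for s in scores]))  # 19.0
```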

Our tournament began with the member shooting an amazing 30, after handicap adjustment, on the front nine of Medinah’s Course Three, the site of three U.S. Opens, two PGA Championships, numerous Western Opens (back when they were called Western Opens) and a Ryder Cup. A score of 30 for nine holes on any golf course is pretty strong; on a brute like that course, and in the worst of the Classic’s three flights, it is stronger still. I said as much to the golfers I was caddieing for after our opening round. They were kind of down about the day’s ending—especially the guest, who had scored an eight on our last hole of the day. Despite that, I told my guys that on the strength of the member’s opening 30, if we weren’t outright winning the thing we were top three. As it turned out, I was correct—but despite the amazing showing we had on the tournament’s first day, we would soon discover that there was no way we could catch the leading team.

In a handicapped tournament like the Classic, what matters isn’t so much what any golfer scores as what he scores in relation to his handicap index. Thus the member half of our member-guest team hadn’t actually shot a 30 on the front side of Medinah’s Course 3—which would certainly have been a record for an amateur tournament, and I think a record for any tournament at Medinah ever—but rather a 30 after subtracting the shots his handicap allowed. His score, to use the parlance, wasn’t gross but net: my golfer had shot an effective six under par according to the tournament rules.

Naturally, such an amazing score might raise questions, particularly when it’s shot in the flight reserved for the worst players. Yet my player has a ready explanation for how he can shoot a low gross number (in the mid-40s) and yet still have a legitimate handicap: he has a legitimate handicap—a congenital deformity in one of his ankles. The deformity is not enough to prevent him from playing, but as he plays—and his pain medications wear off—he usually tires, which is to say that he can very often shoot respectable scores on the first nine holes and horrific scores on the second nine. His actual handicap, in other words, causes his golf handicap index to be slightly askew from reality.

Thus he is like the legendary Sir Gawain, who according to Arthurian legend tripled his strength at noon but faded as the sun set—a situation the handicap system is ill-designed to handle. Handicap indexes presume roughly the same ability at the beginning of a round as at the end, so in this Medinah member’s case his index understates his ability at the beginning of his round while wildly overstating it at the end. In a sense, then, it could be complained that this member benefits unfairly from the handicap system—unless you happen to consider that the man walks in nearly constant pain every day of his life. If that’s “gaming the system,” it’s a hell of a way to do it: acquiring a literal handicap to pad your golf handicap would obviously be absurd.

Still, the very question suggests the great danger of handicapping systems, which is one reason people have investigated whether there are ways to tell that someone is taking advantage of the system—without using telepathy or some other kind of magic to divine the golfer’s real intent. The most important of those investigators is Dean L. Knuth, the former Senior Director of Handicapping for the United States Golf Association, a man whose nickname is the “Pope of Slope.” In that capacity Mr. Knuth developed the modern handicapping system—and a way to calculate the odds of a player of a given handicap shooting a particular score.

In this case, my information is that the team that ended up winning our flight—and won the first round—had a guest player who represented himself as having a handicap index of 23 when the tournament began. For those who aren’t aware, a 23 is a player who does not expect to break ninety during a round of golf, when the usual par for most courses is 72. (In other words, a 23 isn’t a very good player.) Yet this same golfer shot a gross 79 during his second round, for what would have been a net 56: a ridiculous number.
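To see just how lopsided those numbers are, it helps to run the subtraction that Knuth’s tables formalize. Here is a hedged sketch: the course rating below is a hypothetical figure, and the odds quoted in the final comment restate this essay’s gloss of Knuth’s published tables, not the tables themselves.

```python
# What the guest claimed, versus what he shot (from the paragraph above).
claimed_index = 23
gross_score = 79

# Net score under the tournament's handicapping: 79 - 23 = 56.
net_score = gross_score - claimed_index
print(net_score)  # 56

# Knuth's screening statistic is (roughly) the "net differential":
# how many strokes better than his handicap the player performed,
# measured against the course rating. The rating here is hypothetical.
course_rating = 76.0  # a plausible figure for a brute like Course 3
net_differential = (gross_score - course_rating) - claimed_index
print(net_differential)  # -20.0: twenty strokes better than expected

# Per Knuth's published probability tables, a net differential anywhere
# near this extreme in tournament play has odds on the order of tens
# of thousands to one.
```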

Knuth’s calculations reflect that: they put the odds of someone shooting a score so far below his handicap on the order of several tens of thousands to one, especially under tournament conditions. In other words, while my player’s handicap wasn’t a straightforward depiction of his real ability, it did adequately capture his total worth as a golfer. This other player’s handicap, though, appeared to many—including one of the assistant professionals, who went out to watch him play—to be highly suspect.

That assistant professional, a five handicap himself, said that after watching this guest play he would hesitate to play him straight up, much less give the fellow ten or more shots: the man not only hit his shots crisply, but attempted shots that even professionals fear, like trying to get a ball to stop on a downslope. So for the gentleman to claim to be a 23 handicap seemed, to this assistant professional, incredibly, monumentally improbable. Observation, then, seems to confirm what Dean Knuth’s probability tables would suggest: the man was playing with an improper handicap.

What happened as the tournament went along also suggests that at least Medinah’s head professional was aware the man’s reported handicap index wasn’t legitimate: after the first round, in which that player shot a score as suspect as his second-round 79 (I couldn’t discover precisely what it was), his handicap was adjusted downwards—and after that second-round 79, more shots were knocked off his initial index. Yet although there was a lot of complaining on the part of fellow competitors, no one was willing to take any kind of serious action.

Presumably, this inaction rested on a theory similar to the legal system’s presumption of innocence: maybe the man really had “found his swing” or “practiced really hard” or gotten a particularly good lesson just before arriving at Medinah’s gates. But to my mind, such a presumption ignores, as the O.J. jury did, the really salient issue: in the Simpson case, that Nicole was dead; in the Classic, that this team was leading the tournament. That was the crucial piece of data: it wasn’t just that this team could be leading the tournament, it was that they were leading the tournament—just as, while you couldn’t use statistics to predict whether O.J. Simpson would murder his ex-wife Nicole, you certainly can use statistics to say that O.J. probably murdered Nicole once Nicole was murdered.

The fact, in other words, that this team of golfers was winning the tournament was itself evidence they were cheating—why would anyone cheat if they weren’t going to win as a result? That doesn’t mean, to be sure, that winning constitutes conclusive evidence of fraud—just as probabilistic evidence doesn’t mean that O.J. must have killed Nicole—but it does indicate the need for further investigation, and suggests what presumption an investigation ought to pursue. Particularly given the size of the lead: by the end of the second day, that team was ahead of its nearest competitors by more than twenty shots.

Somehow, however, it seems that Americans have lost the ability to see the obvious. Perhaps that’s through the influence of films from the 1970s like Caddyshack or Star Wars: both, interestingly, feature scenes in which one of the good guys puts on a blindfold in order to “get in touch” with some cosmic quality that lies far outside the visible spectrum. (The original Caddyshack script actually cites the Star Wars scene.) But the films themselves are not solely to blame: as Thomas Frank says in his book The Conquest of Cool, one of America’s outstanding myths represents the world as a conflict between all that is “tepid, mechanical, and uniform” and the possibility of a “joyous and even a glorious cultural flowering.” In the story told by cultural products like Caddyshack, it’s by casting aside rational methods—like Luke Skywalker casting aside his targeting computer in the trench of the Death Star—that we are all going to be saved. (Or, as Rodney Dangerfield’s character puts it at the end of Caddyshack, “We’re all going to get laid!”) That, I suppose, might be true—but perhaps not for the reasons advertised.

After all, once we’ve put on the blindfold, how can we be expected to see?


A Momentary Lapse

 

The sweets we wish for turn to loathed sours
Even in the moment that we call them ours.
The Rape of Lucrece, by William Shakespeare

“I think caddies are important to performance,” wrote ESPN’s Jason Sobel late Friday night. “But Reed/DJ each put a family member on bag last year with no experience. Didn’t skip a beat.” To me, Sobel’s tweet appeared to question the value of caddies, and so I wrote to Mr. Sobel and put it to him that sure, F. Scott Fitzgerald could write before he met Maxwell Perkins—but without Perkins on Fitzgerald’s bag, no Gatsby. Still, I don’t mention the point simply to crow about what happened: how Dustin Johnson missed a putt to tie Jordan Spieth in regulation, a putt that a professional caddie arguably would have kept Johnson from hitting so quickly. What’s important about Spieth’s victory is that it might finally have killed the idea of “staying in the moment”: an un-American idea far too prevalent for the past two decades or more, not only in golf but in American life.

Anyway, the idea has been around a while. “Staying in the moment,” as so much in golf does, likely traces back at least as far as Tiger Woods’ victory at Augusta National in 1997. Sportswriters then liked to make a big deal out of Tiger’s Thai heritage: supposedly his mother’s people, with their Buddhist religion, helped Tiger to focus. It was a thesis that to my mind was more than a little racially suspect—it seemed to me that Woods won a lot of tournaments because he hit the ball farther than anyone else at the time, and matched that power with an amazing short game. That, however, was the story that got retailed at the time.

Back in 2000, for instance, Robert Wright of the online magazine Slate was peddling what he called the “New Age Theory of Golf.” “To be a great golfer,” Wright said, “you have to do what some Eastern religions stress—live in the present and free yourself of aspiration and anxiety.” “You can’t be angry over a previous error or worried about repeating it,” Wright went on to say. You are just supposed to “move forward”—and, you know, forget about the past. Or to put it another way, success is determined by how much you can ignore reality.

Now, some might say that it was precisely this attitude that won the U.S. Open for Team Jordan Spieth. “I always try to stay in the present,” Spieth’s caddie Michael Greller told The Des Moines Register in 2014, when Greller and Spieth returned to Iowa to defend the title the duo had won in 2013. But a close examination of their behavior on the course, by Shane Ryan of Golf Digest, calls that interpretation into question.

Spieth, Ryan writes, “kept up a neurotic monologue with Michael Greller all day, constantly seeking and receiving reassurance about the wind, the terrain, the distance, the break, and god knows what else.” To my mind, this hardly counts as the usual view of “staying in the present.” The usual view, I think, better describes what was going on with their opponents.

During the course of his round, Ryan reports, Johnson “rarely spoke with his brother and caddie Austin.” Johnson’s relative silence appears to me much closer to Wright’s passive, “New Age,” reality-ignoring ideal—far closer, anyway, than the constant squawking that was going on in Spieth’s camp.

It’s a difference, I realize, that is easy to underestimate—but a crucial one nonetheless. Just how significant it is might best be revealed by an anecdote the writer Gary Brecher tells about the aftermath of the second Iraq War: about being in the office with a higher-ranking woman who declared her support for George Bush’s war. When Brecher said to her that perhaps the rumors of Saddam’s weapons could be exaggerated—well, let’s read Brecher’s description:

She just stared at me a second—I’ve seen this a lot from Americans who outrank me; they never argue with you, they don’t do arguments, they just wait for you to finish and then repeat what they said in the beginning—she said, “I believe there are WMDs.”

It’s a stunning description. Not only does it sum up what the Bush Administration did in the run-up to the Iraq War, but it also captures a fact of life around workplaces and virtually everywhere else in the United States these days: two Americans, especially ones of differing classes, rarely talk to one another anymore. They just wait each other out, passively.

Americans, however, aren’t supposed to think of themselves as passive—at least, they didn’t use to think of themselves that way. The English writer George Orwell described the American attitude in an essay about the quintessentially American author Mark Twain: a man who “had his youth and early manhood in the golden age of America … when wealth and opportunity seemed limitless, and human beings felt free, indeed were free, as they had never been before and may not be again for centuries.” In those days, Orwell says, “at least it was NOT the case that a man’s destiny was settled from his birth,” and if “you disliked your job you simply hit the boss in the eye and moved further west.” Those older Americans did not simply accept what happened to them, the way the doctrine of “staying in the present” teaches.

If so, then perhaps Spieth and Greller, despite what they say, are bringing back an old American custom by killing an alien one. In a nation where 400 Americans are worth more than the poorest 150 million Americans, as I learned Sunday night after the Open by watching Robert Reich’s film, Inequality for All, it may not be a moment too soon.

All Our Yesterdays

And all our yesterdays have lighted fools
The way to dusty death.
The Tragedy of Macbeth, Act V, sc. 5

Right now the best writer on the Internet is one “Gary Brecher,” who composes a series called “The War Nerd.” The column began in the Russian-based, English-language magazine the eXile, and it’s about what you might expect: the subject of war, from the point of view of somebody who’s read a lot about it. That might lead some readers to dismiss what “Brecher” has to say—except that not only has he read a lot, he has also spent a number of years teaching English in the Middle East, which is like writing about basketball from the vantage point of 64th and Stony Island Avenue. So it’s writing that’s sensible: like, for example, his piece about the Siege of Dammaj. What, you didn’t know about the Siege of Dammaj? Don’t worry, it’s not that important: it merely explains why Al-Qaeda exists, and likely also why you haven’t gotten a raise in forever.

Dammaj is a small town, even by Yemeni standards, in northern Yemen, but it became important to many more people than the neighborhood farmers when, in the late 1970s, a local man named Muqbil al Wadi returned from his religious studies across the border in Saudi Arabia and opened a school to teach the Wahhabi (they’d prefer the term “Salafist,” or “pure”) brand of Sunni Islam taught in Ibn Saud’s kingdom. But Wadi was not exactly a simple educator. He’d split Saudi Arabia because he was hot: he’d been indicted and jailed in connection with the Mahdi Revolt of 1979, when an armed band declared their leader the Messiah and grabbed the Grand Mosque of Mecca, then held it for two weeks until the Saudis organized the inevitable counterraid, which killed at least a few hundred hostages and militants.

Wadi, however, had the right connections, and soon enough had his school, the Dar al-Hadith, up and running, appealing to what Brecher calls “a cast of thousands of cheery kids from Minneapolis and Jakarta and other comfy, wealthy places.” Soon, as Theo Padnos put it in a story for NPR, “rumors circulated in the mosques across the West: In a village in Northern Yemen, Islam was as it had been in the time of the Prophet—pure, uncompromising, and gathering strength.” Wadi appreciated the entrepreneurial opportunities of education far ahead of the curve: he was selling “authentic learning experiences” long before education experts in the West were writing books about them.

What presumably wasn’t part of Wadi’s pitch was something that American universities, or the Peace Corps, also often leave out of their descriptions of “year abroad” programs: the fact that Yemen is a nation divided, as a lot of nations are, between two religious communities that more or less hate each other. Nor, likely enough, would any hypothetical brochure for the Dar al-Hadith have explained that while the sort of Salafist, Sunni Islam practiced at the school was also the religion of those in charge of the government in Sanaa, it most assuredly was not the kind practiced in Dammaj, which sits smack in the middle of the territory of Yemen’s Shia Muslims.

As later events would demonstrate, Dammaj’s inhabitants did not much care for being told they were infidels, as the “Salafist” version of Islam holds them to be. Still, the scions of wealthy families from across the Muslim world whom Wadi attracted to their town spent a lot of jack down in the local souk, and as Brecher observes, “Many a town has put up with the local students because it loved their money more than it hated their guts.” For the most part, then, the locals were content to take the cash and ignore the sermons—sensibly enough—while the students, it seems (despite being, you know, students), were not interested enough in their surroundings to know that the guy who sold them their smokes down the way hated them for both religious and economic reasons. All in all, a typical town-and-gown arrangement.

Yet that isn’t the only way Dammaj was similar to the typical sleepy village in Ohio or Vermont that’s home to a liberal arts college. One sign of the similarity is a blog Brecher cites, called “Fear the Dunya,” written by a Salafist student in Dammaj calling himself Hassan as Somali. “Dunya” is Arabic for “the physical world”—or, in other words, “reality.” And there’s something curious about Hassan’s blog, Brecher notices: so long as “Hassan’s blog deals with practical matters, he writes good, clear American English.” (Sample sentence: “The rooms are made of mud bricks and most of them have small bathrooms.”) But “as soon as Hassan starts talking religion … all that clarity and honesty vanishes.” (Sample sentence: “Before the da’wah of the Sheikh Yemen was plagued by tashayyu’ in the north and tasawwuf in the south and hizbiyyah.” These terms all refer to what a Salafist would regard as various heresies.)

In other words, there isn’t a hell of a lot of difference between the sentences written by these Muslim students from the back of beyond and sentences like this one, cited by the philosopher Martha Nussbaum in an essay called “The Professor of Parody”: “The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure …” That’s not even the whole sentence, by the bye—a sentence which won its writer, Professor Judith Butler, first prize in an annual Bad Writing Contest.

To Nussbaum, what’s really significant about this style of writing is that it demonstrates that feminist “thinkers of the new symbolic type would appear to believe that the way to do feminist politics is to use words in a subversive way, in academic publications of lofty obscurity and disdainful abstractness.” People of this sort, Nussbaum says, “have been influenced by the extremely French idea that one does politics by speaking seditiously, and that this is a significant type of political action.” Such a turn signals, Nussbaum says, a “virtually complete turning away from the material side of life, toward a type of verbal and symbolic politics that makes only the flimsiest of connections with the real situation of real women.” But what’s interesting about considering Nussbaum’s argument in terms of the story of Dammaj is that what she sees in the young feminist scholars she engages with doesn’t actually seem tied to feminism, or gender, or even the West, at all: if young American women are attracted to “French postmodernist thought,” young Muslim men are attracted to schools like Dammaj—and, as Brecher notes elsewhere, to more sinister places.

The ones who went to Dammaj, at least the Westerners Theo Padnos met, were mostly “refugees from the urban ills of home”: “They’ve grown up in troubled neighborhoods,” Padnos writes in his story “A Militia, a Madrassa, and the Story Behind a Siege in Yemen,” “haven’t always succeeded in school, have lived through substance abuse issues, jail sentences, and have usually drifted a bit from city to city before coming to Yemen.” But what really united them, Padnos thought as he interviewed students at Dammaj, was one particular element:

Over time [Padnos says] … I started to notice that in the background of these discussions there lurked an especially troublesome, impossible-to-ignore force. The name of this force was “dad.” The father-son argument fell out along these lines: The dads wanted the sons to get jobs, to respect authority, to give up the ridiculous pretense of Islamic scholarship, and to stop dressing like terrorists. If the sons couldn’t reconcile themselves to the West, they could get the hell out of the family. The sons told the dads to study the Koran.

In all of these senses, not excluding violence, the students of Dammaj much resembled another group of young Islamic men who turned away from the West and all it represented—save for one difference, which the students of Dammaj shared with the young Butler-influenced feminists described by Nussbaum. Over the course of Padnos’ stay in Dammaj, he gradually found that “the goal of a religious education in Yemen” was “to disassociate oneself from the things of this life”—just as the women Nussbaum writes about are uninterested in political change of the direct sort. That was a belief shared, and yet rejected, by another group of young Muslims, Brecher points out.

Whereas, in other words, according to Nussbaum the “new feminism instructs its members there is little room for large-scale social change,” and according to Padnos the students of Dammaj felt that they “were surrounded by a monstrous Other,” this third group of young Muslims certainly believed in their capacity to cause widespread change—although they shared a great deal with the other two. Look back “with a good cold eye at what Al Qaeda was,” says Brecher, “and you see that they only recruited well in one demographic: Middle/Upper-Class, Not-That-Bright, Middle Eastern Surplus Young Men.” There are, as he says, “a lot of those around, thanks to oil money and high birth rates, and they bounce … from prostitutes and cognac in Paris to cults in Denmark to one after another school.” That’s who Mohammed Atta and his buddies were, in case you weren’t aware: Atta, for instance, studied architecture and engineering in Cairo and then in Germany; Ziad Jarrah, who piloted United Flight 93 and came from a wealthy Lebanese family, studied aerospace engineering in Hamburg.

Yet as Elena Lappin wrote in the magazine Prospect a year after the 11 September attacks, while Atta was an “upwardly-mobile young man, with a technical specialisation,” he had “no place to go” because his “personal advancement in Egypt was blocked due to his family not having the right connections.” In Egypt, and in many other places in the Middle East, “jobs, housing and decent wages were scarce,” Lappin says, and she cites Max Rodenbeck’s observation, from his 2000 book Cairo: The City Victorious, that by the mid-1980s “Even sex was effectively denied many, since Egypt’s strict conventions demanded marriage, and marriage required money for dowries and furnishings and apartments.” The suggestion, in other words, is that September 11th happened because too many Arabic speakers couldn’t get laid—which might sound amusing, of course, until one considers just how serious a matter that really is.

What, after all, does the sort of academic study advocated by people like Judith Butler encourage, as Nussbaum says, but a denial of the bodily? One of Butler’s “strong claim[s],” Nussbaum says, “is that the body itself … is also a social construction,” and whereas feminists of an earlier age had their “eyes always on the material conditions of real women,” what characterizes the new generation of feminist scholars “is the virtually complete turning away from the material side of life, toward a type of verbal and symbolic politics that makes only the flimsiest of connections with the real situation of real women.” Not only that, but the “new feminism instructs its members there is little room for large-scale social change.” At whom else, Nussbaum wonders, is this pedagogy aimed but “successful middle class people” who would like to “do something bold”—perhaps like a number of young men who met in Hamburg around the same time—but “without compromising their security?” In other words, isn’t Butler’s audience simply Middle/Upper-Class, Not-That-Bright, American Surplus Young Women? That is, women, and other supposed intellectuals, who are specifically uninterested in improving the material lives of other women, or anyone else, in the United States. Which, just maybe, is why wages have pretty much been stagnant since 1974 or thereabouts.

At some point, one presumes, something will have to be done about the continuing production of these sorts of people; that time, it seems, is not yet. However that debate ultimately turns out, it’s a choice the students of Dammaj no longer have. Just before the end of the Siege, Brecher says, the students were outraged to discover that the leader of the Yemeni government, a Sunni Muslim, “finally cut a deal to try to save his doomed regime” and sold Dammaj out to his Shia opponents. “It was as if the sheer power (and money, and guns, of course) of the Sunni revival held Dar al Hadith in place against all logic for a third of a century,” Brecher writes, “until the start of the pushback we’re seeing now.” The students, it turned out, were right to fear reality: playing dice with nature never, despite what the professors or imams or the businessmen say, ends well.

Fine Points

 

Whenever asked a question, [John Lewis] ignored the fine points of whatever theory was being put forward and said simply, “We’re gonna march tonight.”
—Taylor Branch.
   Parting the Waters: America in the King Years Vol. 1 

 

 

“Is this how you build a mass movement?” asked social critic Thomas Frank in response to the Occupy Wall Street movement: “By persistently choosing the opposite of plain speech?” To many in the American academy, the debate is over—and plain speech lost. More than fifteen years ago, articles like philosopher Martha Nussbaum’s 1999 criticism of Professor Judith Butler, “The Professor of Parody,” and political scientist James Miller’s late-1999 piece “Is Bad Writing Necessary?” got published—and both sank like pianos. Since then it has seemed settled that (as Nussbaum wrote at the time) the way “to do … politics is to use words in a subversive way.” Yet at a minimum this pedagogy diverts attention from, as Nussbaum says, “the material condition of others”—and at worst, as Professor Walter Benn Michaels suggests, it turns the academy into “the human resources department of the right, concerned that the women [and other minorities] of the upper middle class have the same privileges as the men.” Supposing, then, that bad writers are not simply playing their part in a class war, what is their intention? I’d suggest that subversive writing is best understood as a parody of a tactic used, but not invented, by the civil rights movement: packing the jails.

“If the officials threaten to arrest us for standing up for our rights,” Martin Luther King, Jr. said in a January 1960 speech in Durham, North Carolina, “we must answer by saying that we are willing and prepared to fill up the jails of the South.” King’s speech spoke directly to the movement’s most pressing problem: bailing out protestors cost money. In response, Thomas Gaither, a field secretary for the Congress of Racial Equality (CORE), devised a solution he called “Jail, No Bail.” The historian Taylor Branch explained the concept in Parting the Waters: America in the King Years 1954-63: the “obvious advantage of ‘jail, no bail’ was that it reversed the financial burden of protest, costing the demonstrators no cash while obligating the white authorities to pay for jail space and food.” All protestors had to do was get arrested and serve the time—and thereby cost the state their room and board.

Yet Gaither did not invent the strategy. “Packing the jails” began, so far as I can tell, in October of 1909; so reports the Minnesotan Harvey O’Connor in his 1964 autobiography Revolution in Seattle: A Memoir. All that summer, the Industrial Workers of the World (the “Wobblies”) had been engaged in a struggle against “job sharks”: companies that claimed to procure jobs for their clients after the payment of a fee—and then failed to deliver. (“It was customary,” O’Connor wrote, “for the employment agencies … to promote a rapid turnover”: the companies would take the money and either not produce the job, or the company that “hired” the newly-employed would fire them shortly afterwards.) In the summer of 1909 those companies succeeded in getting public assemblies and speaking by the Wobblies banned, and legal challenges proved impossible. So in October of that year the Wobblies “sent out a call” in the labor organization’s newspaper, the Industrial Worker: “Wanted: Men To Fill The Jails of Spokane.”

Five days later the Wobblies held a “Free Speech Day” rally and managed to get 103 men arrested. By “the end of November 500 Wobblies were in jail.” Through the “get arrested” strategy, the laborers filled the city’s jail “to bursting and then a school was used for the overflow, and when that filled up the Army obligingly placed a barracks at the city’s command.” The Wobblies’ strategy was working: the “jail expenses threatened to bankrupt the treasuries of cities even as large as Spokane.” As the American writer and teacher Archie Binns put the same point in 1942: it “was costing thousands of dollars every week to feed” the prisoners, and the city was becoming “one big jail.” In this way the protestors threatened to “eat the capitalistic city out of house and home”—and so the “city fathers” of Spokane backed down, instituting a permitting system for public marches and assemblies. “Packing the jails” won.

What, however, has this history to do with the dispute between plain-speakers and bad writers? In the first place, it demonstrates how our present-day academy would much rather talk about Martin Luther King, Jr. and CORE than Harvey O’Connor and the Wobblies. Writing ruefully about left-wing professors like himself, Walter Benn Michaels observes, “We would much rather get rid of racism than get rid of poverty”; elsewhere he says, “American liberals … carry on about racism and sexism in order to avoid doing so about capitalism.” Despite the fact that, historically, the civil rights movement borrowed a lot from the labor movement, today’s left doesn’t have much to say about that—nor much about today’s inequality. So connecting the tactics of the Wobblies to those of the civil rights movement matters because it demonstrates continuity where today’s academy, just as much as any billionaire, wants to see a sudden break.

That isn’t the only point of bringing up the “packing the jails” tactic, however—the real point is that writers like Butler are making use of a version of it without publicly acknowledging as much. As laid out by Nussbaum and others, the unsaid argument or theory or idea (whatever name you’d have for it) behind “bad” writing is a version of “packing the jails.” To be plain: the notion that by filling enough academic seats (with the right sort of person), political change will somehow automatically follow, through a kind of osmosis.

Admittedly, no search of the writings of America’s professors, Judith Butler or otherwise, will discover a “smoking gun” regarding that idea—if there is one, presumably it’s buried in an email, or in a footnote in a back issue of Diacritics from 1978. The thesis can only be discovered in the nods and understandings of the “professionals.” On what warrant, then, can I claim that it is their theory? If that’s the plan, how do I know?

My warrant comes from a man who knew, as Garry Wills of Northwestern says, something about “the plain style”: Abraham Lincoln. To Lincoln, the only possible method of interpretation is a judgment of intent: as he said in his famous speech at Springfield in 1858, “when we see a lot of framed timbers, different portions of which we know have been gotten out at different times and places by different workmen,” and “we see these timbers joined together, and see they exactly make the frame of a house or a mill,” why, “in such a case we find it impossible not to believe” that everyone involved “all understood each other from the beginning.” Or as Walter Benn Michaels has put the same point: “you can’t do textual interpretation without some appeal to authorial intention.” In other words, when we see a lot of people acting in similar ways, we should be able to make a guess about what they’re trying to do.

In the case of Butlerian feminists—and, presumably, other kinds of bad writers—bad writing allows them to “do politics in [the] safety of their campuses,” as Nussbaum says, by “making subversive gestures through speech.” Instead of “packing the jails,” this pedagogy, this bad writing, teaches “packing the academy”: the theory presumably being that, just as Spokane could only jail so many people, the academy can only hold so many professors. (Itself an issue, because there are a lot fewer professorships available these days, and there are liable to be fewer still.) Since, as Abraham Lincoln said of what he saw in the late 1850s, we can only make a guess—but we must make a guess—about what those intentions are, I’d hazard that my guess is more or less what these bad writers have in mind.

Unfortunately, in the hands of Butler and others, bad writing is only a parody—it mimics the act of going to jail while eliding the very real differences between that act and attempting to become the, say, Coca-Cola Professor of Rhetoric at Wherever State. A black person willing to go to jail in the South in 1960 was a person with a great deal of courage—and still would be today. But it’s also true that it’s unlikely the courageous civil rights volunteers would have conceived of, much less carried out, the attempt to “pack the jails” without the example of the Wobblies before them—just as it might be argued that, without the sense of being of the same race and gender as their oppressors, the Wobblies might not have had the courage to pack the jails of Spokane. So it certainly could be argued that the work of the “bad writers” is precisely to make those connections—and so create the preconditions for similar movements in the future.

Yet, as George Orwell might have asked, “where’s the omelette?” Where are the people in jail—and where are the decent pay and equal rights that might follow them? Butler and other “radical” critics produce neither: I am not reliably informed of Judith Butler’s arrest record, but I’d suspect it isn’t much. So when Nussbaum observed that Butler’s pedagogy “instructs people that they can, right now, without compromising their security, do something bold” [emp. added], she wasn’t being entirely snide—and her words look increasingly prescient now. That’s what Nussbaum means when she says that “Butlerian feminism is in many ways easier than the old feminism”: it is a path that shows middle-class white people, women especially, just how they can “dissent” without giving up their status or power. Nussbaum thus implies that feminism, or any other kind of “leftism,” practiced along Butler’s lines is, quite literally, physically cowardly—and, perhaps more importantly, her observation suggests just why the “left,” such as it is, is losing.

For surely the “Left” is losing: as many, many people besides Walter Benn Michaels have written, economic inequality has risen, and is rising, even as the sentences and jargon of today’s academics have become more complex—and even as the academy’s own power dissolves slowly into a mire of adjunct professorships and cut-rate labor policies. Emmanuel Saez of the University of California says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928,” and the Nobel Prize winner Paul Krugman says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” We are witnessing the rise of plutocrats on a scale not seen since at least the fall of the Bourbons—or perhaps even the Antonines.

That is not to suggest, to be sure, that individual “bad writers” are or are not cowards: merely to be a black person or a woman requires levels of courage many people will never be aware of in their lifetimes. Yet Walter Benn Michaels is surely correct when he says that, as things now stand, the academic left in the United States today is largely “a police force for, [rather] than an alternative to, the right,” insofar as it “would much rather get rid of racism [or sexism] than get rid of poverty.” Fighting “power” by means of a program of bad writing, rather than good writing—writing designed to appeal to great numbers of people—is so obviously stupid it could only have been invented by smart people.

The objection is that giving up the program of Butlerian bad writing requires giving up the program of “liberation” her prose suggests: what Nussbaum calls Butler’s “radical libertarian” dream of the “sadomasochistic rituals of parody.” Yet as Thomas Frank has suggested, it’s just that kind of libertarian dream that led the United States into this mess in the first place: America’s recent troubles, Frank says, have resulted from “the political power of money”—a political power achieved courtesy of “a philosophy of liberation as anarchic in its rhetoric as Occupy [Wall Street] was in reality” [emp. Frank’s]. By rejecting that dream, American academics might obtain “food, schools, votes” and (possibly) less rape and violence for women and men alike. But how?

Well, I have a few ideas—but you’d have to read some plain language.

Outrageous Fashion

 

In difficult times, fashion is always outrageous.
—Elsa Schiaparelli.


The kid “wearing a bolo tie, a regular tie, Native American beads, a suit coat worn under a flannel shirt, and socks but no shoes,” as Mother Jones described one protestor’s outfit, wasn’t the worst of Occupy Wall Street’s stylistic offenses against civilization—for Thomas Frank, founder of the small magazine The Baffler, the stylistic issues of the protests went much deeper than sartorial choice. To Frank, the real crime of the movement was that it used “high-powered academic disputation as a model for social protest”: Occupy, he argues, chose “elevated jargonese” over actual achievements. To some, such criticisms might sound ridiculous—how can anyone dispute matters of style when serious issues are at stake? But in fact matters of style are the only thing at stake: the stylistic choices of Occupy, and of movements like it, ultimately fuel precisely the kinds of exploitation Occupy was supposedly meant to protest. There are real goals—chief among them a reorganization of the American government on more democratic lines—that an American left could conceivably achieve in the United States today. If only, that is, these movements were willing to sacrifice their style.

To say such things is, of course, super-uncool. In order to contrast itself with such unhipness, the style of Occupy takes two forms, the first being the kind of academese Frank castigates. Here is one sentence Frank cites, from an Occupier objecting to someone else’s complaint that none of the Occupiers would claim to speak for the whole movement: “I would agree, an individualism that our society has definitely had inscribed upon it and continues to inscribe upon itself, ‘I can only speak for myself,’ the ‘only’ is operative there, and of course these spaces are being opened up …” And so on. It should be recognized that this is actually among the more understandable sentences the Occupiers produced.

The other rhetorical style practiced by the Occupiers is a virtually sub-verbal kind of soup. Here, for instance, is the first sentence of an article entitled “How Occupy Wall Street Began,” on the website occupytheory.org: “One of the protests that have been practiced in different countries is the Occupy Wall Street Movement.” This is not, as any competent speaker would recognize, good English, much less effective writing designed to persuade a national audience. The counterargument, of course, is that it gives the writer—who is not named—something to do, and appeals to other sub-literates. But while those goals are perhaps worthy enough, they are incredibly myopic and hyperopic at once.

They are nearsighted in the sense that, while creating jobs is nearly always laudable, one might imagine that telling the story of the movement’s origins is a task important enough to delegate to someone capable of telling it. They are farsighted—in this case, not a compliment—in the sense that, while being “inclusive” is, to be sure, important, people who are at best para-literate are not likely to be people in positions of authority, and hence not capable of making decisions in the here and now. Perhaps someday, many years from now, such things might matter. But as the economist John Maynard Keynes remarked, in the long run we are all dead—which is to say that none of this would matter had Occupy achieved any results.

“There are no tangible results from the Occupy movement,” the “social entrepreneur” Tom Watson ruefully concluded in Forbes a year after the end of the Zuccotti Park occupation—no legislation, no new leaders, no new national organization. By contrast, Frank notes that in the same timespan the Tea Party—often thought of as a populist movement like Occupy, only with opposite goals—managed to elect a majority in Congress, and even got Paul Ryan, the archconservative congressman who seems to misunderstand basic mathematics, onto the 2012 presidential ticket. The Tea Party, in other words, chose to make real inroads into power—a point Occupiers might counter by observing that the Tea Party is an organization funded, at least in part, by wealthy interests. It never seems to occur to the Occupiers that such interests fund those efforts precisely because the Tea Party serves their interests—that is, because the Tea Party takes the clear position that funding A will produce political result B.

For the Occupiers and their sympathizers, however, “the ‘changes’ that Occupy failed to secure” are “not really part of the story,” says Frank. “What matters” to the Occupiers, he writes, “is the carnival—all the democratic and nonhierarchical things that went on in Zuccotti Park.” Should you object that—shockingly—sitting in a park for two months does not appear to have done anything tangible for anybody, you’ve just exposed yourself as a part of the problem, man—not to mention unveiled yourself as incredibly uncool.

As Frank points out, however, “here we come to the basic contradiction of the campaign”: to “protest Wall Street in 2011” was to protest “deregulation and tax-cutting—by a philosophy of liberation as anarchic in its rhetoric as Occupy was in reality.” Want anarchy and anti-hierarchy? That’s just what corporate America wants, too. Nothing, I’m sure, delighted the boardrooms of Goldman Sachs or Chase more than to see, or read about, the characters of Zuccotti Park refusing to allow what Frank calls the “humorless, doctrinaire adults … back in charge” by refusing to produce demands.

Frank’s charge thereby echoes an argument that has been going on in American academia for some time. “Something more insidious than provincialism has come to prominence in the American academy,” the prominent philosopher Martha Nussbaum charged some time ago: “the virtually complete turning from the material side of life, toward a type of verbal and symbolic politics.” Nussbaum was complaining about trends she saw in feminist scholarship; James Miller, a political scientist, described more broadly how many “radical professors distrust the demand for ‘linguistic transparency,’ charging that it cripples one’s ability ‘to think the world more radically.’” Those professors claim, alternately, “that plain talk is politically perfidious—reinforcing, rather than radically challenging, the cultural status quo.” Hence the need for complex, difficult sentences—a stylistic thesis in which the Occupiers, it seems, wholly believe.

Yet what are the consequences of such stylistic choices? I’d suggest that one of them is that certain academic arguments that might have a chance of breaking through to the mainstream, and then making a real difference to actual American lives, are being overlooked in the name of what Frank calls “a gluey swamp of academic talk and pointless antihierarchical posturing.” One such argument is being carefully constructed by the historians Manisha Sinha and Leonard Richards of the University of Massachusetts, in books like Richards’ The Slave Power: The Free North and Southern Domination 1780-1860 and Sinha’s The Counter-revolution of Slavery: Politics and Ideology in Antebellum South Carolina. Such books offer a naturalistic, commonsense explanation for much of the political structure of American life—and thus make it possible to do something about it.

Richards’ book makes clear how “the slaveholders of the South” ran the United States before the Civil War by virtue of anti-majoritarian features built into the Constitution; Sinha’s account demonstrates how those features could have been imported into the Constitution from arrangements already part of the structure of South Carolina’s government. Sinha notes, for instance, how one South Carolinian before the Civil War described the “government of South Carolina” as an “oligarchy” modeled after the “rotten borough system” of England—and placed next to accounts of the writing of the Constitution, Sinha’s detailed description of South Carolina’s government calls into question the prominence of South Carolina’s leaders during the debates in Philadelphia in the summer of 1787.

South Carolinians like the younger and elder Charles Pinckneys and Major Pierce Butler had an overwhelming influence over the writing of the Constitution: as David O. Stewart remarks in his history of that writing, The Summer of 1787, “the [South] Carolinians came to Philadelphia with an appetite for work, and they would exercise an outsized influence.” It’s impossible, of course, to summarize the details of such books, or the story they tell, in a paragraph or even an essay—but the point is that I shouldn’t have to: they are being ignored despite the fact that they could do far more good for far more Americans than a dozen occupations of Zuccotti Park.

Books like these can do so because, as Abraham Lincoln knew, they tell a comprehensible story—and thus provide a means by which to restructure the American government more democratically. That was Lincoln’s technique in his speech of June 16, 1858: “If we could first know where we are, and whither we are tending,” he said, “we could then better judge what to do, and how to do it.” The speech is a model of rhetorical efficiency: it tells the audience—the people—what Lincoln is going to do in the speech; it shows that he will begin at the beginning and proceed to the end; and above all, it shows that he will do so transparently, directly in front of the audience. The speech may be known to you: it is usually called “House Divided.”

Lincoln, undoubtedly, wore a plain Brooks Brothers suit.

Left Behind

Banks and credit companies are, strictly speaking, the direct source of their illusory “income.” But considered more abstractly, it is their bosses who are lending them money. Most households are net debtors, while only the very richest are net creditors. In an overall sense, in other words, the working classes are forever borrowing from their employers. Lending replaces decent wages, masking income disparities even while aggravating them through staggering interest rates.
Kim Phillips-Fein. “Chapters of Eleven”
    The Baffler, No. 11, 1998


Note: Since I began this blog by writing about golf, I originally wrote a short paragraph tying what follows to the FIFA scandal, on the perhaps-tenuous connection that the Clinton Foundation had accepted money from FIFA and Bill had been the chairman of the U.S. bid for the 2022 World Cup. But I think the piece works better without it.

“Why is it that women still get paid less than men for doing the same work?” presidential candidate Hillary Clinton asked recently in, of all places, Michigan. But the more natural question in the Wolverine State might be the one a lot of economists are asking these days: “Why is everyone getting paid less?” Economists like Emmanuel Saez of the University of California, who says that “U.S. income inequality has been steadily increasing since the 1970s, and now has reached levels not seen since 1928.” Or Nobel Prize winner Paul Krugman, who says that even the wages of “highly educated Americans have gone nowhere since the late 1990s.” But while it’s not difficult to imagine that Clinton asks the question she asks in a cynical fashion—in other words, to think that she is a kind of Manchurian candidate for Wall Street—it’s at least possible to think she asks it innocently. All Americans, says scholar Walter Benn Michaels, have been the victims of a “trick” over the last generation: the trick of responding to “economic inequality by insisting on the importance of … identity.” But how was the trick done?

The dominant pedagogy of the American university suggests one way: if it’s true that, as the professors say, reality is a function of the conceptual tools available, then maybe Hillary Clinton cannot see reality because she doesn’t have the necessary tools. As well she might not: in Clinton’s case, one might as well ask why a goldfish can’t see water. Raised in a wealthy Chicago suburb; on to Ivy League colleges; then the governor’s mansion in Little Rock, Arkansas, and the White House; followed by Westchester County, then back to D.C. It’s true, of course, that Clinton wrote a college thesis about Saul Alinsky’s community organizing tactics, so she cannot be wholly unfamiliar with the question of economic inequality. But it’s also easy to see how economics gets obscured in such places.

What’s perhaps stranger, though, is that economics, as a subject, should have become more obscure, not less, since Clinton left New Haven—and even if Clinton had been wholly ignorant of the subject, that wouldn’t explain how she could then become her party’s candidate for president. Yet at about the same time Clinton was at Yale, another young woman with bright academic credentials was living practically just down the road, in Hartford, Connecticut—and the work she did has helped to ensure that, as Michaels says, “for the last 30 years, while the gap between the rich and the poor has grown larger, we’ve been urged to respect people’s identities.” That doesn’t mean, of course, that the story I am going to tell explains everything about why Hillary asked the question she asked in Michigan instead of the one she should have asked—but it is, I think, illustrative: by telling this one story in depth, it becomes possible to understand how what Michaels calls the “trick” was pulled.

“In 1969,” Jane Tompkins tells us in “Sentimental Power: Uncle Tom’s Cabin and the Politics of Literary History,” she “lived in the basement of a house on Forest Street in Hartford, Connecticut, which had belonged to Isabella Beecher Hooker—Harriet Beecher Stowe’s half-sister.” Living where she did sent Tompkins off on an intellectual journey that eventually led to “Sentimental Power”—an essay that took up the question of why, as Randall Fuller observed not long ago in the magazine Humanities, “Uncle Tom’s Cabin was seen by most literary professionals as a cultural embarrassment.” Her conclusion was that Uncle Tom’s Cabin was squelched by a “male-dominated scholarly tradition that controls both the canon of American literature … and the critical perspective that interprets the canon for society.” To Tompkins, Uncle Tom’s Cabin was “repressed” on the basis of “identity”: Stowe’s work was dismissed as “trash”—the Times of London’s verdict when the novel was published—because it was written by a woman.

To make her argument, however, required Tompkins to make several moves that go some way towards explaining why Hillary Clinton asks the question she asks, rather than the one she should ask. Most significant is Tompkins’ argument against the view she ascribes to her opponents: that “sentimental novels written by women in the nineteenth century”—like Uncle Tom’s Cabin—“were responsible for a series of cultural evils whose effects still plague us,” among them the “rationalization of an unjust economic order.” Already, Tompkins is telling her readers that she is going to argue against those critics who used Uncle Tom’s Cabin to discuss the economy; already, we are not far from Hillary Clinton’s question.

Next, Tompkins takes her critical predecessors to task for ignoring the novel’s “enormous popular success”: it was, as Tompkins points out, the first novel to sell “over a million copies.” So part of her argument concerns not only the bigotry but also the snobbishness of her opponents—an argument familiar enough to anyone who listens to right-wing talk radio. The distance from Tompkins’ argument to those who “argue” that quality is guaranteed by popularity, and vice versa—the old “if you’re so smart, why ain’t you rich” line—is about the distance from the last letter in this sentence to its period. So Tompkins deprecates the idea that value can be independent of “success”—the idea, that is, that there can be slippage between an economic system and reality.

Yet perhaps the largest step Tompkins takes on the road to Hillary’s question concerns how she ascribes criticisms of Uncle Tom’s Cabin to sexism, or Stowe’s status as a woman—despite the fact that perhaps the best-known critical text on the novel, James Baldwin’s 1949 essay “Everybody’s Protest Novel,” was not only written by a gay black man, but based its criticism of Stowe’s novel on rules originally applied to a white male author: James Fenimore Cooper, the object of Mark Twain’s scathing 1895 essay, “Fenimore Cooper’s Literary Offenses.” That essay, with which Twain sought to bury Cooper, furnished the critical precepts Baldwin used to attempt to bury Stowe.

Stowe’s work, Baldwin says, is “a very bad novel” for two reasons: first, it is full of “excessive and spurious emotion”; second, the novel “is activated by what might be called a theological terror,” so that “the spirit that breathes in this book … is not different from that spirit of medieval times which sought to exorcise evil by burning witches.” Both reasons derive from principles propounded by Twain in “Fenimore Cooper’s Literary Offenses.”

“Eschew surplusage” is number fourteen of Twain’s rules, so when Baldwin calls Stowe’s writing “excessive,” he is implicitly accusing her of breaking it. Even Tompkins admits that Uncle Tom’s Cabin breaks this rule when she says that Stowe’s novel possesses “a needless proliferation of incident.” Number nine on Twain’s list, meanwhile, is “that the personages of a tale shall confine themselves to possibilities and let miracles alone”—the rule Baldwin invokes when he criticizes Uncle Tom’s Cabin for its “theological terror.” Burning witches, after all, requires a belief in miracles—i.e., the supernatural—and Stowe, who not only famously claimed that “God wrote” her novel but also suffused it with supernatural events, certainly held that belief. So if Baldwin—who, remember, was both black and homosexual—condemned Stowe on the basis of rules originally used against a white male writer, it’s difficult to see how Stowe was being unfairly singled out on the basis of her sex. But that is what Tompkins says.

I take such time on these points because Twain’s rules ultimately go back much further than Twain himself—and it’s these roots that are both Tompkins’ object and, I suspect, the reason why Hillary asks the question she asks instead of the one she should. Twain’s ninth rule, concerning miracles, is more or less a restatement of what philosophers call naturalism: the belief “that reality has no place for ‘supernatural’ or other ‘spooky’ kinds of entity,” according to the Stanford Encyclopedia of Philosophy. And the roots of that idea trace back to the original version of Twain’s fourteenth rule (“Eschew surplusage.”): Thomas Aquinas, in his Summa Theologica, gave one version of it when he wrote that if “a thing can be done adequately by means of one, it is superfluous to do it by means of several.” (In a marvelous economy, in other words, Twain reduced Aquinas’ rule—a version of the principle now known as “Occam’s Razor”—to two words.) So it’s possible to say that Baldwin’s two criticisms of Stowe are actually the same criticism: that “excessive” writing leads to, or perhaps more worrisomely just is, a belief in the supernatural.

It’s this point that Tompkins ultimately wants to address—she calls Uncle Tom’s Cabin “the Summa Theologica of nineteenth-century America’s religion of domesticity,” after all. Notably, Tompkins doesn’t try to defend Stowe against Baldwin on the same grounds that two other critics used to defend Cooper against Twain. In an essay titled “Fenimore Cooper’s Literary Defenses,” Lance Schachterle and Kent Ljungquist argue that Twain doesn’t do justice to Cooper because he doesn’t take into account the different literary climate of Cooper’s time. While “Twain valued economy of style,” they write, “such concision simply was not a characteristic of many early nineteenth-century novelists’ work.” They’re willing to allow, in other words, the merits of Twain’s rules—they’re just arguing that it isn’t fair to apply those rules to writers who could not have been aware of them. Tompkins, however, takes a different tack: she says that in Uncle Tom’s Cabin, “it is the spirit alone that is finally real.” According to Tompkins, the novel is not just unaware of naturalism: Uncle Tom’s Cabin actively rejects it.

To Tompkins, Stowe’s anti-naturalism is somehow a virtue. Stowe’s rejection of naturalism leads her to recommend, Tompkins says, “not specific alterations in the current political and economic arrangements but rather a change of heart … as the necessary precondition for sweeping social change.” To Stowe, attempts to “alter the anti-abolitionist majority in the Senate,” for instance, are absurdities: “Reality, in Stowe’s view, cannot be changed by manipulating the physical environment.” Apparently, this is a point in Stowe’s favor.

Without naturalism and its corollaries—basic intellectual tools—a number of things become difficult to think: first of all, that all people are people. That is, that we are all members of a species that has had more or less the same cognitive abilities for at least the last 100,000 years, which implies that most people’s cognitive abilities aren’t much different from anyone else’s—nor from those of anyone in history. That premise, one might say, is a prerequisite for running a democratic state—as opposed to, say, a monarchy or an aristocracy, in which one person is better than another by blood right. But if naturalism is dead, then the growth of “identity” politics is perhaps easy to understand: without the conceptual category of “human being” available, other categories have to be substituted.

Hillary Clinton must ask for votes on the basis of some commonality between voters large enough to ensure her election—without grouping votes on some basis, after all, how could they be gathered into clumps large enough to make a difference? Assuming that she does, in fact, wish to be elected, it’s enlightening to observe that Clinton is appealing for votes on the basis of the next largest category after “human being”: “woman,” a category comprising 51 percent of the population according to most figures. That alone might explain why Hillary Clinton should ask “Why are women paid less?” rather than “Why is everyone paid less?”

Yet the effects of Tompkins’ argument, as I suspect will be drearily apparent to the reader by now, are readily observable in many more places in today’s world than Hillary Clinton’s campaign. Think of it this way: what else are contemporary phenomena like unpaid internships, “doing it for the exposure,” or just trying to live on a minimum wage or public assistance, but attempts to live without material substance—that is, attempts to live as a “spirit”? Or, for that matter, what is credit card debt, which Kim Phillips-Fein, writing in The Baffler as long ago as 1998, explained as what happened when “people began to borrow to make up for stagnant wages”? These are all matters in which what matters isn’t matter—i.e., the material—but the “spirit.”

In the same way, what else was the “long-time” Occupy Wall Street camper named “Ketchup” doing, when she told Josh Harkinson of Mother Jones that the “‘whole big desire for demands is something people want to use to co-opt us,’” but, as Tompkins would put it, refusing to delineate “specific alterations in the current political and economic arrangements”? That’s why Occupy, as Thomas Frank memorably wrote in his essay “To the Precinct Station,” “seems to have had no intention of doing anything except building ‘communities’ in public spaces and inspiring mankind with its noble refusal to have leaders.” The values described by Tompkins’ essay are, specifically, anti-naturalist: Occupy Wall Street, along with its many, many sympathizers, was an anti-naturalist—a religious—movement.

It may, to be sure, be little wonder that feminists like Tompkins should look to intellectual traditions explicitly opposed to the project of naturalism—most texts written by women have been written by religious women. So have most texts written by most people everywhere: to study a “minority” group virtually requires studying texts written by people who believed in a supernatural being. It’s wholly understandable, then, that anti-naturalism should have become the default mode of people who claim to be on the “left.” But while it’s understandable, it’s no way to, say, raise wages. Whatever Jane Tompkins says about her male literary opponents, Harriet Beecher Stowe didn’t free anybody. Abraham Lincoln—by many accounts a religious skeptic—did.

Which is Hillary Clinton’s model?